Moral Competence in Computational Architectures for Robots

Moral Competence in Computational Architectures for Robots is a Multidisciplinary University Research Initiative (MURI), awarded in 2014 by the Department of Defense (via the Office of Naval Research) to a team of researchers from Tufts University, Brown University, Rensselaer Polytechnic Institute (RPI), Georgetown University, and Yale University. Its purpose is to identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.

Robotics, Humanity, and Ethics

Throughout history, technology has opened up both exciting opportunities and daunting challenges in the effort to meet human needs. In recent history, needs that arise in households, hospitals, and on highways have been met in part by technological advances in automation, digitization, and networking, but those advances also pose serious challenges to human values of autonomy, privacy, and equality. Robotics is one of the liveliest fields in which growing sophistication could meet societal needs, as when robotic devices provide social companionship or perform highly delicate surgery. As computational resources and engineering grow in complexity, subtlety, and range, it is important to anticipate, investigate, and as much as possible demonstrate how robots can best act in accordance with ethical norms in the face of complex, demanding situations.

The intricacy of ethics, whether within the human mind or in society at large, appears to some commentators to render the idea of robots as moral agents inconceivable, or at least inadvisable to pursue. Questions about what actions robots would have the leeway, direction, or authority to carry out have elicited both fears and fantasies. 

This MURI project engages these issues and possibilities by seeking to integrate research, design, and engineering with ongoing questions of social responsibility. The project will not import a particular set of morals, nor will it favor a specific meta-ethical theory; instead, it will design a computational architecture that incorporates processes of affect, reasoning, and social understanding, in an attempt to do justice to the dynamic, multi-layered character of social and moral life.

The applications of this project are not limited to any particular context of human-robot interaction, whether health care, business, education, or the military. The project's empirical and theoretical investigations of moral competence promise, however, to illuminate those settings in which robots might best embody and support ethical action and thereby meet human needs.

This work on human and robot moral competence must do justice to the seriousness of ethics, recognizing that matters of life, death, and suffering call for strong, clear safeguards rather than loose theoretical probabilities or provisional calculations. For robotic design to meet these expectations, it must grow out of the varied insights, methods, and probing dialogues of an interdisciplinary collaboration, which this MURI enables.

Project Description

I. Foundations of Moral Competence

Designing robots to achieve moral competence naturally relies on the many layers and connections of concepts, rules, feelings, and judgments that constitute human morality. This project will therefore map out the crucial concepts and processes that underlie an agent's ability to interact morally with other agents. This will entail detailed examination of how moral norms are represented and operate, what minimal vocabulary is needed for communication on moral matters, and which cognitive processes underlie judgments and decisions about moral action. Accompanying these efforts will be a robust treatment of the affective processes that help generate moral perception, action, and communication. The project will then use these grounded insights to inform thorough logical and mathematical elaborations of the modes of inference critical for moral assessments, including deductive, analogical, and counterfactual operations. In these ways the project will uncover the real-life conditions of moral competence while identifying the logical operations and principles that represent and communicate norms and values effectively.
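
To make the idea of a machine-usable norm representation concrete, here is a minimal Python sketch of one way a norm might pair a deontic operator with an application context and a relative weight, so that the norms applicable to a situation can be retrieved and ranked. The Norm class, its fields, and the two example norms are illustrative assumptions, not the project's actual formalism.

    from dataclasses import dataclass

    # Hypothetical sketch: a norm pairs a deontic operator with an action,
    # the context features that make it applicable, and a relative weight
    # used when applicable norms conflict.
    @dataclass(frozen=True)
    class Norm:
        description: str       # human-readable statement of the norm
        deontic_operator: str  # "obligatory", "forbidden", or "permitted"
        action: str            # the action the norm governs
        context: frozenset     # features a situation must exhibit for the norm to apply
        weight: float          # relative importance when norms conflict

    def applicable_norms(norms, situation_features):
        """Return the norms whose context is satisfied by the current situation."""
        return [n for n in norms if n.context <= situation_features]

    norms = [
        Norm("Do not cause harm to humans", "forbidden", "harm_human",
             frozenset({"human_present"}), weight=1.0),
        Norm("Follow the operator's instructions", "obligatory", "execute_command",
             frozenset({"command_received"}), weight=0.5),
    ]

    # A situation in which both norms apply; sorting by weight lets the
    # harm-avoidance norm take precedence over obedience.
    situation = frozenset({"human_present", "command_received"})
    for norm in sorted(applicable_norms(norms, situation),
                       key=lambda n: n.weight, reverse=True):
        print(norm.deontic_operator, norm.action, norm.weight)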

II. Moral Reasoning and Decision-Making

The project will produce and rely on an unprecedented hierarchy (H) of formal systems for robot reasoning and decision-making in the realm of ethics, including natural-language understanding (U) and generation (G) technology that allows a robot not only to understand human commands and queries about ethics, but also to generate proofs and arguments that justify its ethical reasoning and decision-making to humans. The first and fastest part of H is designed simply to flag ethically charged situations, and is inspired by UIMA, the framework behind the Jeopardy!-winning Watson system. The next level relies on a hybrid form of reasoning that blends deduction and analogy. The remaining levels of H become increasingly deliberative and expressive. These levels will be multi-operator intensional computational logics, into which U can inject information provided by English-speaking humans, and out of which G can draw and present information to these same humans. H, U, and G will in large measure be provided as open-source resources to a world increasingly (and rightly so!) concerned with the ethical status of actions performed by autonomous robots and other autonomous artificial agents.
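
As a rough illustration of how such a hierarchy might be organized in software, the Python sketch below runs a cheap flagging stage over every input and escalates to slower, more deliberative stages only when the cheaper ones cannot settle the matter. The stage functions, trigger terms, and stored precedent are invented for illustration; the actual levels of H are formal logics, not keyword matchers.

    from typing import Optional

    # Hypothetical sketch of a tiered pipeline in the spirit of the hierarchy H:
    # a fast flagging stage runs on everything, and slower, more deliberative
    # stages run only when the cheaper ones cannot settle the matter.

    ETHICALLY_CHARGED_TERMS = {"harm", "injure", "deceive", "steal"}

    def stage_flag(utterance: str) -> bool:
        """Fastest level: merely flags ethically charged input (UIMA-style annotation)."""
        return any(term in utterance.lower() for term in ETHICALLY_CHARGED_TERMS)

    def stage_hybrid(utterance: str) -> Optional[str]:
        """Second level: shallow deduction plus analogy to stored precedents."""
        precedents = {"injure": "refuse: analogous to a prior 'harm a person' case"}
        for trigger, verdict in precedents.items():
            if trigger in utterance.lower():
                return verdict
        return None

    def stage_deliberate(utterance: str) -> str:
        """Deepest level: stands in for full proof search in an expressive logic."""
        return f"deliberation required; generate proof or argument for {utterance!r}"

    def evaluate(utterance: str) -> str:
        if not stage_flag(utterance):
            return "no ethical flag; proceed"
        return stage_hybrid(utterance) or stage_deliberate(utterance)

    print(evaluate("pick up the red block"))   # no ethical flag; proceed
    print(evaluate("injure the intruder"))     # settled by analogy at the second level
    print(evaluate("deceive the customer"))    # escalates to full deliberation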

III. Computational Architecture

This project takes up the challenge of developing a computational architecture that integrates the different perceptual and logical systems a moral agent needs to master real-life, ethically charged situations. For autonomous systems in particular, this requires an architecture that can manage multiple, parallel components of detection, discernment, and abstraction when encountering physical objects in the environment, natural language, subtle gestures of face and hands, and the goals, values, and reasons expressed by other agents. The project will build on the cognitive robotic architecture DIARC, a framework that has shown itself uniquely positioned to provide scalable, adaptive, and robotically compatible means for cognition, action, and communication.
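
The following schematic Python sketch illustrates the kind of parallel-component integration such an architecture must manage: perception components running independently post observations asynchronously, and a central step fuses them into a single situation model before action selection. The queue-based design and component names are simplifying assumptions for illustration, not DIARC's actual mechanisms.

    import queue
    import threading

    # Hypothetical sketch: independent perception components post typed
    # observations to a shared queue, and a central step fuses them into one
    # situation model before action selection.

    observations = queue.Queue()

    def vision_component():
        observations.put(("vision", "human_present"))

    def speech_component():
        observations.put(("speech", "command: hand me the scalpel"))

    def gesture_component():
        observations.put(("gesture", "pointing_at_table"))

    components = (vision_component, speech_component, gesture_component)
    threads = [threading.Thread(target=c) for c in components]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Fuse the asynchronous observations into a single situation model.
    situation = {}
    while not observations.empty():
        source, content = observations.get()
        situation.setdefault(source, []).append(content)

    print(situation)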

IV. Human-Robot Interaction Demonstrations

Throughout its work on moral competence, this project will also make theoretical and empirical advances on the specific challenges and conditions that human-robot interactions pose. This means gaining a more confident understanding of how humans react to robots, in what ways and for what reasons they deny or grant moral agency to robots, and where robots may even exceed ordinary moral competence. Research into those challenges and conditions will guide the development of computational architecture as well as the physical appearance and mobility of robots. The project will culminate in rigorous, demanding demonstrations of robots succeeding at reasoning, reacting, and acting ethically amid shifting conditions, conflicting goals, and competing values. 

MURI-related HRILab Publications

Matthias Scheutz and Thomas Arnold (2016)
Feats without Heroes: Norms, Means, and Ideal Robotic Action
Frontiers in Robotics and AI, 3, 32

Thomas Arnold and Matthias Scheutz (2016)
Against the Moral Turing Test: Accountable Design and the Moral Reasoning of Autonomous Systems
Ethics and Information Technology, 18(2), 103–115

Jason R. Wilson, Thomas Arnold, and Matthias Scheutz (2016)
Relational Enhancement: A Framework for Evaluating and Designing Human-Robot Relationships
Proceedings of the AAAI Workshop on AI, Ethics, and Society

Vasanth Sarathy and Matthias Scheutz (2016)
Cognitive Affordance Representations in Uncertain Logic
Proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning (KR)

Vasanth Sarathy, Jason Wilson, Thomas Arnold, and Matthias Scheutz (2016)
Enabling Basic Normative HRI in a Cognitive Robotic Architecture
Proceedings of the 2nd Workshop on Cognitive Architectures for Social Human-Robot Interaction at the 11th ACM/IEEE Conference on Human-Robot Interaction

Gordon Briggs and Matthias Scheutz (2015)
"Sorry, I Can't Do That": Developing Mechanisms to Appropriately Reject Directives in Human-Robot Interactions
Proceedings of the 2015 AAAI Fall Symposium on AI and HRI

Bertram Malle, Matthias Scheutz, and Joe Austerweil (2015)
Networks of Social and Moral Norms in Human and Robot Agents
Proceedings of the International Conference on Robot Ethics

Matthias Scheutz, Bertram Malle, and Gordon Briggs (2015)
Towards Morally Sensitive Action Selection for Autonomous Social Robots
Proceedings of RO-MAN

Bertram Malle and Matthias Scheutz (2015)
When Will People Regard Robots as Morally Competent Social Partners?
Proceedings of RO-MAN

Jason R. Wilson and Matthias Scheutz (2015)
A Model of Empathy to Shape Trolley Problem Moral Judgements
The Sixth International Conference on Affective Computing and Intelligent Interaction

Bertram F. Malle, Matthias Scheutz, Thomas M. Arnold, John T. Voiklis, and Corey Cusimano (2015)
Sacrifice One for the Good of Many? People Apply Different Moral Norms to Human and Robot Agents
Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Recipient of the HRI-2015 Best Paper Award in New Knowledge

Matthias Scheutz (2014)
The Need for Moral Competency in Autonomous Agent Architectures
Fundamental Issues of Artificial Intelligence

Matthias Scheutz and Bertram F. Malle (2014)
"Think and Do the Right Thing": A Plea for Morally Competent Autonomous Robots
Proceedings of the IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)

Bertram F. Malle and Matthias Scheutz (2014)
Moral Competence in Social Robots
Proceedings of the IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)

Gordon Briggs and Matthias Scheutz (2014)
Modeling Blame to Avoid Positive Face Threats in Natural Language Generation
Proceedings of the Eighth International Natural Language Generation Conference

Gordon Briggs and Matthias Scheutz (2014)
How Robots Can Affect Human Behavior: Investigating the Effects of Robotic Displays of Protest and Distress
International Journal of Social Robotics, 6, 1–13

Gordon Briggs, Bryce Gessell, Matthew Dunlap, and Matthias Scheutz (2014)
Actions Speak Louder than Looks: Does Robot Appearance Affect Human Reactions to Robot Protest and Distress?
Proceedings of the 23rd IEEE Symposium on Robot and Human Interactive Communication (RO-MAN)

Megan Strait and Matthias Scheutz (2014)
Using Functional Near Infrared Spectroscopy to Measure Moral Decision-Making: Effects of Agency, Emotional Value, and Monetary Incentive
Brain-Computer Interfaces, 1, 1–10

Matthias Scheutz (2012)
The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?
IEEE Transactions on Affective Computing, 3, 424–433

Paul Schermerhorn and Matthias Scheutz (2011)
Disentangling the Effects of Robot Affect, Embodiment, and Autonomy on Human Team Members in a Mixed-Initiative Task
Proceedings of the 2011 International Conference on Advances in Computer-Human Interactions, 236–241

Other MURI Team Sites
Moral Reasoning & Decision-Making: ONR: MURI/Moral Dilemmas | Rensselaer Artificial Intelligence and Reasoning Laboratory

Recent MURI Articles
Why Robots Need to be Able to Say 'No' | TIME
Integrating robot ethics and machine morality: the study and design of moral competence in robots - Online First - Springer

Recent MURI Work in the Press
NHK (Japan) Documentary on Robotics and Artificial Intelligence
World Science Festival: Robot dilemmas and driverless cars that could kill you
[Video] Quartz - This robot is learning how to say "no" to humans | Facebook
Researchers Aim to Teach Robots How and When to Say 'No' to Humans - NBC News
Researchers Teaching Robots How to Best Reject Orders from Humans - IEEE Spectrum
This Robot Has Learned How to Say 'No' to Human Demands | Inverse
"The Robot Must Be Able to Correct Humans" ("Der Roboter muss Menschen korrigieren können") | Forschung Spezial - Wissenschaft
Expert Calls for "Moral Compass" for Robots ("Experte fordert 'moralischen Kompass' für Roboter") | Digital
Moral Compass Sought for Robots ("Moralischer Kompass für Roboter gesucht")
"Whether Technology Makes Us Happy Is Up to Us" ("Ob uns Technologie glücklich macht, liegt an uns")
Machines Need Morals Too ("Auch Maschinen brauchen Moral")
Forum Alpbach: Autonomous Robots Need a Moral Compass ("Forum Alpbach: Autonome Roboter brauchen moralischen Kompass")
Robots pass 'wise-men puzzle' to show a degree of self-awareness

MURI Grant Announcement in the Press

Teaching Robots Right from Wrong | Tufts Now
Can robots learn right from wrong? | News from Brown
Brown Alumni Magazine - Right from Wrong
Rensselaer Professors Part of Team Funded To Teach Robots Right From Wrong | News & Events
The Military Wants to Teach Robots Right From Wrong - The Atlantic
U.S. military wants to teach robots how to make moral and ethical decisions | Impact Lab
US military begins research into moral, ethical robots, to stave off Skynet-like apocalypse | ExtremeTech
Can we delegate moral decisions to robots? -- GCN
Can robots be trusted to know right from wrong? | KurzweilAI
Video of Stephen Colbert Talking Morality Lessons for Robots | BostInno
US Navy funds morality lessons for robots (Wired UK)
Researchers Want to Teach Robots Right From Wrong | Re/code
Programming the moral robot | ROUGH TYPE
Moral Robots for the Pentagon? Let's Work on Pentagon Morality First | Truth-out
U.S. Navy trying to build robots with morals. Sense of right or wrong for $7.5 million?: Tech Times
Scientists try to teach robots morality | Gizmag
The Military Wants to Build 'Moral' Robots -- And Yes, That's Scary | VICE News
Can Killer Robots Learn to Follow the Rules of War? | Innovation | Smithsonian
Robots Could Get Their Own Sense of Morality -- TechEmergence
Now The Military Is Going To Build Robots That Have Morals -- Defense One

Related Articles

Robot ethics: Morals and the machine | The Economist
The Moral Hazards and Legal Conundrums of Our Robot-Filled Future | Science | WIRED
Robotics: the rise of the (more ethical) machines
Teaching Robots To Behave Ethically
Death by Robot
We are building superhuman robots. Will they be heroes, or villains? - The Washington Post
When a moral machine is better than a flawed human being
How to Raise a Moral Robot
What Should A Robot Do? Designing Robots That Know Right From Wrong
Already Anticipating 'Terminator' Ethics
New research will help robots know their limits
The Future of Robot Caregivers
An ethical dilemma: When robot cars must kill, who should pick the victim? | Robohub
You Should Have a Say in Your Robot Car's Code of Ethics | Opinion | WIRED
Automata: a believable robot future | KurzweilAI
Why it is not possible to regulate robots | Technology | The Guardian
RoboLaw: Why and how to regulate robotics | Robohub
Researchers Scrambling To Build Ebola-Fighting Robots - Slashdot
BBC - Future - Is it OK to torture or murder a robot?
NASA: Robots are our friends | Computerworld
'Ethical AI' could have thwarted deadly crash - Times Union
Chappie robot ethics: The film raises interesting questions about morality.

Demo Video
Simple moral reasoning

Last Modified: April 8, 2016