Moral Competence in Computational Architectures for Robots

Moral Competence in Computational Architectures for Robots is a Multidisciplinary University Research Initiative (MURI), awarded in 2014 by the Department of Defense (via the Office of Naval Research) to a team of researchers from Tufts University, Brown University, RPI, Georgetown University, and Yale University. Its purpose is to identify the logical, cognitive, and social underpinnings of human moral competence, to model those principles of competence in human-robot interactions, and to demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.

Robotics, Humanity, and Ethics

Throughout history, technology has opened up exciting opportunities and daunting challenges for attempts to meet human needs. In recent history, needs that arise in households, hospitals, and on highways are met in part by technological advances in automation, digitization, and networking, but they also pose serious challenges to human values of autonomy, privacy, and equality. Robotics is one of the liveliest fields where growing sophistication could meet societal needs, such as when robotic devices provide social companionship or perform highly delicate surgery. As computational resources and engineering grow in complexity, subtlety, and range, it is important to anticipate, investigate, and as much as possible demonstrate how robots can best act in accordance with ethical norms in the face of complex, demanding situations.

The intricacy of ethics, whether within the human mind or in society at large, appears to some commentators to render the idea of robots as moral agents inconceivable, or at least inadvisable to pursue. Questions about what actions robots would have the leeway, direction, or authority to carry out have elicited both fears and fantasies. 

This MURI project engages these issues and possibilities by seeking to integrate research, design, and engineering with ongoing questions of social responsibility. The project will not import a particular set of morals, nor will it favor a specific meta-ethical theory; instead, it will design a computational architecture that incorporates processes of affect, reasoning, and social understanding, in an attempt to do justice to the dynamic, multi-layered character of social and moral life.

The applications of this project are not limited to any particular context of human-robot interaction, whether health care, business, education, or military. The empirical and theoretical investigations of moral competence promise, however, to illuminate those settings in life where robots might best embody and support ethical action and thereby meet human needs.

This work on human and robot moral competence must do justice to the seriousness of ethics, recognizing that matters of life, death, and suffering call for strong, clear safeguards rather than loose theoretical probabilities or provisional calculations. To meet these expectations, robotic design must grow out of the varied insights, methods, and probing dialogues of an interdisciplinary collaboration, which this MURI enables.

Project Description

I. Foundations of Moral Competence

Designing robots to achieve moral competence naturally relies on the many layers and connections of concepts, rules, feelings, and judgments that constitute human morality. This project will therefore map out crucial concepts and processes that underlie an agent's ability to interact morally with other agents. This will entail detailed examination of how moral norms are represented and operate, what minimal vocabulary is needed for communication on moral matters, and which cognitive processes underlie judgments and decisions about moral action. Accompanying these efforts will be a robust treatment of affective processes that help generate moral perception, action, and communication. The project will then use these grounded insights to inform thorough logical and mathematical elaborations of modes of inference critical for moral assessments, including deductive, analogical, and counterfactual operations. In these ways the project will uncover the real-life conditions of moral competence while identifying the logical operations and principles that represent and communicate norms and values effectively.
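The questions above about how norms are represented can be made concrete with a small sketch. The Python below is purely illustrative: the `Norm` and `Deontic` names, and the default of weak permission when no norm applies, are assumptions of this sketch, not part of the project's actual formalism. It shows one minimal way a context-sensitive norm might be represented and evaluated.

```python
from dataclasses import dataclass
from enum import Enum

class Deontic(Enum):
    """Deontic modality of a norm."""
    OBLIGATORY = "obligatory"
    FORBIDDEN = "forbidden"
    PERMITTED = "permitted"

@dataclass(frozen=True)
class Norm:
    """A context-sensitive behavioral norm."""
    context: str        # the situation in which the norm applies
    modality: Deontic   # the deontic force of the norm
    action: str         # the action the norm governs

def evaluate(norms, context, action):
    """Return the deontic status of an action in a context.

    Defaults to PERMITTED when no norm explicitly applies,
    mirroring the 'weak permission' convention in deontic logic.
    """
    for n in norms:
        if n.context == context and n.action == action:
            return n.modality
    return Deontic.PERMITTED

norms = [
    Norm("library", Deontic.FORBIDDEN, "speak_loudly"),
    Norm("emergency", Deontic.OBLIGATORY, "speak_loudly"),
]
# The same action carries opposite deontic force in different contexts.
assert evaluate(norms, "library", "speak_loudly") is Deontic.FORBIDDEN
assert evaluate(norms, "emergency", "speak_loudly") is Deontic.OBLIGATORY
```

Even this toy version shows why context must be part of the representation: a single action-level rule cannot capture how the same behavior flips between forbidden and obligatory across situations.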

II. Moral Reasoning and Decision-Making

The project will produce and rely on an unprecedented hierarchy (H) of formal systems for robot reasoning and decision-making in the realm of ethics, including natural-language understanding (U) and generation (G) technology that allows a robot not only to understand human commands and queries about ethics, but also to generate proofs and arguments that justify its ethical reasoning and decision-making to humans. The first and fastest part of H is designed simply to flag ethically charged situations, and is inspired by UIMA, of Jeopardy!-winning Watson fame. The next level relies on a hybrid form of reasoning that blends deduction and analogy. The remaining levels of H become increasingly deliberative and expressive. These levels will be multi-operator intensional computational logics, into which U can inject information provided by English-speaking humans, and out of which G can draw and present information to these same humans. H, U, and G will in large measure be provided as open-source resources to a world increasingly (and rightly so!) concerned with the ethical status of actions performed by autonomous robots and other autonomous artificial agents.
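The tiered design of H can be illustrated with a toy pipeline. The sketch below is hypothetical: the `fast_flag`, `deliberate`, and `process` functions are stand-ins invented for this example, not the project's implementation. It captures only the routing idea, namely that a cheap first level flags ethically charged commands, and only flagged commands pay the cost of the slower, more expressive deliberative levels.

```python
def fast_flag(command):
    """Level 0: a cheap keyword filter that flags ethically
    charged requests for deeper scrutiny (stand-in for the
    fast UIMA-style flagging level)."""
    charged = {"harm", "weapon", "deceive", "patient"}
    return any(word in charged for word in command.lower().split())

def deliberate(command):
    """Deeper level: a stand-in for slower, logic-based ethical
    reasoning. This toy version refuses commands mentioning 'harm'
    and returns a human-readable justification."""
    if "harm" in command.lower().split():
        return ("reject", "The requested action would cause harm.")
    return ("accept", "No ethical objection found on deliberation.")

def process(command):
    """Route a command through the hierarchy: commands that the
    fast level does not flag are executed directly; flagged
    commands are passed to the deliberative level."""
    if not fast_flag(command):
        return ("accept", "Not ethically charged; executed directly.")
    return deliberate(command)

assert process("pick up the red block")[0] == "accept"
assert process("harm the intruder")[0] == "reject"
```

The design choice mirrored here is latency layering: most commands never reach the expensive levels, so the system stays responsive while still routing hard cases to deliberation that can produce a justification, which is what the generation component G would then express in natural language.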

III. Computational Architecture

This project takes up the challenge of developing a computational architecture that integrates the different perceptual and logical systems a moral agent needs to master real-life, ethically charged situations. For autonomous systems in particular, this requires an architecture that can manage multiple, parallel components of detection, discernment, and abstraction when encountering physical objects in the environment, natural language, subtle gestures of face and hands, and expressed goals, values, and reasons on the part of other agents. The project will use the cognitive robotic DIARC architecture as its framework, one that has shown itself uniquely positioned to provide scalable, adaptive, and robotically compatible means for cognition, action, and communication.
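A minimal sketch of the parallel-component idea, assuming a simple shared blackboard: the component names (`vision`, `speech`, `gesture`) and the `queue`-based blackboard are illustrative conveniences for this example, not DIARC's actual mechanism.

```python
import queue
import threading

def vision(board):
    # Perceptual component: reports a detected physical object.
    board.put(("vision", "red_block_on_table"))

def speech(board):
    # Language component: reports a parsed utterance.
    board.put(("speech", "bring me the red block"))

def gesture(board):
    # Gesture component: reports an observed pointing gesture.
    board.put(("gesture", "pointing_at_table"))

# Components run in parallel and write to a shared, thread-safe board.
board = queue.Queue()
threads = [threading.Thread(target=c, args=(board,))
           for c in (vision, speech, gesture)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# An integrator drains the board and fuses percepts by channel,
# giving downstream reasoning a single multimodal picture.
percepts = {}
while not board.empty():
    channel, content = board.get()
    percepts[channel] = content

assert set(percepts) == {"vision", "speech", "gesture"}
```

The point of the sketch is the decoupling: each perceptual channel runs independently, and integration happens at a shared data structure rather than through direct calls between components, which is what lets such an architecture scale to many detectors running at once.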

IV. Human-Robot Interaction Demonstrations

Throughout its work on moral competence, this project will also make theoretical and empirical advances on the specific challenges and conditions that human-robot interactions pose. This means gaining a more confident understanding of how humans react to robots, in what ways and for what reasons they deny or grant moral agency to robots, and where robots may even exceed ordinary moral competence. Research into those challenges and conditions will guide the development of computational architecture as well as the physical appearance and mobility of robots. The project will culminate in rigorous, demanding demonstrations of robots succeeding at reasoning, reacting, and acting ethically amid shifting conditions, conflicting goals, and competing values. 

MURI-related HRILab Publications

daniel kasenberg and vasanth sarathy and thomas arnold and matthias scheutz and tom williams (2018)
quasi-dilemmas for artificial moral agents
international conference on robot ethics and standards
bibtex citation

daniel kasenberg and matthias scheutz (2018)
norm conflict resolution in stochastic domains
proceedings of the thirty-second aaai conference on artificial intelligence
bibtex citation

bertram malle and stuti thapa magar and matthias scheutz (2017)
ai in the sky
international conference on robot ethics and safety standards
bibtex citation

lisa fan and matthias scheutz and monika lohani and marissa mccoy and charlene stokes (2017)
do we need emotionally intelligent artificial agents? first results of human perceptions of emotional intelligence in humans compared to robots
proceedings of the seventeenth international conference on intelligent virtual agents
bibtex citation

vasanth sarathy and matthias scheutz and bertram malle (2017)
learning behavioral norms in uncertain and changing contexts
proceedings of the 2017 8th ieee international conference on cognitive infocommunications (coginfocom)
bibtex citation

matthias scheutz and scott deloach and julie adams (2017)
a framework for developing and using shared mental models in human-agent teams
journal of cognitive engineering and decision making, 11, 3, 203--224
bibtex citation

thomas arnold and matthias scheutz (2017)
the tactile ethics of soft robotics: designing wisely for human--robot interaction
soft robotics, 4, 2, 81--87
bibtex citation

vasanth sarathy and matthias scheutz and joseph austerweil and yoed kenett and mowafak allaham and bertram malle (2017)
mental representations and computational modeling of context-specific human norm systems
proceedings of cognitive science
bibtex citation

kartik talamadupula and gordon briggs and matthias scheutz and subbarao kambhampati (2017)
architectural mechanisms for handling human instructions for open-world mixed-initiative team tasks and goals
advances in cognitive systems, 5
bibtex citation

gordon briggs and tom williams and matthias scheutz (2017)
enabling robots to understand indirect speech acts in task-based interactions
journal of human-robot interaction, 6, 1, 64--94
bibtex citation

matthias scheutz and evan krause and brad oosterveld and tyler frasca and robert platt (2017)
spoken instruction-based one-shot object and action learning in a cognitive robotic architecture
proceedings of the 16th international conference on autonomous agents and multiagent systems
recipient of the aamas 2017 best paper award
bibtex citation

thomas arnold and matthias scheutz (2017)
beyond moral dilemmas: exploring the ethical landscape in hri
proceedings of the 2017 acm/ieee international conference on human-robot interaction, 445--452
bibtex citation

thomas arnold and daniel kasenberg and matthias scheutz (2017)
value alignment or misalignment -- what will keep systems accountable?
aaai workshop on ai, ethics, and society
bibtex citation

bertram f. malle and matthias scheutz and jodi forlizzi and john t. voiklis (2016)
which robot am i thinking about? the impact of action and appearance on people's evaluations of a moral robot
proceedings of 11th acm/ieee international conference on human-robot interaction
bibtex citation

jason r. wilson and nah young lee and annie saechao and matthias scheutz (2016)
autonomy and dignity: principles in designing effective social robots to assist in the care of older adults
workshop: using social robots to improve the quality of life in the elderly
bibtex citation

jason r. wilson and matthias scheutz and gordon briggs (2016)
reflections on the design challenges prompted by affect-aware socially assistive robots
emot. personal. pers. syst.
bibtex citation

matthias scheutz and thomas arnold (2016)
feats without heroes: norms, means, and ideal robotic action
frontiers in robotics and ai, 3, 32
bibtex citation

thomas arnold and matthias scheutz (2016)
against the moral turing test: accountable design and the moral reasoning of autonomous systems
ethics and information technology, 18, 2, 103--115
bibtex citation

jason r. wilson and thomas arnold and matthias scheutz (2016)
relational enhancement: a framework for evaluating and designing human-robot relationships
proceedings of the aaai workshop on ai, ethics, and society
bibtex citation

vasanth sarathy and matthias scheutz (2016)
cognitive affordance representations in uncertain logic
proceedings of the 15th international conference on principles of knowledge representation and reasoning (kr)
bibtex citation

vasanth sarathy and jason wilson and thomas arnold and matthias scheutz (2016)
enabling basic normative hri in a cognitive robotic architecture
proceedings of the 2nd workshop on cognitive architectures for social human-robot interaction at the 11th acm/ieee conference on human-robot interaction
bibtex citation

gordon briggs and matthias scheutz (2015)
"sorry, i can't do that:" developing mechanisms to appropriately reject directives in human-robot interactions
proceedings of the 2015 aaai fall symposium on ai and hri
bibtex citation

bertram malle and matthias scheutz and joe austerweil (2015)
networks of social and moral norms in human and robot agents
proceedings of international conference on robot ethics
bibtex citation

matthias scheutz and bertram malle and gordon briggs (2015)
towards morally sensitive action selection for autonomous social robots
proceedings of ro-man
bibtex citation

bertram malle and matthias scheutz (2015)
when will people regard robots as morally competent social partners?
proceedings of ro-man
bibtex citation

jason r. wilson and matthias scheutz (2015)
a model of empathy to shape trolley problem moral judgements
the sixth international conference on affective computing and intelligent interaction
bibtex citation

bertram f. malle and matthias scheutz and thomas m. arnold and john t. voiklis and corey cusimano (2015)
sacrifice one for the good of many? people apply different moral norms to human and robot agents
proceedings of 10th acm/ieee international conference on human-robot interaction
recipient of the hri-2015 best paper award in new knowledge
bibtex citation

matthias scheutz (2014)
the need for moral competency in autonomous agent architectures
fundamental issues of artificial intelligence
bibtex citation

matthias scheutz and bertram f malle (2014)
"think and do the right thing" - a plea for morally competent autonomous robots
proceedings of ieee international symposium on ethics in engineering, science, and technology (ethics)
bibtex citation

bertram f malle and matthias scheutz (2014)
moral competence in social robots
proceedings of ieee international symposium on ethics in engineering, science, and technology (ethics)
bibtex citation

gordon briggs and matthias scheutz (2014)
modeling blame to avoid positive face threats in natural language generation
proceedings of the eighth international natural language generation conference
bibtex citation

gordon briggs and matthias scheutz (2014)
how robots can affect human behavior: investigating the effects of robotic displays of protest and distress
international journal of social robotics, 6, 1--13
bibtex citation

gordon briggs and bryce gessell and matthew dunlap and matthias scheutz (2014)
actions speak louder than looks: does robot appearance affect human reactions to robot protest and distress?
proceedings of 23rd ieee symposium on robot and human interactive communication (ro-man)
bibtex citation

megan strait and matthias scheutz (2014)
using functional near infrared spectroscopy to measure moral decision-making: effects of agency, emotional value, and monetary incentive
brain-computer interfaces, 1--10, 1
bibtex citation

matthias scheutz (2012)
the affect dilemma for artificial agents: should we develop affective artificial agents?
ieee transactions on affective computing, 3, 424--433
bibtex citation

paul schermerhorn and matthias scheutz (2011)
disentangling the effects of robot affect, embodiment, and autonomy on human team members in a mixed-initiative task
proceedings of the 2011 international conference on advances in computer-human interactions, 236--241
bibtex citation

Other MURI Team Sites
Moral Reasoning & Decision-Making: ONR: MURI/Moral Dilemmas | Rensselaer Artificial Intelligence and Reasoning Laboratory

Brown University Humanity Centered Robotics Initiative - The next generation of human-machine technology

Recent MURI Articles and Work Discussed in the Press
Companion Robots Are Here. Just Don't Fall in Love With Them | WIRED
Designing soft robots: Ethics-based guidelines for human-robot interactions
Teaching robots right from wrong | Science News for Students
Teaching robots how to trust | TechCrunch
Robots Learn to Say "No" to Humans [Demo Included] | ColdFusion - YouTube
Tufts University researchers programmed two robots to look out for one another's safety - Quartz
Why Robots Need to be Able to Say 'No' | TIME
Integrating robot ethics and machine morality: the study and design of moral competence in robots

NHK (Japan) Documentary on Robotics and Artificial Intelligence
World Science Festival: Robot dilemmas and driverless cars that could kill you
Quartz - This robot is learning how to say "no" to humans | Facebook
Researchers Aim to Teach Robots How and When to Say 'No' to Humans - NBC News
Researchers Teaching Robots How to Best Reject Orders from Humans - IEEE Spectrum
This Robot Has Learned How to Say 'No' to Human Demands | Inverse
"The robot must be able to correct humans" - Forschung Spezial -
Expert calls for a "moral compass" for robots - Help in Conflict - Digital -
Moral compass sought for robots -
"Whether technology makes us happy is up to us" -
Machines, too, need morals -
Forum Alpbach: Autonomous robots need a moral compass |
Robots pass 'wise-men puzzle' to show a degree of self-awareness

MURI Grant Announcements in the Press
Teaching Robots Right from Wrong | Tufts Now
Can robots learn right from wrong? | News from Brown
Brown Alumni Magazine - Right from Wrong
Rensselaer Professors Part of Team Funded To Teach Robots Right From Wrong | News & Events
The Military Wants to Teach Robots Right From Wrong - The Atlantic
U.S. military wants to teach robots how to make moral and ethical decisions | Impact Lab
US military begins research into moral, ethical robots, to stave off Skynet-like apocalypse | ExtremeTech
Can we delegate moral decisions to robots? -- GCN
Can robots be trusted to know right from wrong? | KurzweilAI
Video of Stephen Colbert Talking Morality Lessons for Robots | BostInno
US Navy funds morality lessons for robots (Wired UK)
Researchers Want to Teach Robots Right From Wrong | Re/code
Programming the moral robot | ROUGH TYPE
Moral Robots for the Pentagon? Let's Work on Pentagon Morality First | Truth-out
U.S. Navy trying to build robots with morals. Sense of right or wrong for $7.5 million?: Tech Times
Scientists try to teach robots morality | Gizmag
The Military Wants to Build 'Moral' Robots -- And Yes, That's Scary | VICE News
Can Killer Robots Learn to Follow the Rules of War? | Innovation | Smithsonian
Robots Could Get their Own Sense of Morality--TechEmergence
Now The Military Is Going To Build Robots That Have Morals -- Defense One

Demo Videos
Teaching robots to trust - YouTube

Simple moral reasoning
Two Bots, One Brain: Component Sharing in Cognitive Robotic Architectures - YouTube

Related Articles
Do We Have Moral Obligations to Robots? | JSTOR Daily
Robot ethics: Morals and the machine | The Economist
The Moral Hazards and Legal Conundrums of Our Robot-Filled Future | Science | WIRED
Robotics: the rise of the (more ethical) machines -
Teaching Robots To Behave Ethically
Death by Robot -
We are building superhuman robots. Will they be heroes, or villains? - The Washington Post
When a moral machine is better than a flawed human being -

How to Raise a Moral Robot
What Should A Robot Do? Designing Robots That Know Right From Wrong
Already Anticipating 'Terminator' Ethics -
New research will help robots know their limits
The Future of Robot Caregivers -
An ethical dilemma: When robot cars must kill, who should pick the victim? | Robohub
You Should Have a Say in Your Robot Car's Code of Ethics | Opinion | WIRED
Automata: a believable robot future | KurzweilAI

Why it is not possible to regulate robots | Technology | The Guardian
RoboLaw: Why and how to regulate robotics | Robohub
Researchers Scrambling To Build Ebola-Fighting Robots - Slashdot

BBC - Future - Is it OK to torture or murder a robot?
NASA: Robots are our friends | Computerworld
'Ethical AI' could have thwarted deadly crash - Times Union
Chappie robot ethics: The film raises interesting questions about morality.

Last Modified: September 6, 2018