Moral Competence in Computational Architectures for Robots is a Multidisciplinary University Research Initiative (MURI), awarded in 2014 by the Department of Defense (via the Office of Naval Research) to a team of researchers from Tufts University, Brown University, RPI, Georgetown University, and Yale University. Its purpose is to identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.
For the full summary, see the Summary page.
Designing robots to achieve moral competence naturally relies on the many layers and connections of concepts, rules, feelings, and judgments that constitute human morality. This project will therefore map out crucial concepts and processes that underlie an agent's ability to interact morally with other agents. This will entail detailed examination of how moral norms are represented and operate, what minimal vocabulary is needed for communication on moral matters, and which cognitive processes underlie judgments and decisions about moral action. Accompanying these efforts will be a robust treatment of affective processes that help generate moral perception, action, and communication. The project then will use these grounded insights to inform thorough logical and mathematical elaborations of modes of inference critical for moral assessments, including deductive, analogical, and counterfactual operations. In these ways the project uncovers the real-life conditions of moral competence while identifying the logical operations and principles that represent and communicate norms and values effectively.
The project will produce and rely on an unprecedented hierarchy (H) of formal systems for robot reasoning and decision-making in the realm of ethics, including natural-language understanding (U) and generation (G) technology that not only allows a robot to understand human commands and queries about ethics, but also to generate proofs and arguments that justify its ethical reasoning and decision-making to humans. The first and fastest part of H is designed to simply flag ethically charged situations, and is inspired by UIMA, of Jeopardy!-winning Watson fame. The next level relies on a hybrid form of reasoning that blends deduction and analogy. The remaining levels of H become increasingly deliberative and expressive. These levels will be multi-operator intensional computational logics, into which U can inject information provided by English-speaking humans, and out of which G can draw and present information to these same humans. H, U, and G will in large measure be provided as open-source resources to a world increasingly (and rightly so!) concerned with the ethical status of actions performed by autonomous robots and other autonomous artificial agents.
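The tiered escalation described above can be illustrated with a minimal sketch. This is hypothetical code, not the project's actual H, U, or G implementation: the keyword-based flagger, the precedent dictionary, and all function names are illustrative stand-ins for the UIMA-style annotator, the deduction/analogy hybrid, and the slower deliberative logics.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Verdict:
    permissible: bool
    justification: str  # the kind of content G would render for a human

def fast_flag(action: str) -> bool:
    # Level 1: a cheap annotator that only flags ethically charged
    # requests; a single keyword cue stands in for UIMA-style analysis.
    return "harm" in action

def hybrid_reason(action: str, precedents: Dict[str, bool]) -> Optional[Verdict]:
    # Level 2: reason by analogy to stored precedents; return None
    # when no sufficiently similar case is known.
    for case, permitted in precedents.items():
        if case in action:
            return Verdict(permitted, f"by analogy with precedent '{case}'")
    return None

def deliberate(action: str) -> Verdict:
    # Level 3+: stand-in for the slower, expressive deliberative logics;
    # here it simply refuses when no justification can be constructed.
    return Verdict(False, "no justification found; defaulting to refusal")

def evaluate(action: str, precedents: Dict[str, bool]) -> Verdict:
    # Escalate through the hierarchy only as far as needed.
    if not fast_flag(action):
        return Verdict(True, "no ethical charge detected")
    return hybrid_reason(action, precedents) or deliberate(action)
```

For example, `evaluate("deliver supplies", {})` passes the fast flagger untouched, while `evaluate("harm a bystander", {"bystander": False})` is caught at level 1 and resolved by analogy at level 2.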
This project takes up the challenge of developing a computational architecture that integrates the different perceptual and logical systems a moral agent needs to master real-life, ethically charged situations. For autonomous systems in particular, this requires an architecture that can manage multiple and parallel components of detection, discernment, and abstraction when encountering physical objects in the environment, natural language, subtle gestures of face and hands, and expressed goals, values, and reasons on the part of other agents. The project will use the DIARC cognitive robotic architecture, a framework that has shown itself uniquely positioned to provide scalable, adaptive, and robotically compatible means for cognition, action, and communication.
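As a rough illustration of the kind of parallel perceptual processing such an architecture must manage (this is not DIARC's actual API; the component names and toy detectors are invented for the example), the sketch below runs several perceptual components concurrently and merges their percepts into one shared context for downstream reasoning:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List

# Hypothetical stand-ins for perceptual components; each maps raw
# input to labeled percepts independently of the others.
def object_detector(inp: str) -> List[str]:
    # Toy vision: treat capitalized words as detected objects.
    return [f"object:{w}" for w in inp.split() if w.istitle()]

def speech_parser(inp: str) -> List[str]:
    # Toy language: record the utterance if it carries a query/exclamation.
    return [f"utterance:{inp}"] if "?" in inp or "!" in inp else []

def gesture_reader(inp: str) -> List[str]:
    # Toy gesture recognition keyed on a description of the scene.
    return ["gesture:pointing"] if "points" in inp else []

COMPONENTS: Dict[str, Callable[[str], List[str]]] = {
    "vision": object_detector,
    "language": speech_parser,
    "gesture": gesture_reader,
}

def perceive(inp: str) -> Dict[str, List[str]]:
    # Run all components concurrently and merge their percepts into a
    # single shared context the reasoning layers can consume.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, inp) for name, fn in COMPONENTS.items()}
        return {name: f.result() for name, f in futures.items()}
```

The design point is the merge step: no single component decides what the situation is; each contributes percepts to a shared context, which is where goals, values, and reasons expressed by other agents would also be registered.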
Throughout its work on moral competence, this project will also make theoretical and empirical advances on the specific challenges and conditions that human-robot interactions pose. This means gaining a more confident understanding of how humans react to robots, in what ways and for what reasons they deny or grant moral agency to robots, and where robots may even exceed ordinary moral competence. Research into those challenges and conditions will guide the development of computational architecture as well as the physical appearance and mobility of robots. The project will culminate in rigorous, demanding demonstrations of robots succeeding at reasoning, reacting, and acting ethically amid shifting conditions, conflicting goals, and competing values.
Bertram Malle and Matthias Scheutz (2014) Moral competence in social robots. Proceedings of the IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)
Bertram Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano (2015) Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
See more project publications on the Publications page
Thomas Arnold and Matthias Scheutz (2018) HRI ethics and type-token ambiguity: What kind of robotic identity is most responsible? Ethics and Information Technology
Bertram Malle, Stuti Thapa Magar, and Matthias Scheutz (2017) AI in the sky. International Conference on Robot Ethics and Safety Standards
Jason Wilson, Nah Young Lee, Annie Saechao, and Matthias Scheutz (2016) Autonomy and dignity: Principles in designing effective social robots to assist in the care of older adults. Workshop: Using Social Robots to Improve the Quality of Life in the Elderly
Thomas Arnold and Matthias Scheutz (2016) Against the moral Turing test: Accountable design and the moral reasoning of autonomous systems. Ethics and Information Technology, 18, 2, 103--115
Bertram Malle and Matthias Scheutz (2015) When will people regard robots as normally competent social partners? Proceedings of RO-MAN
Kristen E. Clark (2015) How to build a moral robot. CUNY Academic Works
Matthias Scheutz (2014) The need for moral competency in autonomous agent architectures. Fundamental Issues of Artificial Intelligence
Matthias Scheutz and Bertram Malle (2014) "Think and do the right thing" – A plea for morally competent autonomous robots. Proceedings of the IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)
Bertram Malle and Matthias Scheutz (2014) Moral competence in social robots. Proceedings of the IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)
Matthias Scheutz (2012) The affect dilemma for artificial agents: Should we develop affective artificial agents? IEEE Transactions on Affective Computing, 3, 424--433
Vasanth Sarathy (2019) Learning context-sensitive norms under uncertainty. Proceedings of the 2nd AAAI/ACM Workshop on Artificial Intelligence, Ethics, and Society (AIES-19)
Vasanth Sarathy, Bradley Oosterveld, Evan Krause, and Matthias Scheutz (2018) Learning cognitive affordances for objects from natural language instruction. Proceedings of the Sixth Annual Conference on Advances in Cognitive Systems
Daniel Kasenberg, Vasanth Sarathy, Thomas Arnold, Matthias Scheutz, and Tom Williams (2018) Quasi-dilemmas for artificial moral agents. International Conference on Robot Ethics and Standards
Daniel Kasenberg and Matthias Scheutz (2018) Norm conflict resolution in stochastic domains. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence
Vasanth Sarathy, Matthias Scheutz, and Bertram Malle (2017) Learning behavioral norms in uncertain and changing contexts. Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)
Matthias Scheutz, Scott DeLoach, and Julie Adams (2017) A framework for developing and using shared mental models in human-agent teams. Journal of Cognitive Engineering and Decision Making, 11, 3, 203--224
Gordon Briggs, Tom Williams, and Matthias Scheutz (2017) Enabling robots to understand indirect speech acts in task-based interactions. Journal of Human-Robot Interaction, 6, 1, 644
Thomas Arnold, Daniel Kasenberg, and Matthias Scheutz (2017) Value alignment or misalignment -- what will keep systems accountable? AAAI Workshop on AI, Ethics, and Society
Matthias Scheutz and Thomas Arnold (2016) Feats without heroes: Norms, means, and ideal robotic action. Frontiers in Robotics and AI, 3, 32
Jason Wilson, Thomas Arnold, and Matthias Scheutz (2016) Relational enhancement: A framework for evaluating and designing human-robot relationships. Proceedings of the AAAI Workshop on AI, Ethics, and Society
Gordon Briggs and Matthias Scheutz (2015) "Sorry, I can't do that": Developing mechanisms to appropriately reject directives in human-robot interactions. Proceedings of the 2015 AAAI Fall Symposium on AI and HRI
Bertram Malle, Matthias Scheutz, and Joe Austerweil (2015) Networks of social and moral norms in human and robot agents. Proceedings of the International Conference on Robot Ethics
Matthias Scheutz, Bertram Malle, and Gordon Briggs (2015) Towards morally sensitive action selection for autonomous social robots. Proceedings of RO-MAN
Jason Wilson and Matthias Scheutz (2015) A model of empathy to shape trolley problem moral judgments. The Sixth International Conference on Affective Computing and Intelligent Interaction
Bertram Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano (2015) Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Recipient of the HRI 2015 Best Paper Award in New Knowledge!
Vasanth Sarathy and Matthias Scheutz (2019) On resolving ambiguous anaphoric expressions in imperative discourse. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh, and Vasanth Sarathy (2019) Towards the engineering of virtuous machines. Proceedings of the 2nd AAAI/ACM Workshop on Artificial Intelligence, Ethics, and Society (AIES-19)
Tyler Frasca, Bradley Oosterveld, Evan Krause, and Matthias Scheutz (2018) One-shot interaction learning from natural language instruction and demonstration. Advances in Cognitive Systems, 6, 159--176
Felix Gervits, Terry Fong, and Matthias Scheutz (2018) Shared mental models to support distributed human-robot teaming in space. Proceedings of the American Institute of Aeronautics and Astronautics (AIAA) SPACE Forum
Felix Gervits and Matthias Scheutz (2018) Pardon the interruption: Managing turn-taking through overlap resolution in embodied artificial agents. Proceedings of the Special Interest Group on Discourse and Dialogue (SIGDIAL)
Daniel Kasenberg and Matthias Scheutz (2018) Inverse norm conflict resolution. Proceedings of the 1st AAAI/ACM Workshop on Artificial Intelligence, Ethics, and Society
Daniel Kasenberg (2018) Inferring and obeying norms in temporal logic. Proceedings of the Human-Robot Interaction (HRI) Pioneers Workshop
Daniel Kasenberg, Thomas Arnold, and Matthias Scheutz (2018) Norms, rewards, and the intentional stance: Comparing machine learning approaches to ethical training. Proceedings of the 1st AAAI/ACM Workshop on Artificial Intelligence, Ethics, and Society
Vasanth Sarathy, Matthias Scheutz, Joseph Austerweil, Yoed Kenett, Mowafak Allaham, and Bertram Malle (2017) Mental representations and computational modeling of context-specific human norm systems. Proceedings of Cognitive Science
Kartik Talamadupula, Gordon Briggs, Matthias Scheutz, and Subbarao Kambhampati (2017) Architectural mechanisms for handling human instructions for open-world mixed-initiative team tasks and goals. Advances in Cognitive Systems, 5
Matthias Scheutz, Evan Krause, Bradley Oosterveld, Tyler Frasca, and Robert Platt (2017) Spoken instruction-based one-shot object and action learning in a cognitive robotic architecture. Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems
Recipient of the AAMAS 2017 Best Paper Award!
Jason Wilson, Matthias Scheutz, and Gordon Briggs (2016) Reflections on the design challenges prompted by affect-aware socially assistive robots. Emot. Personal. Pers. Syst.
Vasanth Sarathy and Matthias Scheutz (2016) Cognitive affordance representations in uncertain logic. Proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning (KR)
Vasanth Sarathy, Jason Wilson, Thomas Arnold, and Matthias Scheutz (2016) Enabling basic normative HRI in a cognitive robotic architecture. Proceedings of the 2nd Workshop on Cognitive Architectures for Social Human-Robot Interaction at the 11th ACM/IEEE Conference on Human-Robot Interaction
Gordon Briggs and Matthias Scheutz (2014) Modeling blame to avoid positive face threats in natural language generation. Proceedings of the Eighth International Natural Language Generation Conference
Vasanth Sarathy, Thomas Arnold, and Matthias Scheutz (2019) When exceptions are the norm: Exploring the role of consent in HRI. ACM Transactions on Human-Robot Interaction, 9, 2
Thomas Arnold and Matthias Scheutz (2018) Observing robot touch in context: How does touch and attitude affect perception of a robot's social qualities? ACM/IEEE International Conference on Human-Robot Interaction
Lisa Fan, Matthias Scheutz, Monika Lohani, Marissa McCoy, and Charlene Stokes (2017) Do we need emotionally intelligent artificial agents? First results of human perceptions of emotional intelligence in humans compared to robots. Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)
Thomas Arnold and Matthias Scheutz (2017) The tactile ethics of soft robotics: Designing wisely for human-robot interaction. Soft Robotics, 4, 2, 81--87
Thomas Arnold and Matthias Scheutz (2017) Beyond moral dilemmas: Exploring the ethical landscape in HRI. Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 445--452
Bertram Malle, Matthias Scheutz, Jodi Forlizzi, and John Voiklis (2016) Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction
Gordon Briggs and Matthias Scheutz (2014) How robots can affect human behavior: Investigating the effects of robotic displays of protest and distress. International Journal of Social Robotics, 6, 1--13
Gordon Briggs, Bryce Gessell, Matthew Dunlap, and Matthias Scheutz (2014) Actions speak louder than looks: Does robot appearance affect human reactions to robot protest and distress? Proceedings of the 23rd IEEE Symposium on Robot and Human Interactive Communication (RO-MAN)
Megan Strait and Matthias Scheutz (2014) Using functional near infrared spectroscopy to measure moral decision-making: Effects of agency, emotional value, and monetary incentive. Brain-Computer Interfaces, 1, 1--10
Paul Schermerhorn and Matthias Scheutz (2011) Disentangling the effects of robot affect, embodiment, and autonomy on human team members in a mixed-initiative task. Proceedings of the 2011 International Conference on Advances in Computer-Human Interactions, 236--241
Questions about what actions robots will and should have the leeway, direction, or authority to carry out have elicited fears, fantasies, and a wide range of future projections. To contribute usefully to these larger public discussions, MURI investigators have sought to address critical questions of social responsibility in ways grounded in real-world research, policy, design, and engineering.
For MURI-related articles and other pieces see the Press tab
For inquiries please write to matthias.scheutz@tufts.edu
See more robot videos on the Video page
Professor Bertram Malle
Professor Selmer Bringsjord