Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents

2015

Collection: Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction

Bertram F. Malle and Matthias Scheutz and Thomas H. Arnold and John T. Voiklis and Corey Cusimano

Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and robots were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, humans were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.

@inproceedings{malleetal15hri,
  title={Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents},
  author={Bertram F. Malle and Matthias Scheutz and Thomas H. Arnold and John T. Voiklis and Corey Cusimano},
  year={2015},
  booktitle={Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction},
  url={https://hrilab.tufts.edu/publications/malleetal15hri.pdf}
}