In the future, artificial agents are likely to make life-and-death decisions about humans. Ordinary people are the likely arbiters of whether these decisions are morally acceptable. We summarize research on how ordinary people evaluate artificial agents (compared with human agents) that make life-and-death decisions. The results suggest that many people are inclined to morally evaluate artificial agents’ decisions and that, when asked how artificial and human agents should decide, they impose the same norms on both. However, when confronted with how the agents did in fact decide, people judge the artificial agents’ decisions differently from those of humans. This difference is best explained by justifications people grant the human agents (by imagining their experience of the decision situation) but do not grant the artificial agents (whose experience they cannot imagine). If people fail to infer the decision processes and justifications of artificial agents, these agents will have to communicate such justifications explicitly, so that people can understand and accept their decisions.
@incollection{mallescheutz21law,
  title={May Machines Take Lives to Save Lives? Human Perceptions of Autonomous Robots (with the Capacity to Kill)},
  author={Malle, Bertram and Scheutz, Matthias},
  year={2021},
  booktitle={Lethal Autonomous Weapons: Re-Examining the Law \& Ethics of Robotic Warfare},
  publisher={Oxford University Press},
  pages={89--102},
  url={https://hrilab.tufts.edu/publications/mallescheutz21law.pdf},
  doi={10.1093/oso/9780197546048.003.0007}
}