Even though morally competent artificial agents have yet to emerge in society, we need insights from empirical science into how people will respond to such agents and how these responses should inform agent design. Three survey studies presented participants with an artificial intelligence (AI) agent, an autonomous drone, or a human drone pilot facing a moral dilemma in a military context: to either launch a missile strike on a terrorist compound but risk the life of a child, or to cancel the strike to protect the child but risk a terrorist attack. Seventy-two percent of respondents were comfortable making moral judgments about the AI in this scenario, and fifty-one percent were comfortable making such judgments about the autonomous drone. These participants applied the same norms to the two artificial agents and the human drone pilot (more than 80% said that the agent should launch the missile). However, people ascribed different patterns of blame to humans and machines depending on how the agent decided to resolve the dilemma. These differences in blame seem to stem from different assumptions about the agents’ embeddedness in social structures and the moral justifications those structures afford. Specifically, people less readily see artificial agents as embedded in social structures and, as a result, explain and justify those agents’ actions differently. As artificial agents will (and already do) perform many actions with moral significance, we must heed such differences in justifications and blame and probe how they affect our interactions with those agents.
@inproceedings{malleetal17icress,
  title     = {AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma},
  author    = {Bertram Malle and Stuti Thapa Magar and Matthias Scheutz},
  year      = {2017},
  booktitle = {International Conference on Robot Ethics and Safety Standards},
  url       = {https://hrilab.tufts.edu/publications/malleetal17icress.pdf},
  doi       = {10.1007/978-3-030-12524-0_11}
}