Machine ethics has sought to establish how autonomous systems could make ethically appropriate decisions in the world. While purely statistical machine learning approaches have focused on learning human preferences from observations and attempted actions, hybrid approaches to machine ethics attempt to provide more explicit guidance for robots based on explicit norm representations. Neither approach, however, may be sufficient for real contexts of human-robot interaction, where reasoning and the exchange of information may need to be distributed across automated processes and human improvisation, requiring real-time coordination within a dynamic environment (sharing information, trusting other agents, and arriving at revised plans together). This paper builds on discussions of “extended minds” in philosophy to examine norms as “extended” systems supported by external cues and an agent’s own applications of norms in concrete contexts. Instead of locating norms solely as discrete representations within the AI system, we argue that explicit normative guidance must be extended across human-machine collaborative activity: what does and does not constitute a normative context, and what a norm requires within it, may call for negotiating incompletely specified or derived principles that are not self-contained, but become accessible through the agent’s actions and interactions and thus representable by agents in social space.
@article{arnoldscheutz22gio,
  title={Extended Norms: Locating Accountable Decision-making In Contexts of Human-Robot Interaction},
  author={Arnold, Thomas and Scheutz, Matthias},
  year={2022},
  journal={Gruppe. Interaktion. Organisation. Zeitschrift f{\"u}r Angewandte Organisationspsychologie},
  volume={53},
  pages={359--366},
  url={https://hrilab.tufts.edu/publications/arnoldscheutz22gio.pdf},
  doi={10.1007/s11612-022-00645-6}
}