Autonomous social robots embedded in human societies have to be sensitive to human social interactions and thus to the moral norms and principles that guide these interactions. Actions that violate norms can lead to the violator being blamed. Robots therefore need to be able to anticipate possible norm violations and attempt to prevent them while executing actions. If norm violations cannot be prevented (e.g., in a moral dilemma situation in which every action leads to a norm violation), then the robot needs to be able to justify the action to address any potential blame. In this paper, we present a first attempt at an action execution system for social robots that can (a) detect (some) norm violations, (b) consult an ethical reasoner for guidance in moral dilemma situations, and (c) keep track of execution traces and any resulting states that might have violated norms in order to produce justifications.
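The sketch below is a minimal illustration of the three capabilities named in the abstract, not the system described in the paper; all class and function names (MorallySensitiveExecutor, Norm, ExecutionTrace, etc.) are hypothetical placeholders chosen for this example.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Norm:
    """A norm modeled (for illustration) as a named predicate over a world state."""
    name: str
    violated_by: Callable[[dict], bool]  # True if the given state violates the norm


@dataclass
class ExecutionTrace:
    """(c) Record of executed actions and norm violations, used to produce justifications."""
    steps: List[str] = field(default_factory=list)
    violations: List[str] = field(default_factory=list)

    def justification(self) -> str:
        if not self.violations:
            return "No norms were violated during execution."
        return ("The following norms were violated despite attempts to avoid them: "
                + ", ".join(self.violations))


class MorallySensitiveExecutor:
    def __init__(self, norms: List[Norm],
                 ethical_reasoner: Callable[[List[dict]], dict]):
        self.norms = norms
        self.ethical_reasoner = ethical_reasoner  # (b) consulted in dilemma situations
        self.trace = ExecutionTrace()

    def _violations(self, state: dict) -> List[str]:
        # (a) detect (some) norm violations in a predicted or resulting state
        return [n.name for n in self.norms if n.violated_by(state)]

    def execute(self, candidate_actions: List[dict]) -> Optional[dict]:
        """Each candidate is assumed to carry a 'name' and a 'predicted_state' dict."""
        # Prefer an action whose predicted outcome violates no norm.
        permissible = [a for a in candidate_actions
                       if not self._violations(a["predicted_state"])]
        if permissible:
            chosen = permissible[0]
        elif candidate_actions:
            # Moral dilemma: every action violates some norm; defer to the ethical reasoner.
            chosen = self.ethical_reasoner(candidate_actions)
        else:
            return None
        self.trace.steps.append(chosen["name"])
        self.trace.violations.extend(self._violations(chosen["predicted_state"]))
        return chosen
```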
@incollection{scheutzetal15roman,
  title     = {Towards Morally Sensitive Action Selection for Autonomous Social Robots},
  author    = {Matthias Scheutz and Bertram Malle and Gordon Briggs},
  year      = {2015},
  booktitle = {Proceedings of Ro-Man},
  url       = {https://hrilab.tufts.edu/publications/scheutzetal15roman.pdf},
  doi       = {10.1109/ROMAN.2015.7333661}
}