The most intriguing and ethically challenging roles of robots in society are those of collaborator and social partner. We propose that such robots must have the capacity to learn, represent, activate, and apply social and moral norms—they must have a norm capacity. We offer a theoretical analysis of two parallel questions: what constitutes this norm capacity in humans, and how might we implement it in robots? We propose that the human norm system has four properties: flexible learning despite a general logical format, structured representations, context-sensitive activation, and continuous updating. We explore two possible models that describe how norms are cognitively represented and activated in context-specific ways, and we draw implications for robotic architectures that would implement either model.
@incollection{malleetal15icre,
  title     = {Networks of Social and Moral Norms in Human and Robot Agents},
  author    = {Malle, Bertram F. and Scheutz, Matthias and Austerweil, Joseph L.},
  booktitle = {Proceedings of the International Conference on Robot Ethics},
  year      = {2015},
  url       = {https://hrilab.tufts.edu/publications/malleetal15icre.pdf},
  doi       = {10.1007/978-3-319-46667-5_1}
}