This chapter starts by looking at the empirical evidence for human expectations about moral robots and then considers ways of implementing such moral capacities in robotic control systems. It examines three main options, each with its own advantages and disadvantages: implementing ethical theories as proposed by philosophers; implementing legal principles as proposed by legal scholars; and implementing human-like moral competence as proposed by psychologists. The main problem with implementing philosophical ethical theories is that there is still no consensus among philosophers about which approach is the normatively correct one. Moral robots are necessary to ensure effective and safe human–robot interactions. Research and development of mechanisms for ensuring normative behavior in autonomous robots has only just begun, but it is poised to expand, judging from the increasing number of workshops and special sessions devoted to robot ethics and related topics. Researchers in the field of human–robot interaction have investigated human reactions to robots that violate norms of varying severity.
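To make the phrase "mechanisms for ensuring normative behavior" a bit more concrete, the following is a minimal, hypothetical sketch (not taken from the chapter) of one such mechanism: a norm-checking filter that sits between a robot's action selection and its execution and blocks candidate actions whose norm violations are too severe. All names, the example norms, and the severity scoring are illustrative assumptions.

```python
# Hypothetical sketch of a norm-checking action filter; names and norms are
# illustrative assumptions, not the chapter's proposal.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Norm:
    """A behavioral norm with a severity weight for violations."""
    description: str
    violated_by: Callable[[str, dict], bool]  # does action in context violate this norm?
    severity: float                           # higher = more serious violation


def select_permissible_action(candidates: List[str],
                              context: dict,
                              norms: List[Norm],
                              tolerance: float = 0.0) -> Optional[str]:
    """Return the first candidate whose total violation severity stays within
    the tolerance, or None if every candidate action is blocked."""
    for action in candidates:
        total_severity = sum(n.severity for n in norms
                             if n.violated_by(action, context))
        if total_severity <= tolerance:
            return action
    return None


if __name__ == "__main__":
    norms = [
        Norm("Do not enter a room marked private",
             lambda a, ctx: a == "enter_room" and ctx.get("room_private", False),
             severity=0.8),
        Norm("Do not interrupt an ongoing conversation",
             lambda a, ctx: a == "speak" and ctx.get("humans_talking", False),
             severity=0.3),
    ]
    context = {"room_private": True, "humans_talking": False}
    # Entering the private room is blocked; speaking is permissible here.
    print(select_permissible_action(["enter_room", "speak", "wait"], context, norms))
```

In a design like this, the source of the norms is deliberately left open, which mirrors the three options the chapter compares: they could be derived from philosophical ethical theories, from legal principles, or from empirical studies of human moral competence.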
@incollection{scheutzmalle17ne,
  title     = {Moral Robots},
  author    = {Scheutz, Matthias and Malle, Bertram},
  year      = {2017},
  booktitle = {The Routledge Handbook of Neuroethics},
  publisher = {Routledge/Taylor \& Francis Group},
  pages     = {363--377},
  url       = {https://hrilab.tufts.edu/publications/scheutzmalle17ne.pdf},
  doi       = {10.4324/9781315708652-27}
}