@article{sheutz17aim,
  title    = {The Case for Explicit Ethical Agents},
  author   = {Scheutz, Matthias},
  journal  = {AI Magazine},
  year     = {2017},
  volume   = {38},
  number   = {4},
  pages    = {57--64},
  doi      = {10.1609/aimag.v38i4.2746},
  url      = {https://hrilab.tufts.edu/publications/sheutz17aim.pdf},
  abstract = {Morality is a fundamentally human trait that permeates all levels of human society, from the basic etiquette and normative expectations of social groups to formalized legal principles upheld by societies. Hence, future interactive AI systems, in particular cognitive systems on robots deployed in human settings, will have to meet human normative expectations, for otherwise these systems risk causing harm. While interest in machine ethics has increased rapidly in recent years, there are only a few current efforts in the cognitive systems community to investigate moral and ethical reasoning. And there is currently no cognitive architecture that has even rudimentary moral or ethical competence, that is, the ability to judge situations based on moral principles such as norms and values and to make morally and ethically sound decisions. We hence argue for the urgent need to instill moral and ethical competence in all cognitive systems intended to be employed in human social contexts.}
}