Blame is a vital social and cognitive mechanism that humans use in their interactions with other agents. In this paper, we discuss how blame-reasoning mechanisms are needed to enable future social robots to: (1) appropriately adapt behavior in the context of repeated and/or long-term interactions and relationships with other social agents; (2) avoid behaviors that are perceived to be rude due to inappropriate and unintentional connotations of blame; and (3) avoid behaviors that could damage long-term, working relationships with other social agents. We also discuss how current computational models of blame and other relevant competencies (e.g., natural language generation) are insufficient to address these concerns. Future work is necessary to increase the social reasoning capabilities of artificially intelligent agents to achieve these goals.
@inproceedings{briggs14blame,
  title     = {Blame, What is it Good For?},
  author    = {Gordon Briggs},
  year      = {2014},
  booktitle = {Proceedings of the Workshop on Philosophical Perspectives on HRI at Ro-Man 2014},
  url       = {https://hrilab.tufts.edu/publications/briggs14blame.pdf}
}