Transparent task-based communication between human instructors and robot instructees requires robots to be able to determine whether a human instruction can and should be carried out, i.e., whether the human is authorized, and whether the robot can and should do it. If the instruction is not appropriate, the robot needs to be able to reject it in a transparent manner by including its reasons for the rejection. In this article, we provide a brief overview of our work on natural language understanding and transparent communication in the Distributed Integrated Affect Reflection Cognition (DIARC) architecture and demonstrate how the robot can perform different inferences based on context to determine whether it should reject a human instruction. Specifically, we discuss four task-based dialogues and show videos of the interactions with fully autonomous robots that are able to reject human commands and provide succinct explanations and justifications for their rejection. The proposed approach can form the basis of further algorithmic developments for adapting the robot's level of transparency for different interlocutors and contexts.
@article{scheutzetal2022ijhci,
  title={Transparency through Explanations and Justifications in Human-Robot Task-Based Communications},
  author={Scheutz, Matthias and Thielstrom, Ravenna and Abrams, Mitchell},
  year={2022},
  journal={International Journal of Human--Computer Interaction},
  publisher={Taylor \& Francis},
  volume={38},
  pages={1739--1752},
  url={https://hrilab.tufts.edu/publications/scheutzetal2022ijhci.pdf}
}