@article{briggssetal17jhri,
  title={Enabling Robots to Understand Indirect Speech Acts in Task-Based Interactions},
  author={Gordon Briggs and Tom Williams and Matthias Scheutz},
  journal={Journal of Human-Robot Interaction},
  year={2017},
  volume={6},
  pages={64--94},
  url={https://hrilab.tufts.edu/publications/briggssetal17jhri.pdf},
  doi={10.5898/JHRI.6.1.Brigg},
  abstract={An important open problem for enabling truly taskable robots is the lack of task-general natural language mechanisms within cognitive robot architectures that enable robots to understand typical forms of human directives and generate appropriate responses. In this paper, we first provide experimental evidence that humans tend to phrase their directives to robots indirectly, especially in socially conventionalized contexts.}
}