We present an annotation scheme that captures the structure and content of task intentions in situated dialogue where humans instruct robots to perform novel action sequences and sub-sequences. Using this representation, we identify patterns and structural differences between human-human and human-robot communication. We find that humans engage in more belief-updating dialogue with other humans, whereas they are significantly more direct in expressing their intentions to robots, instructing physical actions incrementally. Additionally, humans discuss plans significantly less with robots than with other humans.
@inproceedings{marge2020s,
  title     = {Let's do that first! A Comparative Analysis of Instruction-Giving in Human-Human and Human-Robot Situated Dialogue},
  author    = {Matthew Marge and Felix Gervits and Gordon Briggs and Matthias Scheutz and Antonio Roque},
  booktitle = {Proceedings of the 24th Workshop on the Semantics and Pragmatics of Dialogue (SemDial)},
  year      = {2020},
  url       = {https://hrilab.tufts.edu/publications/marge2020s.pdf}
}