Robots are increasingly embedded in human societies where they encounter human collaborators, potential adversaries, and even uninvolved bystanders. Such robots must plan to accomplish joint goals with teammates while avoiding interference from competitors, and possibly even utilizing bystanders to advance the robot’s goals. We propose a planning framework for robot task and action planners that can cope with collaborative, competitive, and uninvolved human agents at the same time by using mental models of human agents. By querying these models, the robot can plan for the effects of future human actions and can plan robot actions to influence what the human will do, even when influencing them through explicit communication is not possible. We implement the framework in a planner that does not assume that human agents share goals with, or will cooperate with, the robot. Instead, it can handle the diverse relations that can emerge from interactions between the robot’s goals and capacities, the task environment, and the human behavior predicted by the planner’s models. We report results from an evaluation in which a teleoperated robot executes a planner-generated policy to influence the behavior of human participants. Since the robot is not capable of performing some of the actions necessary to achieve its goal, the robot instead tries to cause the human to perform those actions.
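The paper itself presents no code here; as a rough illustration of the idea sketched in the abstract, the following is a minimal Python sketch, not the authors' formalism. The names HumanMentalModel, Planner, and the toy "point at box" scenario are invented placeholders. The planner simulates each candidate robot action, queries a predictive model of the human to anticipate the human's response, and keeps an action whose predicted joint outcome satisfies the robot's goal, which lets the robot achieve goals it cannot act on directly.

# Minimal illustrative sketch (assumed, not the authors' implementation): a planner
# that queries a hypothetical mental model of a co-present human to predict the
# human's reaction to each candidate robot action.

from dataclasses import dataclass
from typing import Callable, Dict, Tuple

State = Dict[str, bool]   # e.g. {"box_salient": False, "box_moved": False}
Action = str              # action labels, e.g. "point_at_box"


@dataclass
class HumanMentalModel:
    """Hypothetical predictive model: maps an observed state to the action
    the human is expected to take next (no shared goals assumed)."""
    policy: Callable[[State], Action]

    def predict(self, state: State) -> Action:
        return self.policy(state)


@dataclass
class Planner:
    robot_actions: Dict[Action, Callable[[State], State]]   # robot transition model
    human_actions: Dict[Action, Callable[[State], State]]   # effects of human actions
    human_model: HumanMentalModel
    goal: Callable[[State], bool]

    def plan_step(self, state: State) -> Tuple[Action, State]:
        """Choose a robot action whose predicted human response achieves the goal,
        falling back to waiting when no such action exists."""
        for act, effect in self.robot_actions.items():
            after_robot = effect(dict(state))
            predicted = self.human_model.predict(after_robot)   # query the mental model
            after_human = self.human_actions.get(predicted, lambda s: s)(after_robot)
            if self.goal(after_human):
                return act, after_human
        return "wait", state


if __name__ == "__main__":
    # Toy scenario: the robot cannot move the box itself, but pointing at it
    # is predicted to cause the human to move it, achieving the robot's goal.
    planner = Planner(
        robot_actions={"point_at_box": lambda s: {**s, "box_salient": True},
                       "wait": lambda s: s},
        human_actions={"move_box": lambda s: {**s, "box_moved": True},
                       "idle": lambda s: s},
        human_model=HumanMentalModel(
            policy=lambda s: "move_box" if s.get("box_salient") else "idle"),
        goal=lambda s: s.get("box_moved", False),
    )
    print(planner.plan_step({"box_salient": False, "box_moved": False}))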
@inproceedings{buckinghametal20icsr,
  title={Robot Planning with Mental Models of Co-Present Humans},
  author={Buckingham, David and Chita-Tegmark, Meia and Scheutz, Matthias},
  year={2020},
  booktitle={Social Robotics. ICSR 2020},
  volume={12483},
  pages={566--577},
  url={https://hrilab.tufts.edu/publications/buckinghametal20icsr.pdf},
  doi={10.1007/978-3-030-62056-1_47}
}