Human-Robot Interaction Lab

Robots will be part of every aspect of human societies in the future. We seek to understand how they can best meet the various cognitive, affective, and physical demands that task-based interactions require.

The Future of Human-Robot Interaction

Our Mission

We develop cutting-edge AI algorithms for natural language understanding, instruction-based learning, cognitive and affective modeling for decision-making, reasoning and planning, and resilient task execution in open worlds. These capabilities are integrated in our cognitive robotic DIARC architecture, which coordinates a robot's behavior and communication with people in real time. Our human-robot interaction experiments examine how people react to robots in practical settings, anticipating how robots will fare in different social interaction contexts.

Examples of Our Research

Domain generalization aims to learn a model with good generalization ability: the learned model should perform well not only on several seen domains but also on unseen domains with different data distributions. State-of-the-art domain generalization methods typically jointly train a representation function and a classifier to minimize both the classification risk and the domain discrepancy. However, when it comes to model selection, most of these methods rely on traditional validation routines that select models solely on the basis of the lowest classification risk on the validation set. In this paper, we theoretically demonstrate a trade-off between minimizing classification risk and mitigating domain discrepancy, i.e., it is impossible to achieve the minimum of these two objectives simultaneously. Motivated by this theoretical result, we propose a novel model selection method in which the validation process accounts for both the classification risk and the domain discrepancy. We validate the effectiveness of the proposed method with numerical results on several domain generalization datasets.
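
The paper's exact selection criterion is not reproduced here, but as a rough illustration of selecting models on both objectives, the sketch below scores each checkpoint by its validation classification risk plus a weighted domain-discrepancy proxy (mean pairwise distance between per-domain feature means). The proxy, the weight `lam`, and all names are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (not the paper's criterion): select a checkpoint by combining
# validation classification risk with a simple proxy for domain discrepancy.
import numpy as np

def domain_discrepancy(features_by_domain):
    """Proxy for domain discrepancy: mean pairwise L2 distance between
    the mean feature vectors of the validation domains."""
    means = [f.mean(axis=0) for f in features_by_domain]
    dists = [np.linalg.norm(means[i] - means[j])
             for i in range(len(means)) for j in range(i + 1, len(means))]
    return float(np.mean(dists)) if dists else 0.0

def selection_score(val_risk, features_by_domain, lam=1.0):
    """Lower is better: validation classification risk plus weighted discrepancy."""
    return val_risk + lam * domain_discrepancy(features_by_domain)

# Usage: pick the checkpoint with the lowest combined score rather than
# the lowest validation risk alone.
rng = np.random.default_rng(0)
checkpoints = []
for step in range(3):
    val_risk = rng.uniform(0.2, 0.5)                             # placeholder risk
    feats = [rng.normal(loc=d, size=(50, 8)) for d in range(3)]  # placeholder per-domain features
    checkpoints.append((step, selection_score(val_risk, feats)))
best_step = min(checkpoints, key=lambda c: c[1])[0]
print("selected checkpoint:", best_step)
```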

We propose a self-assessment framework that enables a robot to estimate how well it will perform a known or novel task. The robot simulates the task by sampling from performance distributions to generate a distribution over possible outcome states and determines (1) the likelihood of success, (2) the most probable failure location, and (3) the expected time to task completion.
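
As a rough sketch of how such simulation-based self-assessment can work (not the framework's actual implementation), the example below models a task as a sequence of steps with assumed success probabilities and duration distributions, samples many rollouts, and reports the three quantities above. The step names, probabilities, and Gaussian durations are illustrative assumptions.

```python
# Minimal Monte Carlo self-assessment sketch under assumed per-step models.
import random
from collections import Counter

def assess(steps, n_samples=10_000, seed=0):
    """steps: list of (name, p_success, mean_duration_s, sd_duration_s)."""
    rng = random.Random(seed)
    successes, durations, failures = 0, [], Counter()
    for _ in range(n_samples):
        elapsed = 0.0
        for name, p, mu, sd in steps:
            elapsed += max(0.0, rng.gauss(mu, sd))
            if rng.random() > p:          # step failed; record where
                failures[name] += 1
                break
        else:                             # all steps succeeded
            successes += 1
            durations.append(elapsed)
    return {
        "p_success": successes / n_samples,
        "most_probable_failure": failures.most_common(1)[0][0] if failures else None,
        "expected_completion_s": sum(durations) / len(durations) if durations else None,
    }

# Example: a three-step fetch-and-deliver task (hypothetical numbers).
task = [("navigate", 0.98, 20, 3), ("grasp", 0.85, 8, 2), ("deliver", 0.95, 15, 4)]
print(assess(task))
```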

We examined how typical trust questionnaires used in HRI were affected when participants had the option to answer "not applicable to this robot" or "not applicable to robots in general" for any given question. We found that participants do make use of these options, particularly for questionnaires that probe social dimensions of trust. People's mental models of robots are fragile, and which items were considered not applicable to robots in general was context-dependent. Additionally, we found no statistical difference in average scores between trust questionnaires where participants had the NA options and those where participants were forced to respond on a standard Likert scale. However, this does not necessarily mean that all participants interpreted the questions in the same way.
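
For concreteness, the sketch below shows one way such a comparison might be scored: per-participant means computed over only the items rated applicable, compared against a forced-Likert group with Welch's t-test. The data and the choice of test are illustrative assumptions, not the study's materials or results.

```python
# Illustrative scoring comparison with NA responses excluded from the mean.
# All numbers below are fabricated placeholders for demonstration only.
import numpy as np
from scipy import stats

NA = None  # marker for "not applicable to this robot / robots in general"

def participant_mean(responses):
    """Mean Likert rating over the items a participant judged applicable."""
    rated = [r for r in responses if r is not NA]
    return float(np.mean(rated)) if rated else np.nan

na_group = [participant_mean(r) for r in [
    [4, 5, NA, 3], [5, 4, 4, NA], [3, NA, NA, 4],
]]
forced_group = [participant_mean(r) for r in [
    [4, 4, 3, 3], [5, 5, 4, 4], [3, 4, 3, 4],
]]

# Welch's t-test on group means (dropping any all-NA participants).
t, p = stats.ttest_ind([m for m in na_group if not np.isnan(m)],
                       forced_group, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```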
