Our Mission
We develop cutting-edge AI algorithms for natural language understanding, instruction-based learning, cognitive and affective modeling for decision-making, reasoning and planning, and resilient, ethical task execution in open worlds. These algorithms are integrated into our cognitive robotic DIARC architecture, which coordinates a robot's behavior and its real-time communication with people. Our human-robot interaction experiments reveal how people react to robots in practical settings, anticipating how robots will fare in different task-based interaction contexts.
Examples of our research
Imitation learning for long-horizon tasks is essential in enabling intelligent systems to efficiently acquire and generalize complex behaviors with minimal reliance on supervision and hand-coded solutions. However, existing approaches often focus on short, isolated skills, require large numbers of demonstrations, and struggle to generalize to new tasks or shifts in the data distribution. We propose a novel neuro-symbolic framework that combines continuous control learning with symbolic domain abstraction, requiring only a few skill demonstrations to effectively solve long-horizon tasks. Our results demonstrate high data efficiency, robust zero- and few-shot generalization, and interpretable decision-making, paving the way for scalable, human-taught robotics.
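The core idea of a neuro-symbolic split like this can be sketched in a few lines: a symbolic layer sequences abstract skills toward a goal, while each skill is grounded by a continuous controller learned from demonstrations. The sketch below is purely illustrative; all names (`goal_requirements`, `learned_controller`, the skill labels) are hypothetical placeholders, not part of the actual framework.

```python
# Illustrative neuro-symbolic execution loop: a symbolic planner picks a
# skill sequence, and each abstract skill is grounded by a (stand-in)
# neural controller learned from a few demonstrations.

# Hypothetical symbolic knowledge: which abstract skills a goal requires.
goal_requirements = {"stack_blocks": ["reach", "grasp", "place"]}

def plan(goal, available_skills):
    """Stand-in for a symbolic planner over the abstract skill domain."""
    return [s for s in goal_requirements[goal] if s in available_skills]

def learned_controller(skill, observation):
    """Stand-in for a neural policy fit from a handful of demonstrations."""
    return f"low-level command for {skill} given {observation}"

def execute(goal, available_skills, observation):
    """Ground each symbolic step with its learned continuous controller."""
    return [learned_controller(s, observation) for s in plan(goal, available_skills)]

commands = execute("stack_blocks", ["reach", "grasp", "place", "wave"], "obs0")
print(commands)
```

Because the planner operates only over symbolic skill names, new long-horizon tasks can reuse the same grounded controllers in a different order, which is where the data efficiency comes from.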
Open-world robotic tasks such as autonomous driving pose significant challenges to robot control due to unknown and unpredictable events that disrupt task performance. Neural network-based reinforcement learning (RL) techniques (like DQN, PPO, SAC, etc.) struggle to adapt in large domains and suffer from catastrophic forgetting. Hybrid planning and RL approaches have shown some promise in handling environmental changes, but they accommodate novelty slowly. To address this limitation, we propose an enhanced hybrid system with a nested hierarchical action abstraction that can reuse previously acquired skills to effectively tackle unexpected novelties. We show that it adapts faster and generalizes better than state-of-the-art RL and hybrid approaches, significantly improving robustness when multiple environmental changes occur simultaneously.
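The skill-reuse idea at the heart of this approach can be sketched abstractly: when a novelty is encountered, the system first checks whether any previously acquired skill already applies before paying the cost of learning a new one. This is a minimal sketch under assumed names; the skill labels, precondition sets, and fallback string are illustrative, not the system's actual interface.

```python
# Minimal sketch of skill reuse in a nested action hierarchy: prefer an
# already-learned skill whose preconditions hold; only fall back to
# (slow) RL when no existing skill applies.

class Skill:
    def __init__(self, name, preconditions):
        self.name = name
        self.preconditions = preconditions  # set of facts required to apply

    def applicable(self, state):
        # The skill applies when all its preconditions hold in the state.
        return self.preconditions <= state

def choose_action(state, skill_library):
    """Reuse an acquired skill if possible; otherwise trigger learning."""
    for skill in skill_library:
        if skill.applicable(state):
            return f"execute:{skill.name}"
    return "learn-new-skill"  # placeholder for the RL fallback

skills = [Skill("open_door", {"at_door", "door_unlocked"}),
          Skill("drive_to_door", {"has_map"})]

print(choose_action({"at_door", "door_unlocked"}, skills))  # execute:open_door
print(choose_action({"blocked_road"}, skills))              # learn-new-skill
```

Reusing skills this way is what speeds up accommodation: only the genuinely novel part of a disrupted task has to be learned from scratch.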
Domain generalization aims to learn a model with good generalization ability; that is, the learned model should perform well not only on the seen training domains but also on unseen domains with different data distributions. State-of-the-art domain generalization methods typically train a representation function and a classifier jointly to minimize both the classification risk and the domain discrepancy. However, when it comes to model selection, most of these methods rely on traditional validation routines that select models solely based on the lowest classification risk on the validation set. In this paper, we theoretically demonstrate a trade-off between minimizing classification risk and mitigating domain discrepancy, i.e., it is impossible to achieve the minimum of these two objectives simultaneously. Motivated by this theoretical result, we propose a novel model selection method suggesting that the validation process should account for both the classification risk and the domain discrepancy. We validate the effectiveness of the proposed method by numerical results on several domain generalization datasets.
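The selection principle above can be sketched in a few lines: instead of picking the checkpoint with the lowest validation risk, score each candidate by a combination of validation risk and an estimated domain discrepancy. The weighting scheme, the field names, and the numbers below are illustrative assumptions, not the paper's actual criterion.

```python
def select_model(candidates, alpha=0.5):
    """Illustrative model selection: score each candidate checkpoint by a
    weighted sum of validation classification risk and an estimated domain
    discrepancy, rather than by classification risk alone. The weight
    `alpha` and the linear combination are assumptions for illustration.
    """
    best, best_score = None, float("inf")
    for model in candidates:
        score = model["val_risk"] + alpha * model["domain_discrepancy"]
        if score < best_score:
            best, best_score = model, score
    return best

# Hypothetical checkpoints: epoch20 has the lowest validation risk, but
# epoch30 trades a little risk for a much smaller domain discrepancy.
checkpoints = [
    {"name": "epoch10", "val_risk": 0.21, "domain_discrepancy": 0.40},
    {"name": "epoch20", "val_risk": 0.18, "domain_discrepancy": 0.55},
    {"name": "epoch30", "val_risk": 0.20, "domain_discrepancy": 0.30},
]
print(select_model(checkpoints)["name"])  # epoch30, not the lowest-risk epoch20
```

Risk-only validation would pick epoch20 here; accounting for discrepancy selects epoch30, whose combined score suggests better behavior on unseen domains.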