A Framework for Neurosymbolic Goal-Conditioned Continual Learning in Open World Environments

2024

Conference: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Publisher: IEEE

Pierrick Lorang and Shivam Goel and Yash Shukla and Patrik Zips and Matthias Scheutz

In dynamic open-world environments, agents continually face new challenges due to sudden and unpredictable novelties, hindering Task and Motion Planning (TAMP) in autonomous systems. We introduce a novel TAMP architecture that integrates symbolic planning with reinforcement learning to enable autonomous adaptation in such environments, operating without human guidance. Our approach employs symbolic goal representation within a goal-oriented learning framework, coupled with planner-guided goal identification, effectively managing abrupt changes where traditional reinforcement learning, re-planning, and hybrid methods fall short. Through sequential novelty injections in our experiments, we assess our method’s adaptability to continual learning scenarios. Extensive simulations conducted in a robotics domain corroborate the superiority of our approach, demonstrating faster convergence to higher performance compared to traditional methods. The success of our framework in navigating diverse novelty scenarios within a continuous domain underscores its potential for critical real-world applications.
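The abstract describes an architecture in which a symbolic planner proposes subgoals and, when a novelty breaks a planning operator, hands the failed operator's symbolic goal to a goal-conditioned learner. The paper does not publish code here; the following is a minimal toy sketch of that interplay under stated assumptions (the `Operator`, `plan`, `GoalConditionedLearner`, and `execute` names are illustrative, not the authors' API, and the learner is a placeholder for an actual RL policy):

```python
from dataclasses import dataclass


@dataclass
class Operator:
    """A symbolic planning operator; `executable` flips to False
    when an injected novelty breaks its preconditions/effects."""
    name: str
    executable: bool = True


def plan(goal_sequence, operators):
    # Toy stand-in for a symbolic planner: map the symbolic goal
    # sequence onto the corresponding operators.
    return [operators[name] for name in goal_sequence]


class GoalConditionedLearner:
    """Placeholder for an RL policy conditioned on a symbolic goal.
    A real implementation would train a policy per symbolic subgoal."""

    def __init__(self):
        self.trained_goals = set()

    def train(self, goal_name):
        # Stand-in for RL training toward the symbolic subgoal.
        self.trained_goals.add(goal_name)

    def can_achieve(self, goal_name):
        return goal_name in self.trained_goals


def execute(plan_ops, learner):
    """Execute a plan; on operator failure (novelty), use
    planner-guided goal identification to train the learner."""
    for op in plan_ops:
        if op.executable:
            continue  # symbolic execution succeeds as before
        if not learner.can_achieve(op.name):
            # The failed operator's symbolic goal becomes the
            # learning target: planner-guided goal identification.
            learner.train(op.name)
    return all(op.executable or learner.can_achieve(op.name)
               for op in plan_ops)


if __name__ == "__main__":
    ops = {"grasp": Operator("grasp"),
           "move": Operator("move", executable=False)}  # novelty broke "move"
    learner = GoalConditionedLearner()
    recovered = execute(plan(["grasp", "move"], ops), learner)
    print(recovered)  # plan recoverable once the learner covers "move"
```

The point of the sketch is only the control flow: execution falls back to learning exactly at the operator the novelty invalidated, rather than relearning the whole task from scratch.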

@inproceedings{lorangetal2024iros,
  title={A Framework for Neurosymbolic Goal-Conditioned Continual Learning in Open World Environments},
  author={Pierrick Lorang and Shivam Goel and Yash Shukla and Patrik Zips and Matthias Scheutz},
  year={2024},
  booktitle={Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  publisher={IEEE},
  url={https://hrilab.tufts.edu/publications/lorangetal2024iros.pdf}
}