Open-world robotic tasks such as autonomous driving pose significant challenges to robot control due to unknown and unpredictable events that disrupt task performance. Neural network-based reinforcement learning (RL) techniques (such as DQN, PPO, and SAC) struggle to adapt in large domains and suffer from catastrophic forgetting. Hybrid planning and RL approaches have shown some promise in handling environmental changes but adapt slowly. To address this limitation, we propose an enhanced hybrid system with a nested hierarchical action abstraction that can reuse previously acquired skills to effectively tackle unexpected novelties. We show that it adapts faster and generalizes better than state-of-the-art RL and hybrid approaches, significantly improving robustness when multiple environmental changes occur at the same time.
@inproceedings{lorangetal24icra,
  title     = {Adapting to the ``Open World'': The Utility of Hybrid Hierarchical Reinforcement Learning and Symbolic Planning},
  author    = {Pierrick Lorang and Helmut Horvath and Tobias Kietreiber and Patrik Zips and Clemens Heitzinger and Matthias Scheutz},
  year      = {2024},
  booktitle = {Proceedings of ICRA},
  url       = {https://hrilab.tufts.edu/publications/lorangetal24icra.pdf}
}