Open-World Novelty Detection, Characterization, and Accommodation

Open-world AI requires artificial agents to handle novelties that arise during task performance, i.e., agents must detect and characterize novelties in order to accommodate them effectively, especially in cases where sudden changes to the environment make task accomplishment impossible without utilizing the novelty. In this project, we are developing algorithms for all aspects of a formal framework for novelty handling, implementing them in a cognitive agent, and demonstrating the efficacy of the proposed methods on various open-world tasks, including a crafting task in Minecraft.
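
As a concrete illustration of the detect/characterize/accommodate loop described above, the sketch below shows one minimal way such a pipeline could be organized. All class, function, and variable names are hypothetical illustrations, not the project's actual framework or agent implementation.

```python
# Illustrative novelty-handling control loop (hypothetical API, not the
# project's actual implementation).

from dataclasses import dataclass, field


@dataclass
class NoveltyReport:
    """What changed, as inferred by the agent."""
    observation: dict
    violated_expectation: str
    characterization: dict = field(default_factory=dict)


def detect_novelty(observation, expectations):
    """Flag the first expectation the current observation violates."""
    for name, predicate in expectations.items():
        if not predicate(observation):
            return NoveltyReport(observation, violated_expectation=name)
    return None


def characterize(report, probes):
    """Run cheap probing actions to narrow down what the novelty is."""
    for probe in probes:
        report.characterization[probe.__name__] = probe(report.observation)
    return report


def accommodate(report, policy):
    """Adapt the task policy using the characterized novelty."""
    policy = dict(policy)  # copy, then patch only the affected behavior
    policy[report.violated_expectation] = "relearn"
    return policy


if __name__ == "__main__":
    expectations = {"tree_gives_wood": lambda obs: obs.get("wood_yield", 1) > 0}
    policy = {"tree_gives_wood": "chop_tree"}
    obs = {"wood_yield": 0}  # sudden change: trees no longer yield wood

    report = detect_novelty(obs, expectations)
    if report is not None:
        report = characterize(report, probes=[lambda o: o["wood_yield"]])
        policy = accommodate(report, policy)
    print(policy)
```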

RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments (2022)

Shivam Goel and Yash Shukla and Vasanth Sarathy and Matthias Scheutz and Jivko Sinapov

We propose an integrated planning and learning approach that utilizes learning from failures and transferring knowledge over time to overcome novelty scenarios. The approach is more sample efficient in adapting to sudden and unknown changes (i.e., novelties) than the existing hybrid approaches. We showcase our results on a…

Keywords: reinforcement learning, planning, open-world AI, novelty accommodation
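
The pattern summarized in this entry, plan symbolically and fall back to learning when an action fails unexpectedly, can be illustrated with a small sketch. Everything below (function names, the toy Minecraft-style actions, the stand-in learner) is hypothetical and is not the RAPid-Learn code.

```python
# Generic plan-then-learn-on-failure pattern (illustrative only).

import random


def plan(state, goal):
    """Stand-in symbolic planner: returns a fixed action sequence."""
    return ["approach_tree", "chop_tree", "craft_pogo_stick"]


def execute(action, state):
    """Stand-in executor: the 'chop_tree' operator fails after a novelty."""
    if action == "chop_tree" and state.get("novelty_active", False):
        return state, False  # unexpected failure
    return dict(state, **{action: "done"}), True


def learn_recovery_policy(state, failed_action, episodes=50):
    """Stand-in RL loop that learns how to re-enable the failed operator."""
    for _ in range(episodes):
        # A real learner would explore, collect reward, and update here.
        if random.random() < 0.2:
            return lambda s: "use_axe_alternative"
    return None


def run(goal, state):
    for action in plan(state, goal):
        state, ok = execute(action, state)
        if not ok:
            recovery = learn_recovery_policy(state, action)
            if recovery is not None:
                state, _ = execute(recovery(state), state)
                state, _ = execute(action, dict(state, novelty_active=False))
    return state


print(run("craft_pogo_stick", {"novelty_active": True}))
```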

Speeding-up Continual Learning through Information Gains in Novel Experiences (2022)

Pierrick Lorang and Shivam Goel and Patrik Zips and Jivko Sinapov and Matthias Scheutz

Keywords: reinforcement learning, planning, open-world AI, novelty accommodation

ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments (2022)

Yash Shukla and Christopher Thierauf and Ramtin Hosseini and Gyan Tatiya and Jivko Sinapov

We formulate the curriculum transfer problem, in which the schema of a curriculum optimized in a simpler, easy-to-solve environment (e.g., a grid world) is transferred to a complex, realistic scenario (e.g., a physics-based robotics simulation or the real world). We present "ACuTE", Automatic Curriculum Transfer from Simple to Complex…

Keywords: reinforcement learning, curriculum learning
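
The curriculum-transfer idea above can be pictured as mapping a curriculum schema found in a cheap source environment onto a costlier target environment. The sketch below is a toy illustration under that reading; the parameter names and the mapping are assumptions, not the ACuTE algorithm.

```python
# Toy curriculum-transfer sketch (hypothetical names; not the ACuTE code).

# A curriculum "schema" found in a simple grid world: increasingly hard
# tasks described by abstract parameters.
source_curriculum = [
    {"grid_size": 4, "num_obstacles": 0},
    {"grid_size": 6, "num_obstacles": 2},
    {"grid_size": 8, "num_obstacles": 5},
]


def map_to_target(task):
    """Map abstract grid-world parameters onto a physics-based simulation."""
    return {
        "arena_size_m": task["grid_size"] * 0.5,   # cells -> meters
        "num_obstacles": task["num_obstacles"],
        "obstacle_type": "box_collider",
    }


def train(env_config, policy=None):
    """Stand-in for training; a real system would run an RL algorithm here."""
    print(f"training on {env_config}")
    return policy or {}


policy = None
for task in source_curriculum:           # transfer the schema, task by task
    policy = train(map_to_target(task), policy)
```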

BIPLEX: Creative Problem-Solving by Planning for Experimentation (2022)

Vasanth Sarathy and Matthias Scheutz

We propose a novel problem solving algorithm called BIPLEX that involves hypothesis generation, experimentation, and outcome observation as part of the search for solutions. We introduce BIPLEX through various examples in a baking domain that demonstrate important features of the framework, including its representation of objects in…

Keywords: problem solving, open-world AI, planning, creativity
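
The hypothesis-generation, experimentation, and outcome-observation cycle mentioned above can be sketched in a toy baking-style domain as follows. The domain, the substitution table, and the function names are hypothetical illustrations, not BIPLEX itself.

```python
# Toy hypothesize-experiment-observe loop (illustrative; not BIPLEX itself).

def generate_hypotheses(goal, pantry):
    """Propose candidate substitutions when the standard recipe is blocked."""
    substitutes = {"butter": ["oil", "applesauce"], "egg": ["banana"]}
    missing = [item for item in goal["requires"] if item not in pantry]
    hypotheses = []
    for item in missing:
        for candidate in substitutes.get(item, []):
            hypotheses.append({item: candidate})
    return hypotheses


def run_experiment(hypothesis, pantry):
    """Simulated experiment: observe whether the substitution can be made."""
    return all(candidate in pantry for candidate in hypothesis.values())


def solve(goal, pantry):
    for hypothesis in generate_hypotheses(goal, pantry):
        if run_experiment(hypothesis, pantry):
            return hypothesis          # outcome observed: substitution works
    return None


pantry = {"flour", "sugar", "oil", "egg"}
goal = {"requires": ["flour", "sugar", "butter", "egg"]}
print(solve(goal, pantry))   # -> {'butter': 'oil'}
```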

Analysis of the Future Potential of Autoencoders in Industrial Defect Detection (2022)

Sarah Schneider and Doris Antensteiner and Daniel Soukup and Matthias Scheutz

We investigated the anomaly detection behaviour of three convolutional autoencoder types - a "standard" convolutional autoencoder (CAE), a variational convolutional autoencoder (VAE) and an adversarial convolutional autoencoder (AAE) - by applying them to different visual anomaly detection scenarios. First, we utilized our three…

Keywords: vision, anomaly detection, defect detection
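
The mechanism such autoencoder approaches typically rely on is that a model trained only on defect-free images reconstructs normal inputs well and anomalous ones poorly, so the per-image reconstruction error can serve as an anomaly score. The PyTorch sketch below illustrates that scoring step with a tiny stand-in network; it is not one of the architectures evaluated in the paper.

```python
# Minimal convolutional-autoencoder anomaly scoring sketch (PyTorch).
# Illustrative only; not the models compared in the paper.

import torch
import torch.nn as nn


class TinyCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def anomaly_score(model, images):
    """Per-image mean squared reconstruction error; higher = more anomalous."""
    model.eval()
    with torch.no_grad():
        recon = model(images)
        return ((images - recon) ** 2).mean(dim=(1, 2, 3))


model = TinyCAE()                    # in practice: trained on normal images
batch = torch.rand(8, 1, 64, 64)     # stand-in for inspection images
scores = anomaly_score(model, batch)
flagged = scores > scores.mean() + 2 * scores.std()   # simple threshold
print(flagged)
```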

Autoencoders - A Comparative Analysis in the Realm of Anomaly Detection (2022)

Sarah Schneider and Doris Antensteiner and Daniel Soukup and Matthias Scheutz

We applied convolutional versions of a "standard" autoencoder (CAE), a variational autoencoder (VAE) and an adversarial autoencoder (AAE) to two different publicly available datasets and compared their anomaly detection performances. We used the MNIST dataset as a simple anomaly detection scenario. The CIFAR10 dataset was used to examine…

Keywords: vision, anomaly detection
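
A common protocol for turning MNIST or CIFAR10 into an anomaly-detection benchmark, consistent with the comparison above, is to train on one "normal" class, treat all other classes as anomalies, and evaluate the reconstruction-error scores with ROC-AUC. The snippet below sketches that evaluation step on synthetic scores; it is not the paper's exact setup.

```python
# Evaluating reconstruction-error anomaly scores with ROC-AUC
# (illustrative protocol sketch, not the paper's exact setup).

import numpy as np
from sklearn.metrics import roc_auc_score

# Suppose an autoencoder trained only on the "normal" class produced these
# per-image reconstruction errors on a mixed test set.
rng = np.random.default_rng(0)
normal_errors = rng.normal(loc=0.02, scale=0.005, size=500)     # normal class
anomalous_errors = rng.normal(loc=0.05, scale=0.015, size=500)  # other classes

scores = np.concatenate([normal_errors, anomalous_errors])
labels = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = anomalous

print("ROC-AUC:", roc_auc_score(labels, scores))
```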

An Architecture for Novelty Handling in a Multi-Agent Stochastic Environment: Case Study in Open-World Monopoly (2022)

Tung Thai and Ming Shen and Neeraj Varshney and Sriram Gopalakrishnan and Utkarsh Soni and Chitta Baral and Jivko Sinapov and Matthias Scheutz

The ability of AI agents and architectures to detect and adapt to sudden changes in their environments remains an outstanding challenge. In the context of multi-agent games, the agent may face novel situations where the rules of the game, the available actions, the environment dynamics, the behavior of other agents, as well as the…
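
One generic way an agent can notice the kinds of rule changes described above is to compare observed game outcomes against what its model of the standard rules predicts. The toy check below illustrates that idea for rent payments only; the simplified rule model and the names are hypothetical and are not the architecture presented in the paper.

```python
# Toy expectation-mismatch check for a rule change in a Monopoly-like game
# (hypothetical; not the paper's architecture).

def expected_rent(property_state):
    """Rent the agent expects under a simplified model of the known rules."""
    return property_state["base_rent"] * (2 ** property_state["houses"])


def detect_rule_novelty(observed_payment, property_state, tolerance=0):
    """Flag a novelty when the observed payment deviates from expectation."""
    return abs(observed_payment - expected_rent(property_state)) > tolerance


prop = {"name": "Boardwalk", "base_rent": 50, "houses": 2}
print(detect_rule_novelty(observed_payment=200, property_state=prop))  # False
print(detect_rule_novelty(observed_payment=350, property_state=prop))  # True
```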

NovelGridworlds: A Benchmark Environment for Detecting and Adapting to Novelties in Open Worlds (2021)

Shivam Goel and Gyan Tatiya and Matthias Scheutz and Jivko Sinapov

As researchers are developing methods for detecting and accommodating novelties that will make AI agents more robust to unknown sudden changes in the “open worlds”, there is an increasing need for benchmark environments that allow for the systematic evaluations of the proposed AI techniques. We present “NovelGridworlds”, an OpenAI…
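
Because NovelGridworlds is described as an OpenAI Gym environment, interacting with it should follow the standard Gym loop shown below. The import name and the environment ID in this snippet are assumptions and should be verified against the package's documentation.

```python
# Standard OpenAI Gym interaction loop (pre-0.26 Gym API). The package
# import and the environment ID below are assumptions, not verified
# against the NovelGridworlds release.

import gym
import gym_novel_gridworlds  # hypothetical import name; registers the envs

env = gym.make("NovelGridworld-v0")   # hypothetical environment ID
obs = env.reset()

done = False
while not done:
    action = env.action_space.sample()        # random policy for illustration
    obs, reward, done, info = env.step(action)

env.close()
```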

A Novelty-Centric Agent Architecture for Changing Worlds (2021)

Faizan Muhammad and Vasanth Sarathy and Gyan Tatiya and Shivam Goel and Saurav Gyawali and Mateo Guaman and Jivko Sinapov and Matthias Scheutz

Open-world AI requires artificial agents to cope with novelties that arise during task performance, i.e., they must (1) detect novelties, (2) characterize them, in order to (3) accommodate them, especially in cases where sudden changes to the environment make task accomplishment impossible without utilizing the novelty. We present a…

SPOTTER: Extending Symbolic Planning Operators through Targeted Reinforcement Learning (2021)

Vasanth Sarathy and Daniel Kasenberg and Shivam Goel and Jivko Sinapov and Matthias Scheutz

We developed a new integrated planning and reinforcement learning framework called SPOTTER to enable agents to solve problems unsolvable from their initial planning domain.
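
The idea summarized above, calling out to targeted reinforcement learning when no plan reaches the goal and lifting the learned policy into a new symbolic operator, is sketched below with hypothetical names; this is not the SPOTTER implementation.

```python
# Sketch of extending a symbolic planning domain with a learned operator
# when planning fails (hypothetical names; not the SPOTTER implementation).

def try_plan(operators, state, goal):
    """Stand-in planner: succeeds only if some operator achieves the goal."""
    for name, op in operators.items():
        if goal in op["effects"]:
            return [name]
    return None


def learn_new_operator(state, goal):
    """Stand-in for targeted RL: learn a policy whose termination condition
    achieves the missing subgoal, then lift it into a symbolic operator."""
    return "learned_reach_" + goal, {"effects": [goal], "policy": lambda s: s}


operators = {"chop_tree": {"effects": ["have_wood"], "policy": lambda s: s}}
goal = "have_plank"

plan = try_plan(operators, state={}, goal=goal)
if plan is None:
    name, op = learn_new_operator(state={}, goal=goal)
    operators[name] = op               # the planning domain is now extended
    plan = try_plan(operators, state={}, goal=goal)
print(plan)
```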