Multitasking has become an integral part of work environments, even though people are not well-equipped cognitively to handle numerous concurrent tasks effectively. Systems that support such multitasking may produce better performance and less frustration. However, without understanding the user's internal processes, it is difficult to determine optimal strategies for adapting interfaces, since not all multitasking activity is identical. We describe two experiments leading toward a system that detects cognitive multitasking processes and uses this information as input to an adaptive interface. Using functional near-infrared spectroscopy sensors, we differentiate four cognitive multitasking processes. These states cannot readily be distinguished using behavioral measures such as response time, accuracy, keystrokes, or screen contents. We then present our human-robot system as a proof of concept that uses real-time cognitive state information as input and adapts in response. This prototype system serves as a platform to study interfaces that enable better task switching, interruption management, and multitasking.
@inproceedings{soloveyetal11chi,
  title     = {Sensing Cognitive Multitasking for a Brain-Based Adaptive User Interface},
  author    = {Erin Treacy Solovey and Francine Lalooses and Krysta Chauncey and Douglas Weaver and Margarita Parasi and Matthias Scheutz and Angelo Sassaroli and Sergio Fantini and Paul Schermerhorn and Audrey Girouard and Robert J.K. Jacob},
  year      = {2011},
  month     = {May},
  booktitle = {Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems},
  url       = {https://hrilab.tufts.edu/publications/soloveyetal11chi.pdf}
}