We describe the initial development stage of a cognitive robotic architecture that can assist with the communication and detection of emotions in interactions where some modalities are fully or partially compromised. We hypothesize that the distribution of topics extracted from each sentence in a collection of written text documents, using the Latent Dirichlet Allocation (LDA) generative model, can be associated with measures of emotional valence and arousal. We integrated our model into the DIARC cognitive robotic architecture and demonstrated how a robot can use speech transcriptions to detect positive or negative emotional valence and express it through its facial features.
@inproceedings{valenti19aamasws,
  title     = {When your face and tone of voice don't say it all: Inferring emotional state from word semantics and conversational topics},
  author    = {Andrew P. Valenti and Meia Chita-Tegmark and Theresa Law and Alexander W. Bock and Bradley Oosterveld and Matthias Scheutz},
  year      = {2019},
  month     = {May},
  booktitle = {Workshop on Cognitive Architectures for HRI: Embodied Models of Situated Natural Language Interactions at AAMAS 2019},
  url       = {https://hrilab.tufts.edu/publications/valenti19aamasws.pdf}
}