Interactive Robot Learning for Multimodal Emotion Recognition

Cited by: 24
Authors
Yu, Chuang [1 ]
Tapus, Adriana [1 ]
Affiliation
[1] ENSTA Paris Inst Polytech Paris, Autonomous Syst & Robot Lab, U2IS, 828 Blvd Marechaux, F-91120 Palaiseau, France
Source
SOCIAL ROBOTICS, ICSR 2019 | 2019, Vol. 11876
Keywords
Interactive robot learning; Multimodal emotion recognition; Human-robot interaction;
DOI
10.1007/978-3-030-35888-4_59
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Interaction plays a critical role in learning the skills needed for natural communication. In human-robot interaction (HRI), a robot can receive feedback during the interaction and use it to improve its social abilities. In this context, we propose an interactive robot learning framework that uses multimodal data, namely thermal facial images and human gait data, for online emotion recognition. We also propose a new decision-level fusion method for multimodal classification based on a Random Forest (RF) model. Our hybrid online emotion recognition model focuses on detecting four human emotions (i.e., neutral, happiness, anger, and sadness). After offline training and testing of the hybrid model, the accuracy of the online emotion recognition system was more than 10% lower than that of the offline one. To improve the system, human verbal feedback is injected into the interactive robot learning loop. With the new online emotion recognition system, accuracy increases by 12.5% compared with the online system without interactive robot learning.
Pages: 633 - 642
Page count: 10
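The decision-level fusion described in the abstract can be sketched roughly as follows: one Random Forest per modality, with their class-probability outputs combined into a single decision. The paper's exact fusion rule is not reproduced here, so simple probability averaging is assumed; all feature dimensions and data below are synthetic placeholders, not the authors' thermal/gait features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "happiness", "anger", "sadness"]
rng = np.random.default_rng(0)

# Synthetic stand-ins for the two modalities (illustrative only):
# thermal facial features and gait features for the same samples.
n = 200
X_thermal = rng.normal(size=(n, 16))
X_gait = rng.normal(size=(n, 8))
y = rng.integers(0, len(EMOTIONS), size=n)

# One Random Forest per modality.
rf_thermal = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_thermal, y)
rf_gait = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_gait, y)

def fuse_predict(xt, xg):
    """Decision-level fusion (assumed rule): average the class-probability
    outputs of the two modality-specific forests and take the argmax."""
    p = (rf_thermal.predict_proba(xt) + rf_gait.predict_proba(xg)) / 2.0
    return [EMOTIONS[i] for i in p.argmax(axis=1)]

print(fuse_predict(X_thermal[:3], X_gait[:3]))
```

In an interactive-learning setting like the paper's, the human's verbal feedback on a misclassified sample could be appended to the training set and the forests refit online; that loop is omitted here for brevity.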