Multimodal Emotion Recognition in Response to Videos

Cited by: 457
Authors
Soleymani, Mohammad [1 ]
Pantic, Maja [2 ,3 ]
Pun, Thierry [1 ]
Affiliations
[1] Univ Geneva, Dept Comp Sci, Comp Vis & Multimedia Lab, CH-1227 Carouge, GE, Switzerland
[2] Univ London Imperial Coll Sci Technol & Med, Dept Comp, London SW7 2AZ, England
[3] Univ Twente, Fac Elect Engn Math & Comp Sci, NL-7522 NB Enschede, Netherlands
Funding
European Research Council; Swiss National Science Foundation;
Keywords
Emotion recognition; EEG; pupillary reflex; pattern classification; affective computing; PUPIL LIGHT REFLEX; CLASSIFICATION; OSCILLATIONS; SYSTEMS;
DOI
10.1109/T-AFFC.2011.37
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response, and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. EEG responses and eye gaze data were then recorded from 24 participants while they watched the emotional video clips. Ground truth was defined based on the median arousal and valence scores given to the clips in a preliminary study using an online questionnaire. Based on the participants' responses, three classes were defined for each dimension: calm, medium aroused, and activated for arousal; unpleasant, neutral, and pleasant for valence. One of the three affective labels on either the valence or the arousal dimension was determined by classifying the bodily responses. One-participant-out cross validation was employed to investigate classification performance in a user-independent setting. The best classification accuracies of 68.5 percent for the three valence labels and 76.4 percent for the three arousal labels were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and does not underperform for valence assessments.
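As a concrete illustration of the evaluation protocol described in the abstract, the sketch below runs one-participant-out cross validation with a support vector machine on concatenated (feature-level fused) EEG and eye-gaze features. This is a minimal sketch in Python with scikit-learn under stated assumptions: the feature names, dimensionalities, and randomly generated data are placeholders for illustration, not the authors' actual features or pipeline.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_participants, n_clips = 24, 20          # as in the study: 24 participants, 20 clips
n_trials = n_participants * n_clips

# Hypothetical per-trial features; "modality fusion" here is simple
# feature-level concatenation of EEG and pupillary/gaze features.
eeg_features = rng.normal(size=(n_trials, 32))   # assumed dimensionality
eye_features = rng.normal(size=(n_trials, 8))    # assumed dimensionality
X = np.hstack([eeg_features, eye_features])

y = rng.integers(0, 3, size=n_trials)            # 3 classes per dimension (e.g., arousal)
groups = np.repeat(np.arange(n_participants), n_clips)

# One-participant-out: train on 23 participants, test on the held-out one.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean one-participant-out accuracy: {np.mean(accuracies):.3f}")

On random features this hovers around the three-class chance level of roughly 33 percent, which is the baseline against which the reported 68.5 percent (valence) and 76.4 percent (arousal) accuracies should be read.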
Pages: 211-223
Page count: 13
Related Papers
(50 items in total)
  • [31] Emotion Recognition Using Multimodal Deep Learning
    Liu, Wei
    Zheng, Wei-Long
    Lu, Bao-Liang
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT II, 2016, 9948: 521-529
  • [32] Multimodal Emotion Recognition with Auxiliary Sentiment Information
    Wu, L.
    Liu, Q.
    Zhang, D.
    Wang, J.
    Li, S.
    Zhou, G.
    Beijing Daxue Xuebao (Ziran Kexue Ban)/Acta Scientiarum Naturalium Universitatis Pekinensis, 2020, 56 (01): 75-81
  • [33] Correlated Attention Networks for Multimodal Emotion Recognition
    Qiu, Jie-Lin
    Li, Xiao-Yu
    Hu, Kai
    PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2018: 2656-2660
  • [34] Multimodal Emotion Recognition Based on Feature Fusion
    Xu, Yurui
    Wu, Xiao
    Su, Hang
    Liu, Xiaorui
    2022 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2022), 2022: 7-11
  • [35] MULTIMODAL TRANSFORMER FUSION FOR CONTINUOUS EMOTION RECOGNITION
    Huang, Jian
    Tao, Jianhua
    Liu, Bin
    Lian, Zheng
    Niu, Mingyue
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 3507-3511
  • [36] Multimodal sentiment and emotion recognition in hyperbolic space
    Arano, Keith April
    Orsenigo, Carlotta
    Soto, Mauricio
    Vercellis, Carlo
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 184
  • [37] Multimodal Emotion Recognition for Human Robot Interaction
    Adiga, Sharvari
    Vaishnavi, D. V.
    Saxena, Suchitra
    Tripathi, Shikha
    2020 7TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE (ISCMI 2020), 2020: 197-203
  • [38] Leveraging Label Information for Multimodal Emotion Recognition
    Wang, Peiying
    Zeng, Sunlu
    Chen, Junqing
    Fan, Lu
    Chen, Meng
    Wu, Youzheng
    He, Xiaodong
    INTERSPEECH 2023, 2023: 4219-4223
  • [39] Multimodal Emotion Recognition With Temporal and Semantic Consistency
    Chen, Bingzhi
    Cao, Qi
    Hou, Mixiao
    Zhang, Zheng
    Lu, Guangming
    Zhang, David
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29: 3592-3603
  • [40] Multimodal Emotion Recognition for AVEC 2016 Challenge
    Povolny, Filip
    Matejka, Pavel
    Hradis, Michal
    Popkova, Anna
    Otrusina, Lubomir
    Smrz, Pavel
    PROCEEDINGS OF THE 6TH INTERNATIONAL WORKSHOP ON AUDIO/VISUAL EMOTION CHALLENGE (AVEC'16), 2016: 75-81