Multimodal Emotion Recognition in Response to Videos

Cited by: 457
Authors
Soleymani, Mohammad [1 ]
Pantic, Maja [2 ,3 ]
Pun, Thierry [1 ]
Affiliations
[1] Univ Geneva, Dept Comp Sci, Comp Vis & Multimedia Lab, CH-1227 Carouge, GE, Switzerland
[2] Univ London Imperial Coll Sci Technol & Med, Dept Comp, London SW7 2AZ, England
[3] Univ Twente, Fac Elect Engn Math & Comp Sci, NL-7522 NB Enschede, Netherlands
Funding
European Research Council; Swiss National Science Foundation;
Keywords
Emotion recognition; EEG; pupillary reflex; pattern classification; affective computing; PUPIL LIGHT REFLEX; CLASSIFICATION; OSCILLATIONS; SYSTEMS;
DOI
10.1109/T-AFFC.2011.37
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. Then, EEG responses and eye gaze data were recorded from 24 participants while watching emotional video clips. Ground truth was defined based on the median arousal and valence scores given to clips in a preliminary study using an online questionnaire. Based on the participants' responses, three classes for each dimension were defined. The arousal classes were calm, medium aroused, and activated and the valence classes were unpleasant, neutral, and pleasant. One of the three affective labels of either valence or arousal was determined by classification of bodily responses. A one-participant-out cross validation was employed to investigate the classification performance in a user-independent approach. The best classification accuracies of 68.5 percent for three labels of valence and 76.4 percent for three labels of arousal were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and do not underperform for valence assessments.
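The abstract's evaluation protocol — a one-participant-out cross-validation with an SVM over fused features — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature dimensionality, SVM kernel, and use of scikit-learn are assumptions, and the data here is synthetic.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: 24 participants x 20 clips, with fused
# EEG + pupillary feature vectors (dimensionality is illustrative).
n_participants, n_clips, n_features = 24, 20, 32
X = rng.normal(size=(n_participants * n_clips, n_features))
y = rng.integers(0, 3, size=n_participants * n_clips)   # 3 arousal classes
groups = np.repeat(np.arange(n_participants), n_clips)  # participant IDs

# One-participant-out cross-validation: each fold holds out every trial
# from one participant, giving a user-independent performance estimate.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy over {len(scores)} folds: {scores.mean():.3f}")
```

With 24 participants this yields 24 folds; on real fused EEG/gaze features the paper reports up to 76.4 percent three-class arousal accuracy under this scheme.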
Pages: 211 - 223
Page count: 13