Exploring the Contextual Factors Affecting Multimodal Emotion Recognition in Videos

Cited by: 19
Authors
Bhattacharya, Prasanta [1 ]
Gupta, Raj Kumar [1 ]
Yang, Yinping [1 ]
Affiliations
[1] Agcy Sci Technol & Res STAR, Inst High Performance Comp, Singapore 138632, Singapore
Keywords
Emotion recognition; Videos; Visualization; Feature extraction; Physiology; High performance computing; Distance measurement; Affective computing; affect sensing and analysis; modelling human emotions; multi-modal recognition; sentiment analysis; technology & devices for affective computing; SEX-DIFFERENCES; FACIAL EXPRESSIONS; LANGUAGES; SELECTION; MODEL;
DOI
10.1109/TAFFC.2021.3071503
CLC classification number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Emotional expressions form a key part of user behavior on today's digital platforms. While multimodal emotion recognition techniques are gaining research attention, there is a lack of deeper understanding of how visual and non-visual features can be used to better recognize emotions in certain contexts, but not others. This study analyzes the interplay between the effects of multimodal emotion features derived from facial expressions, tone, and text in conjunction with two key contextual factors: (i) the gender of the speaker, and (ii) the duration of the emotional episode. Using a large public dataset of 2,176 manually annotated YouTube videos, we found that while multimodal features consistently outperformed bimodal and unimodal features, their performance varied significantly across different emotion, gender, and duration contexts. Multimodal features were particularly advantageous for male speakers in recognizing most emotions. Furthermore, multimodal features performed better for shorter videos than for longer ones in recognizing neutrality and happiness, but not sadness and anger. These findings offer new insights towards the development of more context-aware emotion recognition and empathetic systems.
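The unimodal-versus-bimodal-versus-multimodal comparison described in the abstract can be sketched as a simple feature-level fusion baseline. The snippet below is a minimal illustration only, not the study's actual pipeline: the feature dimensions, the random stand-in extractors, and the function names (`extract_features`, `fuse`) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(clip_id):
    """Stand-in extractors: return (face, tone, text) feature vectors.
    Real systems would compute e.g. facial action-unit statistics,
    prosodic/acoustic descriptors, and lexical sentiment features."""
    face = rng.normal(size=8)   # placeholder facial features
    tone = rng.normal(size=6)   # placeholder acoustic features
    text = rng.normal(size=10)  # placeholder textual features
    return face, tone, text

def fuse(face, tone, text, modalities=("face", "tone", "text")):
    """Feature-level fusion: concatenate the selected modality vectors."""
    parts = {"face": face, "tone": tone, "text": text}
    return np.concatenate([parts[m] for m in modalities])

face, tone, text = extract_features("clip_001")
unimodal = fuse(face, tone, text, modalities=("face",))
bimodal = fuse(face, tone, text, modalities=("face", "tone"))
multimodal = fuse(face, tone, text)

print(unimodal.shape, bimodal.shape, multimodal.shape)  # (8,) (14,) (24,)
```

The fused vectors would then feed any downstream classifier, allowing accuracy to be compared per modality subset and, as in the study, stratified by speaker gender and clip duration.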
Pages: 1547-1557
Page count: 11