Feature-level and Model-level Audiovisual Fusion for Emotion Recognition in the Wild

Citations: 26
Authors
Cai, Jie [1 ]
Meng, Zibo [2 ]
Khan, Ahmed Shehab [1 ]
Li, Zhiyuan [1 ]
O'Reilly, James [1 ]
Han, Shizhong [3 ]
Liu, Ping [4 ]
Chen, Min [5 ]
Tong, Yan [1 ]
Affiliations
[1] Univ South Carolina, Dept Comp Sci & Engn, Columbia, SC 29208 USA
[2] Innopeak Technol Inc, Palo Alto, CA USA
[3] 12 Sigma Technol, San Diego, CA USA
[4] JD Com Inc, Beijing, Peoples R China
[5] Univ Washington, Bothell, WA USA
Funding
US National Science Foundation;
Keywords
AUDIO;
DOI
10.1109/MIPR.2019.00089
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Emotion recognition plays an important role in human-computer interaction (HCI) and has been extensively studied for decades. Although tremendous improvements have been achieved for posed expressions, recognizing human emotions in "close-to-real-world" environments remains a challenge. In this paper, we propose two strategies to fuse information extracted from different modalities, i.e., audio and visual. Specifically, we utilize LBP-TOP, an ensemble of CNNs, and a bi-directional LSTM (BLSTM) to extract features from the visual channel, and the OpenSmile toolkit to extract features from the audio channel. Two kinds of fusion methods, i.e., feature-level fusion and model-level fusion, are developed to utilize the information extracted from the two channels. Experimental results on the EmotiW2018 AFEW dataset show that the proposed fusion methods significantly outperform the baseline methods and achieve performance comparable to the state-of-the-art methods, with model-level fusion performing better when one of the channels fails entirely.
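The two fusion strategies contrasted in the abstract can be illustrated with a minimal sketch. This is not the paper's actual architecture: the feature dimensions, the random linear classifiers, and the mixing weight `alpha` are illustrative placeholders standing in for the CNN/BLSTM visual descriptors, the OpenSmile audio functionals, and the learned models.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-clip features (dimensions are illustrative only).
visual_feat = rng.normal(size=256)  # stand-in for a CNN/BLSTM visual descriptor
audio_feat = rng.normal(size=128)   # stand-in for OpenSmile audio functionals

n_classes = 7  # the seven AFEW emotion categories

# Feature-level fusion: concatenate both modalities, classify jointly.
W_joint = rng.normal(size=(n_classes, 256 + 128))
p_feature_level = softmax(W_joint @ np.concatenate([visual_feat, audio_feat]))

# Model-level fusion: classify each modality separately, then combine
# the class-probability outputs (here, a weighted average).
W_vis = rng.normal(size=(n_classes, 256))
W_aud = rng.normal(size=(n_classes, 128))
p_vis = softmax(W_vis @ visual_feat)
p_aud = softmax(W_aud @ audio_feat)
alpha = 0.6  # hypothetical modality weight
p_model_level = alpha * p_vis + (1 - alpha) * p_aud

print(p_feature_level.argmax(), p_model_level.argmax())
```

The sketch also shows why model-level fusion can degrade more gracefully: if one channel fails, its classifier's output can simply be down-weighted (or dropped) via `alpha`, whereas the joint classifier in feature-level fusion always sees the corrupted features.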
Pages: 443 - 448
Page count: 6
Related Papers
50 records in total
  • [1] Combining feature-level and decision-level fusion in a hierarchical classifier for emotion recognition in the wild
    Sun, Bo
    Li, Liandong
    Wu, Xuewen
    Zuo, Tian
    Chen, Ying
    Zhou, Guoyan
    He, Jun
    Zhu, Xiaoming
    [J]. JOURNAL ON MULTIMODAL USER INTERFACES, 2016, 10 (02) : 125 - 137
  • [3] Feature-Level Fusion of Multimodal Physiological Signals for Emotion Recognition
    Chen, Jing
    Ru, Bin
    Xu, Lixin
    Moore, Philip
    Su, Yun
    [J]. PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2015, : 395 - 399
  • [4] An Investigation of a Feature-Level Fusion for Noisy Speech Emotion Recognition
    Sekkate, Sara
    Khalil, Mohammed
    Adib, Abdellah
    Ben Jebara, Sofia
    [J]. COMPUTERS, 2019, 8 (04)
  • [5] Multimodal Emotion Recognition Framework Using a Decision-Level Fusion and Feature-Level Fusion Approach
    Devi, C. Akalya
    Renuka, D.
    [J]. IETE JOURNAL OF RESEARCH, 2023, 69 (12) : 8909 - 8920
  • [6] RPROP Algorithm in Feature-Level Fusion Recognition
    Liu Hui-min
    Li Xiang
    Wang Hong-qiang
    Fu Yao-wen
    Shen Rong-jun
    [J]. 2008 CHINESE CONTROL AND DECISION CONFERENCE, VOLS 1-11, 2008, : 764 - +
  • [7] Action Recognition Based on Feature-level Fusion
    Cheng, Wanli
    Chen, Enqing
    [J]. TENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2018), 2018, 10806
  • [8] An efficient model-level fusion approach for continuous affect recognition from audiovisual signals
    Pei, Ercheng
    Jiang, Dongmei
    Sahli, Hichem
    [J]. NEUROCOMPUTING, 2020, 376 : 42 - 53
  • [9] Speech emotion classification using feature-level and classifier-level fusion
    Mishra, Siba Prasad
    Warule, Pankaj
    Deb, Suman
    [J]. EVOLVING SYSTEMS, 2024, 15 (02) : 541 - 554
  • [10] Combined CNN LSTM with attention for speech emotion recognition based on feature-level fusion
    Liu, Yanlin
    Chen, Aibin
    Zhou, Guoxiong
    Yi, Jizheng
    Xiang, Jin
    Wang, Yaru
    [J]. Multimedia Tools and Applications, 2024, 83 (21) : 59839 - 59859