Feature-level and Model-level Audiovisual Fusion for Emotion Recognition in the Wild

Cited by: 26
Authors
Cai, Jie [1 ]
Meng, Zibo [2 ]
Khan, Ahmed Shehab [1 ]
Li, Zhiyuan [1 ]
O'Reilly, James [1 ]
Han, Shizhong [3 ]
Liu, Ping [4 ]
Chen, Min [5 ]
Tong, Yan [1 ]
Affiliations
[1] Univ South Carolina, Dept Comp Sci & Engn, Columbia, SC 29208 USA
[2] Innopeak Technol Inc, Palo Alto, CA USA
[3] 12 Sigma Technol, San Diego, CA USA
[4] JD Com Inc, Beijing, Peoples R China
[5] Univ Washington, Bothell, WA USA
Funding
U.S. National Science Foundation
Keywords
AUDIO;
DOI
10.1109/MIPR.2019.00089
Chinese Library Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Emotion recognition plays an important role in human-computer interaction (HCI) and has been extensively studied for decades. Although tremendous improvements have been achieved for posed expressions, recognizing human emotions in "close-to-real-world" environments remains a challenge. In this paper, we propose two strategies to fuse information extracted from different modalities, i.e., audio and visual. Specifically, we utilize LBP-TOP, an ensemble of CNNs, and a bi-directional LSTM (BLSTM) to extract features from the visual channel, and the OpenSmile toolkit to extract features from the audio channel. Two kinds of fusion methods, i.e., feature-level fusion and model-level fusion, were developed to utilize the information extracted from the two channels. Experimental results on the EmotiW2018 AFEW dataset show that the proposed fusion methods significantly outperform the baseline methods and achieve performance comparable to the state-of-the-art methods, with model-level fusion performing better when one of the channels fails completely.
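The two fusion strategies in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature dimensions, the class count (seven, matching AFEW's emotion categories), the probability values, and the weight `w` are all assumed for demonstration.

```python
import numpy as np

# Hypothetical per-sample modality features (dimensions are illustrative).
visual_feat = np.random.rand(256)  # e.g., a pooled visual descriptor
audio_feat = np.random.rand(128)   # e.g., OpenSmile audio functionals

# Feature-level fusion: concatenate modality features into one vector,
# which would then feed a single joint classifier.
fused = np.concatenate([visual_feat, audio_feat])  # shape (384,)

# Model-level fusion: each modality's classifier outputs class probabilities,
# combined here by a weighted average (an assumed combination rule).
visual_probs = np.array([0.10, 0.60, 0.10, 0.05, 0.05, 0.05, 0.05])
audio_probs = np.array([0.20, 0.30, 0.20, 0.10, 0.10, 0.05, 0.05])
w = 0.7  # assumed visual-channel weight
final_probs = w * visual_probs + (1 - w) * audio_probs
prediction = int(np.argmax(final_probs))
```

Model-level fusion degrades more gracefully when one channel fails entirely, since the surviving channel's classifier still produces a usable probability distribution; feature-level fusion hands the joint classifier a partially corrupted input vector.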
Pages: 443-448
Page count: 6