Audiovisual Facial Action Unit Recognition using Feature Level Fusion

Cited by: 4
Authors
Meng, Zibo [1 ]
Han, Shizhong [1 ]
Chen, Min [2 ]
Tong, Yan [1 ]
Affiliations
[1] University of South Carolina, Columbia, SC 29208, USA
[2] University of Washington, Bothell, WA, USA
Funding
National Science Foundation (USA);
Keywords
Action Units; Convolutional Neural Network; Facial Action Unit Recognition; Facial Activity; Feature-Level Information Fusion;
DOI
10.4018/IJMDEM.2016010104
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Recognizing facial actions is challenging, especially when they are accompanied by speech. Instead of relying solely on the visual channel, this work exploits information from both the visual and audio channels to recognize speech-related facial action units (AUs). Two feature-level fusion methods are proposed. The first is based on hand-crafted visual features; the second uses visual features learned by a deep convolutional neural network (CNN). In both methods, features are extracted independently from the visual and audio channels and then aligned to handle the difference in time scales and the time shift between the two signals. These temporally aligned features are integrated via feature-level fusion for AU recognition. Experimental results on a new audiovisual AU-coded dataset demonstrate that both fusion methods outperform their visual-only counterparts in recognizing speech-related AUs. The improvement is more pronounced when the facial images are occluded, since occlusion does not affect the audio channel.
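The abstract describes two steps: temporally aligning features that live on different time scales, then concatenating them per frame (feature-level fusion). A minimal sketch of that pipeline is shown below; the paper does not specify its alignment mechanism, so linear interpolation of audio features onto video frame timestamps, the `lag` compensation knob, the feature dimensions (128 visual, 13 audio), and the frame rates (30 fps video, 100 Hz audio) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def align_and_fuse(visual_feats, visual_times, audio_feats, audio_times, lag=0.0):
    """Align audio features to the video frame times, then concatenate
    per frame (feature-level fusion).

    visual_feats: (Tv, Dv) array, one row of features per video frame
    audio_feats:  (Ta, Da) array, one row of features per audio frame
    *_times:      timestamps in seconds for each row
    lag:          assumed audio-visual time shift in seconds (hypothetical knob)
    """
    # Linearly interpolate each audio feature dimension onto the
    # (lag-compensated) video timestamps to bridge the two time scales.
    aligned_audio = np.stack(
        [np.interp(visual_times + lag, audio_times, audio_feats[:, d])
         for d in range(audio_feats.shape[1])],
        axis=1)
    # Feature-level fusion: concatenate the aligned modalities per frame.
    return np.concatenate([visual_feats, aligned_audio], axis=1)

# Toy example: 30 fps video features vs. 100 Hz audio features (e.g. MFCCs).
tv = np.arange(0, 1, 1 / 30)   # 30 video frames over one second
ta = np.arange(0, 1, 1 / 100)  # 100 audio frames over one second
fused = align_and_fuse(np.random.randn(len(tv), 128), tv,
                       np.random.randn(len(ta), 13), ta)
print(fused.shape)  # (30, 141)
```

The fused matrix has one row per video frame, so any per-frame AU classifier can consume it directly; swapping the hand-crafted visual features for CNN activations only changes `visual_feats`, not the fusion step.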
Pages: 60-76 (17 pages)