Multi-attention based Deep Neural Network with hybrid features for Dynamic Sequential Facial Expression Recognition

Cited: 26
Authors
Sun, Xiao [1 ]
Xia, Pingping [1 ]
Ren, Fuji [1 ]
Affiliations
[1] Hefei University of Technology, School of Computer and Information, Hefei 230009, People's Republic of China
Keywords
Dynamic sequence facial expression recognition; FACS; CNNs; Attention mechanism; Local binary patterns; Sequences; Image; Face
DOI
10.1016/j.neucom.2019.11.127
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In interpersonal communication, facial expression is an important way to convey one's emotions. To enable computers to understand facial expressions as humans do, researchers have devoted considerable time and effort to the problem. To date, however, most work on dynamic-sequence facial expression recognition fails to exploit the combined advantages of shallow features (prior knowledge) and deep features (high-level semantics). This paper therefore implements a dynamic-sequence facial expression recognition system that integrates shallow and deep features through attention mechanisms. To extract the shallow features, an Attention Shallow Model (ASModel) is proposed that uses the relative positions of facial landmarks and the texture characteristics of local facial regions to describe the Action Units of the Facial Action Coding System (FACS). Exploiting the strength of deep convolutional neural networks in expressing high-level features, an Attention Deep Model (ADModel) is also designed to extract deep features from sequences of facial images. Finally, the ASModel and the ADModel are integrated into a Multi-attention Shallow and Deep Model (MSDModel) to perform dynamic-sequence facial expression recognition. Three kinds of attention mechanisms are introduced: Self-Attention (SA), Weight-Attention (WA), and Convolution-Attention (CA). We verify our dynamic expression recognition system on three publicly available databases, CK+, MMI, and Oulu-CASIA, and obtain performance superior to other state-of-the-art results. (c) 2020 Elsevier B.V. All rights reserved.
Pages: 378-389
Page count: 12
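
As a rough illustration of the pipeline the abstract describes, the sketch below fuses a shallow per-frame feature stream with a deep per-frame feature stream using frame-level self-attention. The paper's actual MSDModel architecture is not specified in this record, so every name, layer size, and the fusion-by-concatenation strategy here (FrameSelfAttention, HybridExpressionClassifier, shallow_dim=136 as 68 landmark x/y coordinates, deep_dim=512) is an assumption made for illustration, not the authors' implementation.

# Minimal, hypothetical sketch of attention-based shallow/deep feature
# fusion for sequence facial expression recognition. All dimensions and
# module names are illustrative assumptions, not the published MSDModel.
import torch
import torch.nn as nn

class FrameSelfAttention(nn.Module):
    """Self-Attention (SA) over the frames of an expression sequence:
    each frame's feature vector is re-weighted by its similarity to the
    other frames, so informative (e.g. peak-expression) frames can
    dominate the pooled clip representation."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, frames, dim)
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = torch.softmax(q @ k.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
        return (attn @ v).mean(dim=1)  # pooled clip feature: (batch, dim)

class HybridExpressionClassifier(nn.Module):
    """Hypothetical fusion of a shallow stream (e.g. landmark geometry +
    local texture descriptors) and a deep CNN stream, attended
    separately and concatenated before classification."""
    def __init__(self, shallow_dim=136, deep_dim=512, n_classes=7):
        super().__init__()
        self.shallow_attn = FrameSelfAttention(shallow_dim)
        self.deep_attn = FrameSelfAttention(deep_dim)
        self.classifier = nn.Linear(shallow_dim + deep_dim, n_classes)

    def forward(self, shallow_seq, deep_seq):
        fused = torch.cat([self.shallow_attn(shallow_seq),
                           self.deep_attn(deep_seq)], dim=-1)
        return self.classifier(fused)

# Toy usage: a batch of 2 sequences, 16 frames each.
model = HybridExpressionClassifier()
logits = model(torch.randn(2, 16, 136), torch.randn(2, 16, 512))
print(logits.shape)  # torch.Size([2, 7]), one score per expression class

The design choice sketched here, attending over each stream separately before concatenation, is one plausible reading of "integrating the ASModel and the ADModel"; the paper may instead fuse earlier or use its Weight-Attention and Convolution-Attention variants at this step.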