Facial Expression Recognition via Deep Action Units Graph Network Based on Psychological Mechanism

Cited by: 46
Authors
Liu, Yang [1]
Zhang, Xingming [1]
Lin, Yubei [2]
Wang, Haoxiang [1]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[2] South China Univ Technol, Sch Software Engn, Guangzhou 510006, Peoples R China
Keywords
Psychology; Feature extraction; Gold; Face recognition; Task analysis; Correlation; Face; Action units (AUs); deep graph-based network; facial expression recognition (FER); facial graph representation; psychological mechanism; FACE;
DOI
10.1109/TCDS.2019.2917711
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Facial expression recognition (FER) is currently a very active research field in cognitive psychology and artificial intelligence. In this paper, an innovative FER algorithm called the deep action units graph network (DAUGN) is proposed based on psychological mechanisms. First, a segmentation method is designed to divide the face into small key areas, which are then converted into corresponding action unit (AU)-related facial expression regions. Second, the local appearance features of these critical regions are extracted for further AU analysis. Then, an AU facial graph is constructed to represent expressions by taking the AU-related regions as vertices and the distances between pairs of landmarks as edges. Finally, the adjacency matrices of the facial graph are fed into a graph-based convolutional neural network to combine local-appearance and global-geometry information, which greatly improves FER performance. Experiments and comparisons on the CK+, MMI, and SFEW data sets show that DAUGN achieves more competitive results than several other popular approaches.
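To make the graph-construction step concrete, below is a minimal sketch in PyTorch/NumPy of the two graph-related ideas the abstract describes: building an adjacency matrix from pairwise landmark distances and passing per-region features through one graph-convolution layer. This is not the authors' DAUGN implementation; the function name build_facial_graph, the exponential distance-to-similarity weighting, the one-landmark-per-region simplification, and all dimensions are illustrative assumptions.

```python
# Illustrative sketch only; NOT the paper's DAUGN code.
import numpy as np
import torch
import torch.nn as nn

def build_facial_graph(landmarks: np.ndarray) -> np.ndarray:
    """Build a weighted adjacency matrix from 2-D facial landmarks.

    Per the abstract, vertices are AU-related regions (approximated here
    by one landmark per region) and edges come from pairwise landmark
    distances. The exponential weighting below is an assumed choice that
    maps smaller distances to larger edge weights.
    """
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # pairwise Euclidean distances
    adj = np.exp(-dist / (dist.std() + 1e-8))     # closer regions -> stronger edges
    np.fill_diagonal(adj, 0.0)                    # no self-loops
    return adj

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} A D^{-1/2} H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization of the adjacency matrix.
        deg = adj.sum(dim=-1)
        d_inv_sqrt = torch.diag(deg.clamp(min=1e-8).pow(-0.5))
        a_norm = d_inv_sqrt @ adj @ d_inv_sqrt
        return torch.relu(a_norm @ self.linear(h))

# Usage with dummy data: 24 AU-related vertices, 128-d appearance features.
landmarks = np.random.rand(24, 2).astype(np.float32)
adj = torch.from_numpy(build_facial_graph(landmarks).astype(np.float32))
features = torch.randn(24, 128)   # placeholder per-region appearance features
layer = SimpleGCNLayer(128, 64)
out = layer(features, adj)        # -> shape (24, 64)
```

In the paper's full pipeline, the vertex features would be the local appearance features extracted from the AU-related regions rather than random tensors, and multiple graph layers would feed an expression classifier.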
Pages: 311-322
Number of pages: 12