Multimodal Attention Dynamic Fusion Network for Facial Micro-Expression Recognition

Cited by: 1
Authors
Yang, Hongling [1 ]
Xie, Lun [2 ]
Pan, Hang [1 ]
Li, Chiqin [2 ]
Wang, Zhiliang [2 ]
Zhong, Jialiang [3 ]
Affiliations
[1] Changzhi Univ, Dept Comp Sci, Changzhi 046011, Peoples R China
[2] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing 100083, Peoples R China
[3] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Funding
National Key R&D Program of China; Beijing Natural Science Foundation
Keywords
micro-expression recognition; learnable class token; dynamic fusion;
D O I
10.3390/e25091246
Chinese Library Classification
O4 [Physics]
Discipline Code
0702
Abstract
Emotional changes in facial micro-expressions arise from combinations of action units. Prior research has shown that action units can serve as auxiliary data to improve facial micro-expression recognition. Most existing work attempts to fuse image features with action unit information but ignores the impact of action units on the facial image feature-extraction process. This paper therefore proposes a local detail feature enhancement model based on a multimodal attention dynamic fusion network (MADFN) for micro-expression recognition. The method uses a masked autoencoder with a learnable class token to remove local regions with low emotional expressiveness from micro-expression images. An action-unit dynamic fusion module then fuses action-unit representations into the image features to improve their latent representational power. The proposed model is evaluated on the SMIC, CASME II, and SAMM datasets and their 3DB-Combined composite. Experimental results show that it achieves competitive accuracy of 81.71%, 82.11%, and 77.21% on SMIC, CASME II, and SAMM, respectively, demonstrating that MADFN helps improve the discriminability of facial emotional features.
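The abstract does not specify how the action-unit dynamic fusion module combines the two modalities. A minimal sketch of one common gated-fusion formulation is shown below; all weights, dimensions, and the gating scheme itself are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_fusion(img_feat, au_feat, W_gate, W_au):
    """Gated fusion of image and action-unit (AU) features.

    A per-dimension gate, computed from the concatenated features,
    decides how much each modality contributes to the fused vector.
    (Hypothetical formulation; the paper's module may differ.)
    """
    au_proj = au_feat @ W_au  # project AU vector into image-feature space
    gate = sigmoid(np.concatenate([img_feat, au_proj]) @ W_gate)
    return gate * img_feat + (1.0 - gate) * au_proj

# Toy dimensions and random weights for illustration only.
d_img, d_au = 8, 4
img = rng.standard_normal(d_img)
au = rng.standard_normal(d_au)
W_au = rng.standard_normal((d_au, d_img))
W_gate = rng.standard_normal((2 * d_img, d_img))

fused = dynamic_fusion(img, au, W_gate, W_au)
print(fused.shape)  # prints "(8,)"
```

Because the gate is input-dependent, the mixing ratio between image and AU evidence varies per sample, which is the general sense in which such fusion is "dynamic" rather than a fixed weighted sum.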
Pages: 18
Related Papers (50 total)
  • [22] Facial micro-expression recognition based on the fusion of deep learning and enhanced optical flow
    Li, Qiuyu
    Zhan, Shu
    Xu, Liangfeng
    Wu, Congzhong
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (20) : 29307 - 29322
  • [23] AU-assisted Graph Attention Convolutional Network for Micro-Expression Recognition
    Xie, Hong-Xia
    Lo, Ling
    Shuai, Hong-Han
    Cheng, Wen-Huang
    [J]. MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 2871 - 2880
  • [24] Micro-expression recognition from local facial regions
    Aouayeb, Mouath
    Hamidouche, Wassim
    Soladie, Catherine
    Kpalma, Kidiyo
    Seguier, Renaud
    [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2021, 99
  • [25] Facial micro-expression recognition: A machine learning approach
    Adegun, Iyanu Pelumi
    Vadapalli, Hima Bindu
    [J]. SCIENTIFIC AFRICAN, 2020, 8
  • [26] Multi-channel Capsule Network for Micro-expression Recognition with Multiscale Fusion
    Xie, Zhihua
    Fan, Jiawei
    Cheng, Shijia
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (31) : 76833 - 76850
  • [27] Micro-expression recognition with attention mechanism and region enhancement
    Wang, Yi
    Zheng, Shixin
    Sun, Xiao
    Guo, Dan
    Lang, Junjie
    [J]. MULTIMEDIA SYSTEMS, 2023, 29 (05) : 3095 - 3103
  • [29] Micro-expression recognition based on differential feature fusion
    Shang, Ziyang
    Wang, Penghai
    Li, Xinfu
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (04) : 11111 - 11126
  • [30] A cascaded spatiotemporal attention network for dynamic facial expression recognition
    Ye, Yaoguang
    Pan, Yongqi
    Liang, Yan
    Pan, Jiahui
    [J]. APPLIED INTELLIGENCE, 2023, 53 (05) : 5402 - 5415