Multimodal Attention Dynamic Fusion Network for Facial Micro-Expression Recognition

Cited by: 1
Authors
Yang, Hongling [1 ]
Xie, Lun [2 ]
Pan, Hang [1 ]
Li, Chiqin [2 ]
Wang, Zhiliang [2 ]
Zhong, Jialiang [3 ]
Affiliations
[1] Changzhi Univ, Dept Comp Sci, Changzhi 046011, Peoples R China
[2] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing 100083, Peoples R China
[3] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Funding
Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
micro-expression recognition; learnable class token; dynamic fusion;
DOI
10.3390/e25091246
Chinese Library Classification (CLC)
O4 [Physics];
Discipline Classification Code
0702;
Abstract
Emotional changes in facial micro-expressions arise from combinations of action units. Researchers have shown that action units can serve as auxiliary data to improve facial micro-expression recognition, and most existing works attempt to fuse image features with action unit information. However, these works ignore the influence of action units on the facial image feature extraction process itself. Therefore, this paper proposes a local detail feature enhancement model based on a multimodal attention dynamic fusion network (MADFN) for micro-expression recognition. The method uses a masked autoencoder with a learnable class token to remove local regions of micro-expression images that carry little emotional information. An action unit dynamic fusion module then fuses the action unit representation into the image features to improve their latent representation ability. The proposed model is evaluated against state-of-the-art methods on the SMIC, CASME II, and SAMM datasets and on their combined 3DB-Combined dataset. The experimental results show that the proposed model achieves competitive performance, with accuracy rates of 81.71%, 82.11%, and 77.21% on SMIC, CASME II, and SAMM, respectively, demonstrating that the MADFN model helps improve the discrimination of emotional features in facial images.
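The abstract describes two mechanisms: a learnable class token used by a masked autoencoder to discard weakly expressive local regions, and a dynamic fusion of action unit (AU) representations with image features. The following PyTorch sketch illustrates one possible reading of these two ideas; the module names, dimensions, masking ratio, and gated fusion formulation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): class-token-based patch masking
# followed by gated dynamic fusion of an AU vector with the image feature.
import torch
import torch.nn as nn


class ClassTokenMasking(nn.Module):
    """Score patch tokens against a learnable class token and keep the top-k."""

    def __init__(self, dim: int, keep_ratio: float = 0.75):
        super().__init__()
        self.cls_token = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.keep_ratio = keep_ratio

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, dim)
        b, n, d = patch_tokens.shape
        cls = self.cls_token.expand(b, -1, -1)                    # (b, 1, d)
        scores = (cls @ patch_tokens.transpose(1, 2)).squeeze(1)  # (b, n) similarity to class token
        k = max(1, int(n * self.keep_ratio))
        idx = scores.topk(k, dim=1).indices                       # most expressive patches
        idx = idx.unsqueeze(-1).expand(-1, -1, d)
        return patch_tokens.gather(1, idx)                        # (b, k, d) kept tokens


class AUDynamicFusion(nn.Module):
    """Fuse an AU representation into the image feature with a learned gate."""

    def __init__(self, img_dim: int, au_dim: int):
        super().__init__()
        self.au_proj = nn.Linear(au_dim, img_dim)
        self.gate = nn.Sequential(nn.Linear(2 * img_dim, img_dim), nn.Sigmoid())

    def forward(self, img_feat: torch.Tensor, au_feat: torch.Tensor) -> torch.Tensor:
        au = self.au_proj(au_feat)                                # project AUs into image space
        g = self.gate(torch.cat([img_feat, au], dim=-1))          # per-dimension fusion weights
        return g * img_feat + (1.0 - g) * au                      # dynamically weighted fusion


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 256)         # dummy patch tokens from an image encoder
    aus = torch.randn(2, 17)                  # dummy AU activation vector
    kept = ClassTokenMasking(256)(tokens)     # drop low-expressiveness patches
    img_feat = kept.mean(dim=1)               # simple pooled image feature
    fused = AUDynamicFusion(256, 17)(img_feat, aus)
    print(kept.shape, fused.shape)            # torch.Size([2, 147, 256]) torch.Size([2, 256])
```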
Pages: 18