Dual-Branch Residual Disentangled Adversarial Learning Network for Facial Expression Recognition

Cited by: 0
Authors
Chen, Puhua [1 ]
Wang, Zhe [1 ]
Mao, Shasha [1 ]
Hui, Xinyue [1 ]
Ning, Huyan [2 ]
Affiliations
[1] Xidian Univ, Int Res Ctr Intelligent Percept & Computat, Sch Artificial Intelligence, Key Lab Intelligent Percept & Image Understanding, Xian 710071, Peoples R China
[2] Shanghai AI Lab Sense Time Technol, Shanghai 200233, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Face recognition; Training; Facial features; Adversarial machine learning; Loss measurement; Testing; Facial expression recognition; feature disentanglement; adversarial training;
DOI
10.1109/LSP.2024.3390987
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification
0808; 0809;
Abstract
Facial expression recognition is important for human-computer interaction, and a large number of researchers have focused on this topic and produced many valuable results. However, many problems remain to be solved for practical applications, such as the impact of identity and appearance differences and pose variation. In this work, a dual-branch residual disentangled adversarial learning network is proposed to learn more accurate expression features by disentangling non-expression features from basic features through a novel combined loss function. In the proposed method, a dual-branch network structure is designed: one branch uses a D-Net module to extract non-expression features, and the other branch obtains expression features simply by subtracting them from the basic features. Based on this network structure, a novel loss function, consisting of an expression recognition loss, an adversarial loss, and a cosine similarity loss, is constructed to guide the two branches to learn different types of features. The main highlight of this work is that the proposed method achieves the disentanglement of expression and non-expression features with only a low-complexity network and expression datasets, without any auxiliary data. Finally, extensive experimental results on multiple expression datasets confirm that the proposed method obtains better expression recognition results than other state-of-the-art methods.
Pages: 1840 - 1844
Number of pages: 5
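To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of the dual-branch residual disentangling idea: one branch (a D-Net, here assumed to be a two-layer MLP) predicts the non-expression component of a backbone feature, the other branch recovers the expression feature by subtraction, and a combined loss adds an expression recognition term, an adversarial term on the non-expression branch (a uniform-prediction variant is assumed here; the paper's exact adversarial formulation may differ), and a cosine similarity term that decorrelates the two branches. All module names, dimensions, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dual-branch residual disentangling head (assumed design,
# not the paper's exact implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBranchDisentangleHead(nn.Module):
    def __init__(self, feat_dim=512, num_classes=7):
        super().__init__()
        # D-Net branch: predicts the non-expression component of the basic feature.
        self.d_net = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )
        # Classifier applied to the residual (expression) feature.
        self.expr_classifier = nn.Linear(feat_dim, num_classes)
        # Auxiliary classifier used adversarially on the non-expression feature.
        self.adv_classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, basic_feat):
        non_expr = self.d_net(basic_feat)   # branch 1: non-expression features
        expr = basic_feat - non_expr        # branch 2: residual subtraction
        return expr, non_expr


def combined_loss(head, basic_feat, labels, lambda_adv=0.1, lambda_cos=0.1):
    expr, non_expr = head(basic_feat)

    # 1) Expression recognition loss on the residual (expression) branch.
    ce = F.cross_entropy(head.expr_classifier(expr), labels)

    # 2) Adversarial loss: push the non-expression branch to be uninformative
    #    about expression (a uniform-prediction target is one common choice).
    adv_logits = head.adv_classifier(non_expr)
    uniform = torch.full_like(adv_logits, 1.0 / adv_logits.size(1))
    adv = F.kl_div(F.log_softmax(adv_logits, dim=1), uniform, reduction="batchmean")

    # 3) Cosine similarity loss: decorrelate the two branches' features.
    cos = F.cosine_similarity(expr, non_expr, dim=1).abs().mean()

    return ce + lambda_adv * adv + lambda_cos * cos


if __name__ == "__main__":
    head = DualBranchDisentangleHead()
    feats = torch.randn(8, 512)             # stand-in for backbone features
    labels = torch.randint(0, 7, (8,))
    loss = combined_loss(head, feats, labels)
    loss.backward()
    print(loss.item())
```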
Related Papers
50 records in total
  • [1] Facial Expression Recognition With Two-Branch Disentangled Generative Adversarial Network
    Xie, Siyue
    Hu, Haifeng
    Chen, Yizhen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (06) : 2359 - 2371
  • [2] Facial expression recognition via a jointly-learned dual-branch network
    Bordjiba, Yamina
    Merouani, Hayet Farida
    Azizi, Nabiha
    INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, 2022, 13 (06) : 447 - 456
  • [3] Disentangled Feature Based Adversarial Learning for Facial Expression Recognition
    Bai, Mengchao
    Xie, Weicheng
    Shen, Linlin
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 31 - 35
  • [4] LKRNet: a dual-branch network based on local key regions for facial expression recognition
    Zhu, Dandan
    Tian, Gangyi
    Zhu, Liping
    Wang, Wenjie
    Wang, Bingyao
    Li, Chengyang
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (02) : 263 - 270
  • [5] Dual-Branch Multimodal Fusion Network for Driver Facial Emotion Recognition
    Wang, Le
    Chang, Yuchen
    Wang, Kaiping
    APPLIED SCIENCES (SWITZERLAND), 2024, 14 (20)
  • [6] Self-cure Dual-branch Network for Facial Expression Recognition Based on Visual Sensors
    School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159, China
    SENSORS AND MATERIALS, 11 (4631 - 4649)
  • [7] D3Net: Dual-Branch Disturbance Disentangling Network for Facial Expression Recognition
    Mo, Rongyun
    Yan, Yan
    Xue, Jing-Hao
    Chen, Si
    Wang, Hanzi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 779 - 787
  • [8] A dual-branch residual network for inhomogeneous dehazing
    Xu, Yifei
    Li, Jingjing
    Wei, Pingping
    Wang, Aichen
    Rao, Yuan
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 102
  • [9] Dual-ATME: Dual-Branch Attention Network for Micro-Expression Recognition
    Zhou, Haoliang
    Huang, Shucheng
    Li, Jingting
    Wang, Su-Jing
    ENTROPY, 2023, 25 (03)