Facial Action Unit Recognition and Intensity Estimation Enhanced Through Label Dependencies

Cited by: 14
Authors
Wang, Shangfei [1 ,2 ]
Hao, Longfei [3 ]
Ji, Qiang [4 ]
Affiliations
[1] Univ Sci & Technol China, Key Lab Comp & Commun Software Anhui Prov, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[2] Univ Sci & Technol China, Sch Data Sci, Hefei 230027, Anhui, Peoples R China
[3] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[4] Rensselaer Polytech Inst, Dept Elect Comp & Syst Engn, Troy, NY 12180 USA
Funding
US National Science Foundation
Keywords
AU recognition; AU intensity estimation; latent regression Bayesian network; label dependencies; EXPRESSION; EMOTION;
DOI
10.1109/TIP.2018.2878339
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The inherent dependencies among facial action units (AUs), which arise from the underlying anatomic mechanisms, are essential for properly recognizing AUs and estimating their intensity levels, but they have not been exploited to their full potential. We propose novel methods for AU recognition and intensity estimation via hybrid Bayesian networks (BNs). The upper two layers are latent regression BNs (LRBNs), and the lower layers are BNs. The visible nodes of the LRBN layers represent ground-truth AU occurrences or AU intensities. Through the directed connections from the latent layer to the visible layer, an LRBN can successfully capture the relationships among multiple AUs or AU intensities. The lower layers consist of two-node BNs for AU recognition and three-node BNs for AU intensity estimation; these bottom layers combine measurements extracted from facial images with the learned AU dependencies for both tasks. Efficient learning algorithms for the hybrid BNs are proposed for AU recognition as well as intensity estimation. Furthermore, because AU relationships are closely tied to facial expressions, the proposed hybrid BN models are extended to facial expression-assisted AU recognition and intensity estimation. We evaluate our methods on three benchmark databases for AU recognition and two benchmark databases for intensity estimation. The results demonstrate that the proposed approaches faithfully model the complex and global inherent AU dependencies, and that expression labels available only during training can boost the estimation of AU dependencies for both AU recognition and intensity estimation.
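The core idea of the abstract, that a dependency model over AU labels can be combined with per-AU image measurements so the labels disambiguate each other, can be illustrated with a toy sketch. This is NOT the paper's LRBN; it is a minimal two-AU Bayesian network for the well-known co-occurrence of AU6 (cheek raiser) and AU12 (lip-corner puller) in smiles, with all probabilities invented for illustration:

```python
from itertools import product

# Hypothetical prior and conditional probability tables encoding the
# AU6 -> AU12 dependency (all numbers invented for illustration).
p_au6 = {0: 0.7, 1: 0.3}                  # P(AU6)
p_au12_given_au6 = {0: {0: 0.9, 1: 0.1},  # P(AU12 | AU6=0)
                    1: {0: 0.2, 1: 0.8}}  # P(AU12 | AU6=1)

def measurement_likelihood(score, au_on):
    """P(detector score | AU state): higher scores are more likely
    when the AU is active. Mimics the bottom-layer measurement BNs."""
    return score if au_on else 1.0 - score

def joint_map(score_au6, score_au12):
    """Brute-force MAP over all AU configurations, combining the
    label-dependency model with both image measurements."""
    best, best_p = None, -1.0
    for a6, a12 in product((0, 1), repeat=2):
        p = (p_au6[a6]
             * p_au12_given_au6[a6][a12]
             * measurement_likelihood(score_au6, a6)
             * measurement_likelihood(score_au12, a12))
        if p > best_p:
            best, best_p = (a6, a12), p
    return best

# A weak AU6 detection (0.55) is pulled up by a confident AU12 (0.9),
# because the dependency model knows AU12 rarely fires without AU6.
print(joint_map(0.55, 0.9))  # → (1, 1)
```

The same measurement scores fed to independent per-AU thresholds would leave AU6 ambiguous; the joint model resolves it through the dependency, which is the effect the paper scales up to many AUs and intensity levels.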
Pages: 1428-1442
Page count: 15
Related papers
50 records in total
  • [31] Facial Action Unit Intensity Estimation Using Rotation Invariant Features and Regression Analysis
    Bingoel, Deniz
    Celik, Turgay
    Omlin, Christian W.
    Vadapalli, Hima B.
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 1381 - 1385
  • [32] Multi-label learning with missing labels for image annotation and facial action unit recognition
    Wu, Baoyuan
    Lyu, Siwei
    Hu, Bao-Gang
    Ji, Qiang
    [J]. PATTERN RECOGNITION, 2015, 48 (07) : 2279 - 2289
  • [33] Joint Patch and Multi-label Learning for Facial Action Unit and Holistic Expression Recognition
    Zhao, Kaili
    Chu, Wen-Sheng
    De la Torre, Fernando
    Cohn, Jeffrey F.
    Zhang, Honggang
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (08) : 3931 - 3946
  • [34] Action Unit Assisted Facial Expression Recognition
    Wang, Fangjun
    Shen, Liping
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: IMAGE PROCESSING, PT III, 2019, 11729 : 385 - 396
  • [35] Multilayer Architectures for Facial Action Unit Recognition
    Wu, Tingfan
    Butko, Nicholas J.
    Ruvolo, Paul
    Whitehill, Jacob
    Bartlett, Marian S.
    Movellan, Javier R.
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2012, 42 (04): : 1027 - 1038
  • [36] Joint facial action unit recognition and self-supervised optical flow estimation
    Shao, Zhiwen
    Zhou, Yong
    Li, Feiran
    Zhu, Hancheng
    Liu, Bing
    [J]. PATTERN RECOGNITION LETTERS, 2024, 181 : 70 - 76
  • [37] Facial expression intensity estimation using label-distribution-learning-enhanced ordinal regression
    Xu, Ruyi
    Wang, Zhun
    Chen, Jingying
    Zhou, Longpu
    [J]. MULTIMEDIA SYSTEMS, 2024, 30 (01)
  • [39] Action unit analysis enhanced facial expression recognition by deep neural network evolution
    Zhi, Ruicong
    Zhou, Caixia
    Li, Tingting
    Liu, Shuai
    Jin, Yi
    [J]. NEUROCOMPUTING, 2021, 425 : 135 - 148
  • [40] Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution
    Fan, Yingruo
    Lam, Jacqueline C. K.
    Li, Victor O. K.
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 12701 - 12708