Facial Action Unit Recognition and Intensity Estimation Enhanced Through Label Dependencies

Cited by: 14
Authors
Wang, Shangfei [1 ,2 ]
Hao, Longfei [3 ]
Ji, Qiang [4 ]
Affiliations
[1] Univ Sci & Technol China, Key Lab Comp & Commun Software Anhui Prov, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[2] Univ Sci & Technol China, Sch Data Sci, Hefei 230027, Anhui, Peoples R China
[3] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[4] Rensselaer Polytech Inst, Dept Elect Comp & Syst Engn, Troy, NY 12180 USA
Funding
US National Science Foundation;
Keywords
AU recognition; AU intensity estimation; latent regression Bayesian network; label dependencies; EXPRESSION; EMOTION;
DOI
10.1109/TIP.2018.2878339
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The inherent dependencies among facial action units (AUs) caused by the underlying anatomic mechanism are essential for the proper recognition of AUs and the estimation of intensity levels, but they have not been exploited to their full potential. We propose novel methods to recognize AUs and estimate their intensities via hybrid Bayesian networks (BNs). The upper two layers are latent regression BNs (LRBNs), and the lower layers are BNs. The visible nodes of the LRBN layers are the representations of ground-truth AU occurrences or AU intensities. Through the directed connections from the latent layer to the visible layer, an LRBN can successfully represent relationships between multiple AUs or AU intensities. The lower layers consist of BNs with two nodes for AU recognition, and BNs with three nodes for AU intensity estimation. The bottom layers combine measurements from facial images with AU dependencies for both intensity estimation and AU recognition. Efficient learning algorithms for the hybrid BNs are proposed for AU recognition as well as intensity estimation. Furthermore, the proposed hybrid BN models are extended for facial expression-assisted AU recognition and intensity estimation, as AU relationships are closely related to facial expressions. We test our methods on three benchmark databases for AU recognition and two benchmark databases for intensity estimation. The results demonstrate that the proposed approaches faithfully model the complex and global inherent AU dependencies, and that expression labels available only during training can boost the estimation of AU dependencies for both AU recognition and intensity estimation.
Pages: 1428-1442 (15 pages)
Related Papers
50 records
  • [1] Facial Action Unit Recognition Augmented by Their Dependencies
    Hao, Longfei
    Wang, Shangfei
    Peng, Guozhu
    Ji, Qiang
    [J]. PROCEEDINGS 2018 13TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2018), 2018, : 187 - 194
  • [2] Deep Facial Action Unit Recognition and Intensity Estimation from Partially Labelled Data
    Wang, Shangfei
    Pan, Bowen
    Wu, Shan
    Ji, Qiang
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2021, 12 (04) : 1018 - 1030
  • [3] Generalizing to Unseen Head Poses in Facial Expression Recognition and Action Unit Intensity Estimation
    Werner, Philipp
    Saxen, Frerk
    Al-Hamadi, Ayoub
    Yu, Hui
    [J]. 2019 14TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2019), 2019, : 130 - 137
  • [4] Multiple Facial Action Unit Recognition Enhanced by Facial Expressions
    Yang, Jiajia
    Wu, Shan
    Wang, Shangfei
    Ji, Qiang
    [J]. 2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 4089 - 4094
  • [5] Feature and label relation modeling for multiple-facial action unit classification and intensity estimation
    Wang, Shangfei
    Yang, Jiajia
    Gao, Zhen
    Ji, Qiang
    [J]. PATTERN RECOGNITION, 2017, 65 : 71 - 81
  • [6] Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks
    He, Jun
    Yu, Xiaocui
    Sun, Bo
    Yu, Lejun
    [J]. JOURNAL ON MULTIMODAL USER INTERFACES, 2021, 15 (04) : 429 - 440
  • [8] Deep Structured Learning for Facial Action Unit Intensity Estimation
    Walecki, Robert
    Rudovic, Ognjen
    Pavlovic, Vladimir
    Schuller, Bjoern
    Pantic, Maja
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 5709 - 5718
  • [9] Context-Aware Feature and Label Fusion for Facial Action Unit Intensity Estimation With Partially Labeled Data
    Zhang, Yong
    Jiang, Haiyong
    Wu, Baoyuan
    Fan, Yanbo
    Ji, Qiang
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 733 - 742
  • [10] Dynamic Probabilistic Graph Convolution for Facial Action Unit Intensity Estimation
    Song, Tengfei
    Cui, Zijun
    Wang, Yuru
    Zheng, Wenming
    Ji, Qiang
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 4843 - 4852