View-Independent Facial Action Unit Detection

Cited by: 20
Authors
Tang, Chuangao [1 ]
Zheng, Wenming [1 ]
Yan, Jingwei [1 ]
Li, Qiang [1 ]
Li, Yang [2 ]
Zhang, Tong [2 ]
Cui, Zhen [1 ]
Affiliations
[1] Southeast Univ, Res Ctr Learning Sci, Minist Educ, Key Lab Child Dev & Learning Sci, Nanjing, Jiangsu, Peoples R China
[2] Southeast Univ, Sch Informat Sci & Engn, Nanjing, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
DOI
10.1109/FG.2017.113
Chinese Library Classification (CLC)
TP18 [theory of artificial intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Automatic facial Action Unit (AU) detection has drawn increasing attention in recent years because of its importance for facial expression analysis. Frontal-view AU detection has been evaluated extensively, but cross-pose AU detection remains a less explored problem owing to the scarcity of suitable datasets. The Facial Expression Recognition and Analysis challenge (FERA 2017) recently released a large-scale, video-based AU detection dataset spanning multiple facial poses. To address this task, we develop a simple and efficient deep-learning-based system that detects AU occurrence under nine facial views. The system first crops facial regions using morphology operations, namely binary segmentation, connected-component labeling, and region-boundary extraction; then, for each AU, it trains a dedicated expert network by fine-tuning the VGG-Face network on cross-view facial images, yielding more discriminative features for the subsequent binary classification. In the AU detection sub-challenge, the proposed method achieves a mean accuracy of 77.8% (vs. the 56.1% baseline) and raises the F1 score to 57.4% (vs. the 45.2% baseline).
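The two-stage pipeline outlined in the abstract (morphology-based face cropping, then one fine-tuned expert network per AU) can be illustrated with a minimal sketch. The code below is an assumption-laden rendering, not the authors' implementation: it uses OpenCV for the binary segmentation, connected-component labeling, and bounding-box crop, and a torchvision VGG-16 backbone as a stand-in for VGG-Face; all thresholds, file names, and helper functions are hypothetical.

```python
# Illustrative sketch of the pipeline described in the abstract.
# Assumptions: OpenCV for the morphology-based crop; torchvision VGG-16
# stands in for VGG-Face; thresholds and layer choices are placeholders.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models


def crop_face(gray: np.ndarray) -> np.ndarray:
    """Crop the face region via binary segmentation, connected-component
    labeling, and region-boundary (bounding-box) extraction."""
    # Binary segmentation (Otsu threshold chosen for illustration only).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected-component labeling; keep the largest foreground component.
    _, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # label 0 is background
    x, y = stats[largest, cv2.CC_STAT_LEFT], stats[largest, cv2.CC_STAT_TOP]
    w, h = stats[largest, cv2.CC_STAT_WIDTH], stats[largest, cv2.CC_STAT_HEIGHT]
    return gray[y:y + h, x:x + w]


def make_au_expert(backbone: nn.Module) -> nn.Module:
    """Build one per-AU 'expert': a VGG-style backbone whose last layer is
    replaced by a two-class head (AU present / absent) for fine-tuning."""
    in_features = backbone.classifier[-1].in_features
    backbone.classifier[-1] = nn.Linear(in_features, 2)  # binary AU occurrence
    return backbone


if __name__ == "__main__":
    img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
    face = cv2.resize(crop_face(img), (224, 224))
    x = torch.from_numpy(face).float().div(255.0).repeat(3, 1, 1).unsqueeze(0)

    # One expert per AU type, each to be fine-tuned on cross-view facial images.
    au_experts = {au: make_au_expert(models.vgg16(weights=None)) for au in ("AU1", "AU12")}
    with torch.no_grad():
        scores = {au: net(x).softmax(dim=1)[0, 1].item() for au, net in au_experts.items()}
    print(scores)  # per-AU occurrence probabilities for this frame
```

In this reading, detection across the nine views comes from fine-tuning each expert on images pooled over all poses rather than from any explicit pose normalization; the per-AU scores would then be thresholded to produce the binary occurrence labels scored in the sub-challenge.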
Pages: 878 - 882
Number of pages: 5
Related papers
50 records in total
  • [1] View-independent action recognition: a hybrid approach
    Hashemi, Seyed Mohammad
    Rahmati, Mohammad
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75 (12) : 6755 - 6775
  • [2] Homographic active shape models for view-independent facial analysis
    Sukno, FM
    Guerrero, JJ
    Frangi, AF
    [J]. BIOMETRIC TECHNOLOGY FOR HUMAN IDENTIFICATION II, 2005, 5779 : 152 - 163
  • [3] Hierarchical Hough Forests for View-Independent Action Recognition
    Hilsenbeck, Barbara
    Muench, David
    Kieritz, Hilke
    Huebner, Wolfgang
    Arens, Michael
    [J]. 2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 1911 - 1916
  • [4] View-independent human action recognition by action hypersphere in nonlinear subspace
    Zhang, Jian
    Zhuang, Yueting
    [J]. ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2007, 2007, 4810 : 108 - 117
  • [5] View-Independent Human Action Recognition based on a Stereo Camera
    Roh, Myung-Cheol
    Shin, Ho-Keun
    Lee, Seong-Whan
    [J]. PROCEEDINGS OF THE 2009 CHINESE CONFERENCE ON PATTERN RECOGNITION AND THE FIRST CJK JOINT WORKSHOP ON PATTERN RECOGNITION, VOLS 1 AND 2, 2009, : 832 - 836
  • [6] Multi-view dynamic facial action unit detection
    Romero, Andres
    Leon, Juan
    Arbelaez, Pablo
    [J]. IMAGE AND VISION COMPUTING, 2022, 122
  • [7] View-independent object detection using shared local features
    Ko, ByoungChul
    Jung, Ji-Hun
    Nam, Jae-Yeal
    [J]. JOURNAL OF VISUAL LANGUAGES AND COMPUTING, 2015, 28 : 56 - 70
  • [8] View-Independent Behavior Analysis
    Huang, Kaiqi
    Tao, Dacheng
    Yuan, Yuan
    Li, Xuelong
    Tan, Tieniu
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2009, 39 (04): : 1028 - 1035
  • [9] View-Independent Human Action Recognition Based on Multi-View Action Images and Discriminant Learning
    Iosifidis, Alexandros
    Tefas, Anastasios
    Pitas, Ioannis
    [J]. 2013 IEEE 11TH IVMSP WORKSHOP: 3D IMAGE/VIDEO TECHNOLOGIES AND APPLICATIONS (IVMSP 2013), 2013,