Continuous AU Intensity Estimation using Localized, Sparse Facial Feature Space

Cited by: 0
Authors
Jeni, Laszlo A. [1 ]
Girard, Jeffrey M. [2 ]
Cohn, Jeffrey F. [2 ]
De La Torre, Fernando [1 ]
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[2] Univ Pittsburgh, Dept Psychol, Pittsburgh, PA 15260 USA
Keywords
RECOGNITION; PAIN;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Most work in automatic facial expression analysis seeks to detect discrete facial actions. Yet the meaning and function of facial actions often depend in part on their intensity. We propose a part-based, sparse representation for automated measurement of continuous variation in action unit (AU) intensity. We evaluated its effectiveness on two publicly available databases: CK+ and the soon-to-be-released Binghamton high-resolution spontaneous 3D dyadic facial expression database. The former consists of posed facial expressions with ordinal-level intensity (absent, low, and high). The latter consists of spontaneous facial expressions in response to diverse, well-validated emotion inductions, with six ordinal levels of AU intensity. In a preliminary test, we started from discrete emotion labels and ordinal-scale intensity annotation in the CK+ dataset. The algorithm achieved state-of-the-art performance. These preliminary results supported the utility of the part-based, sparse representation. Second, we applied the algorithm to the more demanding task of continuous AU intensity estimation in spontaneous facial behavior in the Binghamton database. Manual 6-point ordinal coding and continuous measurement were highly consistent. Visual analysis of the overlay of the algorithm's continuous measurement and the manual ordinal coding strongly supported the representational power of the proposed method to smoothly interpolate across the full range of AU intensity.
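The abstract gives no implementation detail, so the following is a minimal illustrative sketch rather than the authors' implementation: a generic part-based pipeline that sparse-codes landmark-localized appearance patches and regresses continuous AU intensity from the sparse codes. The helper extract_patches, the patch size, the number of dictionary atoms, the lasso_lars coder, and the SVR settings are all assumptions made for illustration.

```python
# Illustrative sketch only: a generic part-based, sparse-coding pipeline for
# continuous AU intensity estimation. It assumes facial landmarks are already
# tracked and that grayscale frames are available as 2-D numpy arrays.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVR
from scipy.stats import spearmanr


def extract_patches(frames, landmarks, part_indices, size=16):
    """Hypothetical helper: crop size x size patches around the landmarks of
    one facial part (e.g., the brow region for AU4) and concatenate them
    into a single feature vector per frame."""
    feats = []
    for frame, pts in zip(frames, landmarks):
        patches = []
        for i in part_indices:
            x, y = np.round(pts[i]).astype(int)
            patch = frame[y - size // 2:y + size // 2,
                          x - size // 2:x + size // 2]
            patches.append(patch.ravel().astype(np.float64))
        feats.append(np.concatenate(patches))
    return np.asarray(feats)


def fit_au_intensity_model(X_train, y_train, n_atoms=128, alpha=1.0):
    """Learn a sparse dictionary over the localized features, then regress
    continuous AU intensity from the resulting sparse codes."""
    dico = DictionaryLearning(n_components=n_atoms,
                              transform_algorithm='lasso_lars',
                              transform_alpha=alpha,
                              random_state=0)
    codes = dico.fit_transform(X_train)
    reg = SVR(kernel='rbf', C=1.0)
    reg.fit(codes, y_train)
    return dico, reg


def predict_au_intensity(dico, reg, X_test):
    """Sparse-code new frames with the learned dictionary and predict."""
    return reg.predict(dico.transform(X_test))


# Consistency between continuous predictions and manual 6-point ordinal codes
# (0-5) could be checked with a rank correlation, in the spirit of the
# comparison described in the abstract:
#   rho, _ = spearmanr(y_pred, y_ordinal)
```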
Pages: 7