Continuous AU Intensity Estimation using Localized, Sparse Facial Feature Space

Cited by: 0
Authors
Jeni, Laszlo A. [1 ]
Girard, Jeffrey M. [2 ]
Cohn, Jeffrey F. [2 ]
De La Torre, Fernando [1 ]
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[2] Univ Pittsburgh, Dept Psychol, Pittsburgh, PA 15260 USA
Keywords
RECOGNITION; PAIN;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most work in automatic facial expression analysis seeks to detect discrete facial actions. Yet the meaning and function of facial actions often depend in part on their intensity. We propose a part-based, sparse representation for automated measurement of continuous variation in AU intensity. We evaluated its effectiveness in two publicly available databases: CK+ and the soon-to-be-released Binghamton high-resolution spontaneous 3D dyadic facial expression database. The former consists of posed facial expressions with ordinal intensity labels (absent, low, and high). The latter consists of spontaneous facial expressions in response to diverse, well-validated emotion inductions, with six ordinal levels of AU intensity. In a preliminary test, we started from discrete emotion labels and ordinal-scale intensity annotation in the CK+ dataset, where the algorithm achieved state-of-the-art performance. These preliminary results supported the utility of the part-based, sparse representation. Second, we applied the algorithm to the more demanding task of continuous AU intensity estimation in spontaneous facial behavior in the Binghamton database. Manual 6-point ordinal coding and continuous measurement were highly consistent. Visual analysis of the overlay of the algorithm's continuous measurements on the manual ordinal coding strongly supported the representational power of the proposed method to smoothly interpolate across the full range of AU intensity.
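The record gives no implementation detail, but the abstract's general idea (sparse coding of localized facial features, then regression to a continuous AU intensity value) can be illustrated with a minimal sketch. The dictionary size, the SVR regressor, the 0-5 intensity scale, and the synthetic stand-in features below are assumptions for illustration only, not the authors' published pipeline.

```python
# Minimal sketch: sparse-code localized facial appearance features with a
# learned dictionary, then regress continuous AU intensity from the codes.
# All data here is synthetic; in practice the features would be local
# descriptors extracted around facial landmarks (e.g., mouth corners for AU12).
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in data: 200 training frames and 50 test frames, each described by a
# 128-dimensional localized appearance feature, with a continuous 0-5 AU label.
X_train = rng.normal(size=(200, 128))
y_train = rng.uniform(0, 5, size=200)
X_test = rng.normal(size=(50, 128))
y_test = rng.uniform(0, 5, size=50)

# Learn a 64-atom dictionary, encode each frame as a sparse combination of
# atoms, and map the sparse codes to a continuous intensity with an SVR.
coder = DictionaryLearning(n_components=64, alpha=1.0, max_iter=50,
                           random_state=0)
regressor = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model = make_pipeline(coder, regressor)

model.fit(X_train, y_train)
pred = model.predict(X_test)
print("R^2 on held-out frames:", r2_score(y_test, pred))
```

With real frame-level features and intensity labels, consistency with manual ordinal coding could be checked by correlating the predicted continuous trace against the 6-point codes, as the abstract describes.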
Pages: 7