Toward Children's Empathy Ability Analysis: Joint Facial Expression Recognition and Intensity Estimation Using Label Distribution Learning

Cited: 22
Authors
Chen, Jingying [1 ,2 ]
Guo, Chen [1 ,2 ]
Xu, Ruyi [1 ,2 ]
Zhang, Kun [1 ,2 ]
Yang, Zongkai [1 ,2 ]
Liu, Honghai [3 ,4 ]
Affiliations
[1] Cent China Normal Univ, Natl Engn Lab Educ Big Data, Wuhan 430079, Peoples R China
[2] Cent China Normal Univ, Natl Engn Res Ctr E Learning, Wuhan 430079, Peoples R China
[3] Harbin Inst Technol Shenzhen, State Key Lab Robot & Syst, Shenzhen 518055, Peoples R China
[4] Univ Portsmouth, Portsmouth PO1 2UP, Hants, England
Keywords
Face recognition; Estimation; Task analysis; Interpolation; Annotations; Pediatrics; Informatics; Empathy ability analytics; expression intensity estimation; facial expression recognition; intensity label distribution; Siamese-like convolutional neural network (CNN); RESPONSES;
DOI
10.1109/TII.2021.3075989
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Empathy ability is one of the most important social communication skills in early childhood development. Facial expression analysis (FEA) is an effective way to analyze children's empathy ability because it reveals children's emotional states. Previous works mainly focus on recognizing facial expression categories but fail to estimate expression intensity, which is more important for fine-grained emotion analysis. To this end, this article first proposes to analyze children's empathy ability with both the categories and the intensities of facial expressions. A novel FEA method based on intensity label distribution learning is presented, which recognizes expression categories and estimates their intensity levels in an end-to-end framework. First, to address the lack of reliable annotations for expression intensity, an intensity label distribution is generated for each frame in the expression sequence using linear interpolation and a Gaussian function. Then, the extended intensity label distribution is presented to automatically encode the expression intensity in a multidimensional expression space; it integrates expression recognition and intensity estimation into a unified framework and boosts recognition performance by suppressing the appearance variations caused by intensity while emphasizing the variations among weak expressions. Finally, a Siamese-like convolutional neural network is presented to learn the expression model from a pair of frames, an expressive frame and its corresponding neutral frame, using the extended intensity label distribution as the supervision, thus effectively eliminating the influence of expression-unrelated information on FEA.
Extensive experiments validate that the proposed method is promising for analyzing the differences in empathy ability between typically developing children and children with autism spectrum disorder.
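The two label-construction steps sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the function names, the number of intensity levels (6), the Gaussian width `sigma`, and the all-zero encoding of non-annotated categories are assumptions introduced here for illustration.

```python
import numpy as np

def intensity_label_distribution(frame_idx, onset_idx, apex_idx,
                                 num_levels=6, sigma=0.8):
    """Per-frame intensity label distribution (illustrative sketch).

    The frame's continuous intensity is linearly interpolated between
    the neutral onset (level 0) and the apex (level num_levels - 1),
    then spread over the discrete levels with a Gaussian centered at
    that interpolated value and normalized to sum to 1.
    """
    t = (frame_idx - onset_idx) / (apex_idx - onset_idx)
    mu = t * (num_levels - 1)
    levels = np.arange(num_levels)
    d = np.exp(-0.5 * ((levels - mu) / sigma) ** 2)
    return d / d.sum()

def extended_intensity_distribution(category, dist, num_categories=6):
    """Extended intensity label distribution (illustrative sketch).

    Places the frame's intensity distribution in the row of its
    annotated expression category, leaving the other categories at
    zero, so one flat vector jointly encodes category and intensity.
    """
    ext = np.zeros((num_categories, dist.size))
    ext[category] = dist
    return ext.ravel()

# A frame halfway between onset (index 0) and apex (index 10) of a
# sequence annotated with category 2: its intensity mass centers on
# the middle intensity levels of that category's row.
dist = intensity_label_distribution(5, 0, 10)
ext = extended_intensity_distribution(2, dist)
```

Such an extended distribution could then supervise a Siamese-like network fed with the expressive frame and its neutral counterpart, e.g. via a divergence loss between the predicted and target distributions.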
Pages: 16 - 25
Page count: 10