Recognition of facial actions and their temporal segments based on duration models

Cited by: 0
Authors
Isabel Gonzalez
Francesco Cartella
Valentin Enescu
Hichem Sahli
Affiliations
[1] Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics, VUB
[2] Interuniversity Microelectronics Center (IMEC), NPU Joint AVSP Lab
Keywords
Facial action units (AUs); Hidden semi-Markov models (HSMMs); Variable duration hidden Markov model (VDHMM)
Abstract
Being able to automatically analyze fine-grained changes in facial expression in terms of the action units (AUs) of the Facial Action Coding System (FACS), and their temporal segments (i.e., sequences of the temporal phases neutral, onset, apex, and offset), in face videos would greatly benefit facial expression recognition systems. Previous works considered combining, per AU, a discriminative frame-based Support Vector Machine (SVM) with a generative dynamic Hidden Markov Model (HMM) to detect the presence of the AU in question and its temporal segments in an input image sequence. The major drawback of HMMs is that they do not model time-dependent dynamics such as those of AUs well, especially when dealing with spontaneous expressions. To alleviate this problem, in this paper we exploit efficient duration modeling of the temporal behavior of AUs and propose a hidden semi-Markov model (HSMM) and a variable duration hidden Markov model (VDHMM) to recognize the dynamics of AUs. Such models allow the parameterization and inference of the AUs' state duration distributions. Within our system, geometric and appearance-based measurements, as well as their first derivatives, modeling both the dynamics and the appearance of AUs, are fed to pairwise SVM classifiers for frame-based classification. The outputs of these classifiers are then used as evidence by the HSMM or VDHMM to infer the temporal phases of an AU. A thorough investigation of duration modeling and its application to AU recognition is presented, through extensive comparison to state-of-the-art SVM-HMM approaches. Average recognition rates of 64.83% and 64.66% are achieved for the HSMM and VDHMM, respectively. Our framework has several benefits: (1) it models the durations of an AU's temporal phases; (2) it does not require any assumption about the underlying structure of AU events; and (3) compared to an HMM, the proposed HSMM and VDHMM duration models reduce the duration error of an AU's temporal phases and are especially better at recognizing the end of an AU's offset.
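The abstract describes a pipeline in which per-frame SVM outputs serve as evidence for a duration-modeling decoder (HSMM or VDHMM) that infers the temporal phases of an AU. The snippet below is a minimal, illustrative sketch (not the authors' implementation) of explicit-duration Viterbi decoding over the four temporal phases, assuming calibrated per-frame SVM posteriors, a cyclic neutral -> onset -> apex -> offset transition structure, and Poisson duration distributions; the state names, parameter values, and duration model are assumptions made for this example, whereas the paper estimates duration distributions from data.

```python
import numpy as np
from math import lgamma, log

# Four FACS temporal phases used as hidden states.
PHASES = ["neutral", "onset", "apex", "offset"]
N = len(PHASES)

# Assumed cyclic transitions: neutral -> onset -> apex -> offset -> neutral.
log_A = np.full((N, N), -np.inf)
for i in range(N):
    log_A[i, (i + 1) % N] = 0.0  # log(1.0)

# Assumed Poisson duration model per phase (mean duration in frames);
# the paper instead parameterizes/learns these distributions.
MEAN_DUR = [20.0, 8.0, 12.0, 10.0]
MAX_DUR = 40  # durations truncated at MAX_DUR frames


def log_poisson(d, lam):
    """log P(duration = d) under Poisson(lam), for d >= 1."""
    return d * log(lam) - lam - lgamma(d + 1)


LOG_DUR = np.array([[log_poisson(d, MEAN_DUR[j]) for d in range(1, MAX_DUR + 1)]
                    for j in range(N)])  # shape (N, MAX_DUR)


def hsmm_viterbi(log_emission):
    """Decode the most likely phase sequence.

    log_emission: (T, N) array of per-frame log SVM posteriors (the evidence).
    Returns a list of T phase labels.
    """
    T = log_emission.shape[0]
    cum = np.vstack([np.zeros(N), np.cumsum(log_emission, axis=0)])  # prefix sums over frames
    delta = np.full((T + 1, N), -np.inf)  # best score of a segmentation ending at frame t in state j
    back = {}                             # (t, j) -> (previous state, segment duration)
    delta[0, :] = -log(N)                 # uniform initial distribution
    for t in range(1, T + 1):
        for j in range(N):
            for d in range(1, min(MAX_DUR, t) + 1):
                emit = cum[t, j] - cum[t - d, j]      # sum of log-evidence over the segment
                prev = delta[t - d, :] + log_A[:, j]  # best predecessor state
                i = int(np.argmax(prev))
                score = prev[i] + LOG_DUR[j, d - 1] + emit
                if score > delta[t, j]:
                    delta[t, j], back[(t, j)] = score, (i, d)
    # Backtrace the optimal segmentation into per-frame labels.
    labels, t, j = [], T, int(np.argmax(delta[T, :]))
    while t > 0:
        i, d = back[(t, j)]
        labels[:0] = [PHASES[j]] * d
        t, j = t - d, i
    return labels


if __name__ == "__main__":
    # Random per-frame posteriors stand in for calibrated SVM outputs.
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(N), size=60)
    print(hsmm_viterbi(np.log(probs)))
```

In this sketch the duration term penalizes segments whose length is unlikely under the phase's duration distribution, which is the mechanism the abstract credits for reducing duration errors relative to a plain HMM.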
Pages: 10001-10024
Number of pages: 23