Cross-Dataset Facial Expression Recognition based on Arousal-Valence Emotion Model and Transfer Learning Method

Cited: 0
Authors
Yang, Yong [1 ]
Liu, Chuan [1 ]
Wu, Qingshan [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Computat Intelligence, Chongqing 400065, Peoples R China
Keywords
Facial expression recognition; Arousal-valence emotion dimensions; TPCA; Fusion; FACE;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Traditional facial expression recognition methods assume that facial expressions in the training and testing sets are collected under the same conditions, so that the two sets are independent and identically distributed. However, this assumption does not hold in many real applications; the resulting problem is referred to as cross-dataset facial expression recognition. In addition, traditional facial expression recognition methods are based on the basic emotion theory proposed by Ekman, which is too limited to express diverse and subtle emotions. To address cross-dataset facial expression recognition and enrich emotion expression, this paper adopts a transfer learning algorithm, TPCA, together with the arousal-valence emotion model. A new facial emotion recognition method based on TPCA and two-level fusion is proposed, which combines weight fusion and correlation fusion between arousal and valence to improve recognition performance under cross-dataset scenarios. Comparative experimental results show that the proposed method achieves better recognition results than traditional methods.
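The abstract describes a two-level fusion of arousal- and valence-based predictions: a weighted combination of the two score sets, followed by a correlation-based step. The paper's exact TPCA features and fusion rules are not given in this record, so the following is only a minimal illustrative sketch with hypothetical weights, toy per-class scores, and an assumed correlation rule (scaling fused scores by the Pearson agreement between the two dimensions).

```python
# Illustrative two-level fusion sketch. All numbers, the weight w, and the
# correlation rule are hypothetical; they are not taken from the paper.

def weight_fusion(arousal_scores, valence_scores, w=0.5):
    """Level 1: per-class weighted average of arousal- and valence-based scores."""
    return [w * a + (1 - w) * v for a, v in zip(arousal_scores, valence_scores)]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_fusion(fused, arousal_scores, valence_scores):
    """Level 2 (assumed rule): scale fused scores by how strongly the
    arousal and valence score profiles agree; alpha lies in [0, 1]."""
    alpha = (1 + pearson(arousal_scores, valence_scores)) / 2
    return [alpha * s for s in fused]

# Toy per-class scores for four expression classes
arousal = [0.1, 0.6, 0.2, 0.1]
valence = [0.2, 0.5, 0.2, 0.1]
fused = correlation_fusion(weight_fusion(arousal, valence), arousal, valence)
predicted = max(range(len(fused)), key=fused.__getitem__)
print(predicted)  # index of the class with the highest fused score
```

The scaling step does not change which class wins here; in the paper's setting the correlation fusion presumably interacts with the weight fusion in a more substantive way, which this sketch does not attempt to reproduce.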
Pages: 132-138
Page count: 7
Related Papers
50 items total
  • [1] Transfer subspace learning for cross-dataset facial expression recognition
    Yan, Haibin
    [J]. NEUROCOMPUTING, 2016, 208 : 165 - 173
  • [2] Cross-Dataset Facial Expression Recognition
    Yan, Haibin
    Ang, Marcelo H., Jr.
    Poo, Aun Neow
    [J]. 2011 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2011,
  • [3] A FRACTAL-BASED ALGORITHM OF EMOTION RECOGNITION FROM EEG USING AROUSAL-VALENCE MODEL
    Sourina, Olga
    Liu, Yisi
    [J]. BIOSIGNALS 2011, 2011, : 209 - 214
  • [4] Cross-dataset Deep Transfer Learning for Activity Recognition
    Gjoreski, Martin
    Kalabakov, Stefan
    Lustrek, Mitja
    Gams, Matjaz
    Gjoreski, Hristijan
    [J]. UBICOMP/ISWC'19 ADJUNCT: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2019 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2019, : 714 - 718
  • [5] Memory Integrity of CNNs for Cross-Dataset Facial Expression Recognition
    Tannugi, Dylan C.
    Britto, Alceu S., Jr.
    Koerich, Alessandro L.
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 3826 - 3831
  • [6] Cross-dataset emotion recognition from facial expressions through convolutional neural networks
    Dias, William
    Andalo, Fernanda
    Padilha, Rafael
    Bertocco, Gabriel
    Almeida, Waldir
    Costa, Paula
    Rocha, Anderson
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2022, 82
  • [7] Valence-Arousal Model based Emotion Recognition using EEG, peripheral physiological signals and Facial Expression
    Zhu, Qingyang
    Lu, Guanming
    Yan, Jingjie
    [J]. ICMLSC 2020: PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, 2020, : 81 - 85
  • [8] Toward Unbiased Facial Expression Recognition in the Wild via Cross-Dataset Adaptation
    Han, Byungok
    Yun, Woo-Han
    Yoo, Jang-Hee
    Kim, Won Hwa
    [J]. IEEE ACCESS, 2020, 8 : 159172 - 159181
  • [9] A MEMD Method of Human Emotion Recognition Based on Valence-Arousal Model
    He, Yue
    Ai, Qingsong
    Chen, Kun
    [J]. 2017 NINTH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC 2017), VOL 2, 2017, : 399 - 402
  • [10] Sample Self-Revised Network for Cross-Dataset Facial Expression Recognition
    Xu, Xiaolin
    Zheng, Wenming
    Zong, Yuan
    Lu, Cheng
    Jiang, Xingxun
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,