Emotion recognition using facial expression by fusing key points descriptor and texture features

Cited by: 17
Authors
Sharma, Mukta [1 ]
Jalal, Anand Singh [1 ]
Khan, Aamir [1 ]
Affiliations
[1] GLA Univ, Dept Comp Engn & Applicat, Mathura, India
Keywords
Emotion recognition; Facial expression; Human-computer-interaction; LOCAL BINARY PATTERNS; CLASSIFICATION; SEQUENCES; SCALE;
DOI
10.1007/s11042-018-7030-1
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Emotions are highly significant in human-to-human and human-to-computer communication and interaction. In this paper, an effective and novel approach is proposed for recognizing emotions from facial expressions by fusing duplex features. The approach broadly has three phases: Phase I, ROI extraction; Phase II, fusion of duplex features; and Phase III, classification. A novel eye-center detection algorithm is also presented, and its output is further used to locate and partition the facial components. The hybrid combination of duplex features also demonstrates the advantage of feature fusion over individual features. The approach classifies the five basic emotions: angry, happy, sad, disgust, and surprise. The method also addresses the high misclassification rate of emotions in higher age groups (>40) and successfully overcomes it. The approach and its evaluation are validated on four datasets: our own dataset of 2500 images of the five basic emotions (500 images per emotion), the CK+ dataset, the MMI dataset, and the JAFFE dataset. Experimental results show that the proposed work significantly improves the recognition rate (approx. 97%, 88%, 86%, and 93%, respectively) and reduces the misclassification rate (approx. 1.4%, 7.6%, 6.6%, and 2.7%), even for subjects in the higher age group.
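The abstract describes the pipeline only at a high level (ROI extraction, fusion of a key-points descriptor with texture features, then classification). The sketch below illustrates that general fuse-and-classify idea with off-the-shelf components: a Haar-cascade face detector for the ROI step, ORB key-point descriptors and LBP texture histograms as the two feature families, and a linear SVM as the classifier. These specific choices (Haar cascade, ORB, LBP via scikit-image, LinearSVC) are illustrative assumptions, not the paper's actual eye-center detector or feature descriptors.

# Illustrative sketch of a fuse-and-classify pipeline for facial emotion
# recognition. The concrete components (Haar cascade, ORB, LBP, linear SVM)
# are assumptions for demonstration, not the method proposed in the paper.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
ORB = cv2.ORB_create(nfeatures=200)

def face_roi(gray):
    """Return the largest detected face region, or the whole image."""
    faces = FACE_CASCADE.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return gray
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return gray[y:y + h, x:x + w]

def keypoint_feature(gray):
    """Pool ORB descriptors into a fixed-length vector (mean pooling)."""
    _, desc = ORB.detectAndCompute(gray, None)
    if desc is None:
        return np.zeros(32, dtype=np.float32)
    return desc.mean(axis=0).astype(np.float32)

def texture_feature(gray, points=8, radius=1):
    """Uniform LBP histogram as a simple texture descriptor."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                           density=True)
    return hist.astype(np.float32)

def fused_feature(image_bgr):
    """Phase I: ROI extraction; Phase II: concatenate the duplex features."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    roi = cv2.resize(face_roi(gray), (128, 128))
    return np.concatenate([keypoint_feature(roi), texture_feature(roi)])

def train(images, labels):
    """Phase III: classification with a linear SVM."""
    X = np.stack([fused_feature(img) for img in images])
    return LinearSVC(C=1.0).fit(X, labels)

Predictions would then come from clf.predict(np.stack([fused_feature(img) for img in test_images])). Note that mean-pooling binary ORB descriptors discards spatial structure; per-region histograms or a bag-of-visual-words encoding would be closer in spirit to the facial-component partitioning the abstract describes.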
Pages: 16195-16219
Number of pages: 25
Related Papers
50 records in total
  • [21] Facial Expression Recognition by Fusing Gabor and Local Binary Pattern Features
    Sun, Yuechuan
    Yu, Jun
    [J]. MULTIMEDIA MODELING, MMM 2017, PT II, 2017, 10133 : 209 - 220
  • [22] Facial expression recognition based on hybrid features and fusing discrete HMMs
    Zhan, Yongzhao
    Zhou, Gengtao
    [J]. VIRTUAL REALITY, PROCEEDINGS, 2007, 4563 : 408 - +
  • [23] Facial Memorability Prediction Fusing Geometric And Texture Features
    Dai, Ziyi
    Pan, Zehua
    Wu, Yewei
    Shen, Linlin
    Hou, Qibin
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, 2015, : 998 - 1002
  • [24] A Facial Expression Recognition Model Based on Texture and Shape Features
    Li, Aihua
    An, Lei
    Che, Zihui
    [J]. TRAITEMENT DU SIGNAL, 2020, 37 (04) : 627 - 632
  • [25] Facial expression recognition in the wild based on multimodal texture features
    Sun, Bo
    Li, Liandong
    Zhou, Guoyan
    He, Jun
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2016, 25 (06)
  • [26] DYNAMIC TEXTURE AND GEOMETRY FEATURES FOR FACIAL EXPRESSION RECOGNITION IN VIDEO
Chen, Junkai
    Chen, Zenghai
    Chi, Zheru
    Fu, Hong
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 4967 - 4971
  • [27] Dynamic Facial Expression Recognition Based on Geometric and Texture Features
    Li, Ming
    Wang, Zengfu
    [J]. NINTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2017), 2018, 10615
  • [28] Facial Expression Recognition Based on Local Texture and Shape Features
    Hu Min
    Teng Wendi
    Wang Xiaohua
    Xu Liangfeng
    Yang Juan
    [J]. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2018, 40 (06) : 1338 - 1344
  • [29] Continuous Emotion Recognition in Videos by Fusing Facial Expression, Head Pose and Eye Gaze
    Wu, Suowei
    Du, Zhengyin
    Li, Weixin
    Huang, Di
    Wang, Yunhong
    [J]. ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 40 - 48
  • [30] Pose-invariant descriptor for facial emotion recognition
    Shojaeilangari, Seyedehsamaneh
    Yau, Wei-Yun
    Teoh, Eam-Khwang
    [J]. Machine Vision and Applications, 2016, 27 : 1063 - 1070