Bimodal Emotion Recognition Based on Speech Signals and Facial Expression

Cited: 0
Authors
Tu, Binbin [1 ]
Yu, Fengqin [1 ]
Affiliation
[1] Jiangnan Univ, Sch Internet Things Engn, Wuxi 214122, Peoples R China
Keywords
speech emotion recognition; facial expression; local Gabor binary patterns; support vector machine; fusion; local binary patterns
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Voice signals and facial expression changes are synchronized under different emotions, so a recognition algorithm based on audio-visual feature fusion is proposed to identify emotional states more accurately. Prosodic features were extracted as the speech emotional features, and local Gabor binary patterns were adopted as the facial expression features. The two types of features were each modeled with an SVM to obtain the probabilities of anger, disgust, fear, happiness, sadness and surprise, and the probabilities were then fused to reach the final decision. Simulation results demonstrate that the average recognition rates of the single-modal classifiers based on speech signals and on facial expression reach 60% and 57% respectively, while the multimodal classifier fusing speech signals and facial expression achieves 72%.
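As described in the abstract, each modality is modeled with its own SVM that outputs per-class probabilities, and the two probability vectors are then fused to produce the final decision. The following is a minimal sketch of that decision-level fusion, assuming scikit-learn SVMs, a simple weighted-sum fusion rule, and randomly generated stand-in features; the names X_speech, X_face, y and the fusion weight are illustrative assumptions, not the paper's exact features or fusion rule.

```python
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def train_modal_svm(X, y):
    # One SVM per modality; probability=True enables Platt-scaled class probabilities.
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, y)
    return clf

def fuse_probabilities(p_speech, p_face, w_speech=0.5):
    # Weighted-sum fusion of the two modalities' class probabilities (assumed rule;
    # the paper may use a different combination of the probability outputs).
    p = w_speech * p_speech + (1.0 - w_speech) * p_face
    return p / p.sum(axis=1, keepdims=True)

# Hypothetical stand-in data; replace with real prosodic and LGBP features.
rng = np.random.default_rng(0)
y = rng.integers(0, len(EMOTIONS), size=120)       # emotion class labels 0..5
X_speech = rng.normal(size=(120, 48))              # prosodic feature vectors
X_face = rng.normal(size=(120, 512))               # LGBP histogram features

svm_speech = train_modal_svm(X_speech, y)
svm_face = train_modal_svm(X_face, y)

p_speech = svm_speech.predict_proba(X_speech)      # (n_samples, 6) probabilities
p_face = svm_face.predict_proba(X_face)
fused = fuse_probabilities(p_speech, p_face)
predicted = [EMOTIONS[i] for i in fused.argmax(axis=1)]   # final fused decisions
```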
Pages: 691-696
Page count: 6
Related Papers
50 records in total
  • [41] Bimodal Emotion Recognition
    Paleari, Marco
    Chellali, Ryad
    Huet, Benoit
    [J]. SOCIAL ROBOTICS, ICSR 2010, 2010, 6414 : 305 - 314
  • [42] Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis
    Kessous, Loic
    Castellano, Ginevra
    Caridakis, George
    [J]. JOURNAL ON MULTIMODAL USER INTERFACES, 2010, 3 (1-2) : 33 - 48
  • [43] Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis
    Loic Kessous
    Ginevra Castellano
    George Caridakis
    [J]. Journal on Multimodal User Interfaces, 2010, 3 : 33 - 48
  • [44] A Bimodal Emotion Recognition Approach through the Fusion of Electroencephalography and Facial Sequences
    Muhammad, Farah
    Hussain, Muhammad
    Aboalsamh, Hatim
    [J]. DIAGNOSTICS, 2023, 13 (05)
  • [45] Valence-Arousal Model based Emotion Recognition using EEG, peripheral physiological signals and Facial Expression
    Zhu, Qingyang
    Lu, Guanming
    Yan, Jingjie
    [J]. ICMLSC 2020: PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, 2020, : 81 - 85
  • [46] Bimodal emotion recognition based on adaptive weights
    Huang, Lixing
    Xin, Le
    Zhao, Liyue
    Tao, Jianhua
    [J]. 2008, Tsinghua University Press, Beijing, China, 48
  • [47] Speech emotion recognition based on emotion perception
    Gang Liu
    Shifang Cai
    Ce Wang
    [J]. EURASIP Journal on Audio, Speech, and Music Processing, 2023
  • [48] Speech emotion recognition based on emotion perception
    Liu, Gang
    Cai, Shifang
    Wang, Ce
    [J]. EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2023, 2023 (01)
  • [49] Multimodal emotion recognition for the fusion of speech and EEG signals
    Ma, Jianghe
    Sun, Ying
    Zhang, Xueying
    [J]. Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2019, 46 (01): : 143 - 150
  • [50] Emotion recognition and evaluation from Mandarin speech signals
    Pao, Tsanglong
    Chen, Yute
    Yeh, Junheng
    [J]. INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2008, 4 (07): : 1695 - 1709