Sparse Kernel Reduced-Rank Regression for Bimodal Emotion Recognition From Facial Expression and Speech

Cited: 71
Authors
Yan, Jingjie [1 ]
Zheng, Wenming [3 ]
Xu, Qinyu [1 ]
Lu, Guanming [1 ]
Li, Haibo [1 ,2 ]
Wang, Bei [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Prov Key Lab Image Proc & Image Commun, Coll Telecomm & Informat Engn, Nanjing 210003, Peoples R China
[2] Royal Inst Technol, Sch Comp Sci & Commun, S-11428 Stockholm, Sweden
[3] Southeast Univ, Key Lab Child Dev & Learning Sci, Minist Educ, Res Ctr Learning Sci, Nanjing 210096, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Bimodal emotion recognition; facial expression; feature fusion; sparse kernel reduced-rank regression (SKRRR); speech; PHENOTYPES; FRAMEWORK; FUSION; FACE;
DOI
10.1109/TMM.2016.2557721
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
A novel bimodal emotion recognition approach based on facial expression and speech, using the sparse kernel reduced-rank regression (SKRRR) fusion method, is proposed in this paper. In this method, we use the openSMILE feature extractor and the scale-invariant feature transform (SIFT) descriptor to extract effective features from the speech and facial expression modalities, respectively, and then propose the SKRRR fusion approach to fuse the emotion features of the two modalities. The proposed SKRRR method is a nonlinear extension of traditional reduced-rank regression (RRR), in which both the predictor and response feature vectors of RRR are kernelized by mapping them onto two high-dimensional feature spaces via two nonlinear mappings, respectively. To solve the SKRRR problem, we propose a sparse representation (SR)-based approach to find the optimal solution of the coefficient matrices of SKRRR, where the SR technique is introduced to fully account for the different contributions of the training samples to the derivation of the optimal solution. Finally, we conduct monomodal and bimodal emotion recognition experiments on the eNTERFACE '05 and AFEW 4.0 bimodal emotion databases; the results indicate that the proposed approach achieves the highest or a comparable bimodal emotion recognition rate among several state-of-the-art approaches.
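The abstract describes SKRRR as a kernelized, sparse extension of classical reduced-rank regression. The paper's actual SKRRR solver is not reproduced here; the following is only a minimal sketch of the linear RRR backbone it generalizes (rank-constrained least squares solved via an SVD projection), with hypothetical stand-in data for the two modalities:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank, reg=1e-6):
    """Classical RRR: minimise ||Y - X @ B||_F^2 s.t. rank(B) <= rank.
    (SKRRR additionally kernelises predictors/responses and adds a
    sparse-representation solver; neither is shown here.)"""
    # Ridge-regularised OLS gives the unconstrained (full-rank) estimate.
    B_ols = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    # Project the fitted values onto their top-`rank` singular directions.
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                 # response-space basis, shape (q, rank)
    return B_ols @ V @ V.T          # rank-constrained coefficient matrix

# Hypothetical stand-ins: X ~ speech features, Y ~ facial-expression features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
B_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 6))  # rank 2
Y = X @ B_true + 0.01 * rng.standard_normal((200, 6))
B_hat = reduced_rank_regression(X, Y, rank=2)
```

The rank constraint is what couples the two feature sets through a shared low-dimensional subspace; SKRRR performs the analogous coupling in kernel-induced feature spaces.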
Pages: 1319-1329
Page count: 11
Related Papers (50 records)
  • [1] Speech Emotion Recognition Based on Kernel Reduced-rank Regression
    Zheng, Wenming
    Zhou, Xiaoyan
    [J]. 2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012), 2012, : 1972 - 1976
  • [2] Multi-View Facial Expression Recognition Based on Group Sparse Reduced-Rank Regression
    Zheng, Wenming
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2014, 5 (01) : 71 - 85
  • [3] Sparse reduced-rank regression with covariance estimation
    Chen, Lisha
    Huang, Jianhua Z.
    [J]. STATISTICS AND COMPUTING, 2016, 26 (1-2) : 461 - 470
  • [4] Fast Algorithms for Sparse Reduced-Rank Regression
    Dubois, Benjamin
    Delmas, Jean-Francois
    Obozinski, Guillaume
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [6] Bimodal Emotion Recognition Based on Speech Signals and Facial Expression
    Tu, Binbin
    Yu, Fengqin
    [J]. FOUNDATIONS OF INTELLIGENT SYSTEMS (ISKE 2011), 2011, 122 : 691 - 696
  • [7] Bilinear Kernel Reduced Rank Regression for Facial Expression Synthesis
    Huang, Dong
    De la Torre, Fernando
    [J]. COMPUTER VISION-ECCV 2010, PT II, 2010, 6312 : 364 - 377
  • [8] Efficient Sparse Reduced-Rank Regression With Covariance Estimation
    Li, Fengpei
    Zhao, Ziping
    [J]. 2023 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP, SSP, 2023, : 46 - 50
  • [9] Robust Sparse Reduced-Rank Regression with Response Dependency
    Liu, Wenchen
    Liu, Guanfu
    Tang, Yincai
    [J]. SYMMETRY-BASEL, 2022, 14 (08):
  • [10] Sparse reduced-rank regression for integrating omics data
    Hilafu, Haileab
    Safo, Sandra E.
    Haine, Lillian
    [J]. BMC BIOINFORMATICS, 2020, 21 (01)