Feature Selection Based Transfer Subspace Learning for Speech Emotion Recognition

Cited by: 45
Authors
Song, Peng [1 ]
Zheng, Wenming [2 ]
Affiliations
[1] Yantai Univ, Sch Comp & Control Engn, Yantai 264005, Peoples R China
[2] Southeast Univ, Res Ctr Learning Sci, Minist Educ, Key Lab Child Dev & Learning Sci, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature selection; transfer learning; subspace learning; speech emotion recognition; FRAMEWORK;
DOI
10.1109/TAFFC.2018.2800046
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Cross-corpus speech emotion recognition has recently received considerable attention due to the widespread availability of emotional speech from diverse sources. It uses one corpus as the training data to recognize the emotions of another corpus, and generally involves two basic problems, i.e., feature matching and feature selection. Many previous works study these two problems independently, or focus only on solving the first one. In this paper, we propose a novel algorithm, called feature selection based transfer subspace learning (FSTSL), to address both problems. To deal with the first problem, a latent common subspace is learned by reducing the difference between corpora while preserving their important properties. Meanwhile, we impose the ℓ2,1-norm on the projection matrix to deal with the second problem. Besides, to guarantee that the subspace is robust and discriminative, the geometric information of the data is exploited simultaneously in the proposed FSTSL framework. Empirical experiments on cross-corpus speech emotion recognition tasks demonstrate that our proposed method achieves encouraging results in comparison with state-of-the-art algorithms.
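The abstract outlines three ingredients of FSTSL: a common subspace that narrows the gap between corpora, an ℓ2,1-norm penalty on the projection matrix that performs feature selection, and a graph-based term that preserves the local geometry of the data. The sketch below is only a rough, hypothetical illustration of how such an objective can be assembled and minimized; the specific loss terms (a simple mean-discrepancy gap, a k-NN graph Laplacian) and the plain gradient-descent solver are assumptions for illustration and do not reproduce the paper's exact formulation or optimization scheme.

```python
# Hypothetical sketch (not the paper's exact FSTSL formulation): learn a
# projection W that (i) reduces a simple mean-discrepancy gap between the
# projected source and target corpora, (ii) selects features through an
# l2,1-norm penalty on the rows of W, and (iii) preserves local geometry
# via a k-NN graph Laplacian. All names and hyper-parameters are assumed.
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a symmetric k-NN affinity graph."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    A = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(d2[i])[1:k + 1]   # skip the point itself
        A[i, neighbors] = 1.0
    A = np.maximum(A, A.T)                       # symmetrize
    return np.diag(A.sum(axis=1)) - A

def fit_projection(Xs, Xt, dim=10, alpha=1.0, beta=0.1, lr=1e-3, iters=500):
    """Gradient-descent sketch: W maps d-dim features into a dim-dim subspace."""
    d = Xs.shape[1]
    W = np.random.default_rng(0).normal(scale=0.01, size=(d, dim))
    X = np.vstack([Xs, Xt])
    L = knn_laplacian(X)
    mu = (Xs.mean(axis=0) - Xt.mean(axis=0))[:, None]         # (d, 1)
    for _ in range(iters):
        grad_gap = 2.0 * mu @ (mu.T @ W)                       # ||W^T mu||^2 term
        grad_geo = 2.0 * beta * X.T @ (L @ (X @ W))            # tr(W^T X^T L X W) term
        row_norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-8
        grad_l21 = alpha * W / row_norms                       # l2,1 (sub)gradient
        W -= lr * (grad_gap + grad_geo + grad_l21)
    return W
```

After fitting, rows of W whose ℓ2 norm shrinks toward zero correspond to features that are effectively discarded, which is the feature-selection behaviour the ℓ2,1 penalty is meant to induce.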
Pages: 373-382
Number of pages: 10
Related Papers
50 records in total
  • [1] Joint subspace learning and feature selection method for speech emotion recognition
    [J]. Journal of Tsinghua University, 2018, 58
  • [2] Diversity subspace generation based on feature selection for speech emotion recognition
    Qing Ye
    Yaxin Sun
    [J]. Multimedia Tools and Applications, 2024, 83 : 23533 - 23561
  • [3] Diversity subspace generation based on feature selection for speech emotion recognition
    Ye, Qing
    Sun, Yaxin
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (8) : 23533 - 23561
  • [4] Speech Emotion Recognition Based on Transfer Emotion-Discriminative Features Subspace Learning
    Zhang, Kexin
    Liu, Yunxiang
    [J]. IEEE ACCESS, 2023, 11 : 56336 - 56343
  • [5] Linked Source and Target Domain Subspace Feature Transfer Learning - Exemplified by Speech Emotion Recognition
    Deng, Jun
    Zhang, Zixing
    Schuller, Bjoern
    [J]. 2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2014, : 761 - 766
  • [6] Cross-Corpus Speech Emotion Recognition Based on Sparse Subspace Transfer Learning
    Zhao, Keke
    Song, Peng
    Zhang, Wenjing
    Zhang, Weijian
    Li, Shaokai
    Chen, Dongliang
    Zheng, Wenming
    [J]. BIOMETRIC RECOGNITION (CCBR 2021), 2021, 12878 : 466 - 473
  • [7] Sparse Autoencoder-based Feature Transfer Learning for Speech Emotion Recognition
    Deng, Jun
    Zhang, Zixing
    Marchi, Erik
    Schuller, Bjoern
    [J]. 2013 HUMAINE ASSOCIATION CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2013, : 511 - 516
  • [8] Cross-Corpus Speech Emotion Recognition Based on Joint Transfer Subspace Learning and Regression
    Zhang, Weijian
    Song, Peng
    Chen, Dongliang
    Sheng, Chao
    Zhang, Wenjing
    [J]. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2022, 14 (02) : 588 - 598
  • [9] Speech emotion recognition based on feature selection and extreme learning machine decision tree
    Liu, Zhen-Tao
    Wu, Min
    Cao, Wei-Hua
    Mao, Jun-Wei
    Xu, Jian-Ping
    Tan, Guan-Zheng
    [J]. NEUROCOMPUTING, 2018, 273 : 271 - 280
  • [10] Speech Emotion Recognition using Feature Selection with Adaptive Structure Learning
    Rayaluru, Akshay
    Bandela, Surekha Reddy
    Kumar, T. Kishore
    [J]. 2019 IEEE INTERNATIONAL SYMPOSIUM ON SMART ELECTRONIC SYSTEMS (ISES 2019), 2019, : 233 - 236