Deep temporal clustering features for speech emotion recognition

Cited by: 1
Authors
Lin, Wei-Cheng [1 ]
Busso, Carlos [1 ]
Affiliation
[1] Univ Texas Dallas, Dept Elect & Comp Engn, 800 W Campbell Rd, Richardson, TX 75080 USA
Funding
US National Science Foundation;
Keywords
Deep clustering; Temporal modeling; Semi-supervised learning; Speech emotion recognition; AUTOENCODERS; CORPUS;
DOI
10.1016/j.specom.2023.103027
Chinese Library Classification
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Deep clustering is a popular unsupervised technique for feature representation learning. We recently proposed the chunk-based DeepEmoCluster framework for speech emotion recognition (SER), which adopts the concept of deep clustering as a novel semi-supervised learning (SSL) framework and achieved improved recognition performance over conventional reconstruction-based approaches. However, the vanilla DeepEmoCluster lacks critical sentence-level temporal information that is useful for SER tasks. This study builds upon the DeepEmoCluster framework, creating a powerful SSL approach that leverages temporal information within a sentence. We propose two sentence-level temporal modeling alternatives, using either a temporal-net or the triplet loss function, resulting in a novel temporal-enhanced DeepEmoCluster framework that captures essential temporal information. The key contribution to achieving this goal is the proposed sentence-level uniform sampling strategy, which preserves the original temporal order of the data for the clustering process. For the temporal-net option, an extra network module (e.g., a gated recurrent unit) encodes temporal information across the data chunks. Alternatively, we can impose additional temporal constraints with the triplet loss function while training the DeepEmoCluster framework, which does not increase model complexity. Our experimental results on the MSP-Podcast corpus demonstrate that the proposed temporal-enhanced framework significantly outperforms the vanilla DeepEmoCluster framework and other existing SSL approaches in regression tasks for the emotional attributes arousal, dominance, and valence. The improvements are observed in both fully-supervised and SSL implementations. Further analyses validate the effectiveness of the proposed temporal modeling, showing (1) high temporal consistency in the cluster assignments, and (2) well-separated emotional patterns in the generated clusters.
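The two ideas named in the abstract can be illustrated with a minimal sketch: sentence-level uniform sampling selects chunks whose start frames are evenly spaced so their temporal order is preserved, and a margin-based triplet loss can then pull temporally nearby chunks together while pushing distant ones apart. This is a hypothetical NumPy illustration of these generic concepts, not the authors' implementation; the function names, chunk sizes, and the choice of positive/negative pairs are assumptions.

```python
import numpy as np

def uniform_chunk_sample(features, num_chunks, chunk_len):
    """Sentence-level uniform sampling (sketch): pick num_chunks windows
    whose start frames are evenly spaced over the sentence, so the
    returned chunks keep the original temporal order."""
    T = len(features)
    starts = np.linspace(0, T - chunk_len, num_chunks).astype(int)
    return [features[s:s + chunk_len] for s in starts]  # ordered in time

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard margin-based triplet loss on embedding vectors.
    As a temporal constraint, the 'positive' would be a chunk close in
    time to the anchor and the 'negative' a temporally distant chunk
    (pairing strategy assumed here for illustration)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

# Toy example: a "sentence" of 100 frames with 4-dim features.
feats = np.arange(400, dtype=float).reshape(100, 4)
chunks = uniform_chunk_sample(feats, num_chunks=5, chunk_len=10)
```

Because the chunks stay in temporal order, a recurrent module (e.g., a GRU) can be run over their embeddings to realize the temporal-net variant, whereas the triplet-loss variant adds no parameters at all.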
Pages: 12
Related papers (50 in total)
  • [21] Applying articulatory features to speech emotion recognition
    Zhou, Yu
    Sun, Yanqing
    Yang, Lin
    Yan, Yonghong
    [J]. 2009 INTERNATIONAL CONFERENCE ON RESEARCH CHALLENGES IN COMPUTER SCIENCE, ICRCCS 2009, 2009, : 73 - 76
  • [22] Speech Emotion Recognition using Combination of Features
    Zhang, Qingli
    An, Ning
    Wang, Kunxia
    Ren, Fuji
    Li, Lian
    [J]. PROCEEDINGS OF THE 2013 FOURTH INTERNATIONAL CONFERENCE ON INTELLIGENT CONTROL AND INFORMATION PROCESSING (ICICIP), 2013, : 523 - 528
  • [23] Learning Transferable Features for Speech Emotion Recognition
    Marczewski, Alison
    Veloso, Adriano
    Ziviani, Nivio
    [J]. PROCEEDINGS OF THE THEMATIC WORKSHOPS OF ACM MULTIMEDIA 2017 (THEMATIC WORKSHOPS'17), 2017, : 529 - 536
  • [24] Novel acoustic features for speech emotion recognition
    Roh, Yong-Wan
    Kim, Dong-Ju
    Lee, Woo-Seok
    Hong, Kwang-Seok
    [J]. SCIENCE CHINA TECHNOLOGICAL SCIENCES, 2009, 52 (07) : 1838 - 1848
  • [25] Exploiting the potentialities of features for speech emotion recognition
    Li, Dongdong
    Zhou, Yijun
    Wang, Zhe
    Gao, Daqi
    [J]. INFORMATION SCIENCES, 2021, 548 : 328 - 343
  • [27] Significance of Phonological Features in Speech Emotion Recognition
    Wang, Wei
    Watters, Paul A.
    Cao, Xinyi
    Shen, Lingjie
    Li, Bo
    [J]. INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2020, 23 (03) : 633 - 642
  • [28] Adding dimensional features for emotion recognition on speech
    Ben Letaifa, Leila
    Ines Torres, Maria
    Justo, Raquel
    [J]. 2020 5TH INTERNATIONAL CONFERENCE ON ADVANCED TECHNOLOGIES FOR SIGNAL AND IMAGE PROCESSING (ATSIP'2020), 2020
  • [29] SPEECH EMOTION RECOGNITION WITH ACOUSTIC AND LEXICAL FEATURES
    Jin, Qin
    Li, Chengxin
    Chen, Shizhe
    Wu, Huimin
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2015, : 4749 - 4753
  • [30] Statistical Evaluation of Speech Features for Emotion Recognition
    Iliou, Theodoros
    Anagnostopoulos, Christos-Nikolaos
    [J]. ICDT: 2009 FOURTH INTERNATIONAL CONFERENCE ON DIGITAL TELECOMMUNICATIONS, 2009, : 121 - 126