Emotion Recognition from Speech Signal in Multilingual

Cited by: 2
|
Authors
Albu, Corina [1 ]
Lupu, Eugen [1 ]
Arsinte, Radu [1 ]
Affiliations
[1] Tech Univ Cluj Napoca, Commun Dept, 26-28 Baritiu Str, Cluj Napoca, Romania
Keywords
Speech emotion recognition; Affective computing; Feature extraction; Weka; Emotional databases; Features
DOI
10.1007/978-981-13-6207-1_25
CLC number
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Emotion recognition from the speech signal has become increasingly important in advanced human-machine applications. The detailed description and detection of emotions play an important role in psychiatric studies, but also in other fields of medicine such as anamnesis, clinical studies, or lie detection. This paper presents experiments using multilingual emotional databases. As features extracted from the speech material, LPC (Linear Predictive Coding), LPCC (Linear Predictive Cepstral Coefficients), and MFCC (Mel-Frequency Cepstral Coefficients) coefficients are employed. The Weka tool was used for the classification task, selecting the k-NN (k-nearest neighbors) and SVM (Support Vector Machine) classifiers. The results for the selected feature vectors show that the emotion recognition rate is satisfactory when multilingual speech material is used for both training and testing. When training is done on emotional material in one language and testing on material in another, the results are poor. This shows that the features extracted from speech have a close dependency on the spoken language.
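The classification step the abstract describes (Weka's k-NN applied to LPC/LPCC/MFCC feature vectors) can be illustrated with a minimal k-nearest-neighbors sketch. The 2-D vectors and emotion labels below are hypothetical stand-ins for real cepstral feature vectors, not data from the paper, and the pure-Python implementation is an illustration rather than Weka's IBk classifier.

```python
from collections import Counter
import math

def knn_classify(train_vectors, train_labels, query, k=3):
    """Label a query feature vector by majority vote among its
    k nearest training vectors (Euclidean distance)."""
    # Sort all training vectors by distance to the query.
    dists = sorted(
        (math.dist(v, query), label)
        for v, label in zip(train_vectors, train_labels)
    )
    # Majority vote over the k closest neighbors.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D stand-ins for per-utterance MFCC feature vectors.
train_vectors = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
train_labels = ["neutral", "neutral", "anger", "anger"]

print(knn_classify(train_vectors, train_labels, (0.15, 0.15)))  # -> neutral
print(knn_classify(train_vectors, train_labels, (0.85, 0.85)))  # -> anger
```

In the cross-lingual setting the paper examines, `train_vectors` would come from one language's database and the queries from another, which is where the reported recognition rate drops.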
Pages: 157-161
Page count: 5
Related Papers
50 records in total
  • [1] Emotion Recognition from Speech Signal
    Ramdinmawii, Esther
    Mohanta, Abhijit
    Mittal, Vinay Kumar
    [J]. TENCON 2017 - 2017 IEEE REGION 10 CONFERENCE, 2017, : 1562 - 1567
  • [2] Separability and recognition of emotion states in multilingual speech
    Jiang, XQ
    Tian, L
    Han, M
    [J]. 2005 INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CIRCUITS AND SYSTEMS, VOLS 1 AND 2, PROCEEDINGS: VOL 1: COMMUNICATION THEORY AND SYSTEMS, 2005, : 861 - 864
  • [3] Integrating Language and Emotion Features for Multilingual Speech Emotion Recognition
    Heracleous, Panikos
    Mohammad, Yasser
    Yoneyama, Akio
    [J]. HUMAN-COMPUTER INTERACTION. MULTIMODAL AND NATURAL INTERACTION, HCI 2020, PT II, 2020, 12182 : 187 - 196
  • [4] Context-Independent Multilingual Emotion Recognition from Speech Signals
    Hozjan, Vladimir
    Kačič, Zdravko
    [J]. International Journal of Speech Technology, 2003, 6 (3) : 311 - 320
  • [5] Emotion recognition and acoustic analysis from speech signal
    Park, CH
    Sim, KB
    [J]. PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 2594 - 2598
  • [6] Emotion recognition from the facial image and speech signal
    Go, HJ
    Kwak, KC
    Lee, DJ
    Chun, MG
    [J]. SICE 2003 ANNUAL CONFERENCE, VOLS 1-3, 2003, : 2890 - 2895
  • [7] Enhancing multilingual recognition of emotion in speech by language identification
    Sagha, Hesam
    Matejka, Pavel
    Gavryukova, Maryna
    Povolny, Filip
    Marchi, Erik
    Schuller, Bjoern
    [J]. 17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 2949 - 2953
  • [8] Multilingual Emotion Analysis from Speech
    Rani, Poonam
    Tripathi, Astha
    Shoaib, Mohd
    Yadav, Sourabh
    Yadav, Mohit
    [J]. INTERNATIONAL CONFERENCE ON INNOVATIVE COMPUTING AND COMMUNICATIONS, ICICC 2022, VOL 3, 2023, 492 : 443 - 456
  • [9] Automatic emotion recognition by the speech signal
    Schuller, B
    Lang, M
    Rigoll, G
    [J]. 6TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL IX, PROCEEDINGS: IMAGE, ACOUSTIC, SPEECH AND SIGNAL PROCESSING II, 2002, : 367 - 372
  • [10] Emotion recognition from speech signal using fuzzy clustering
    Rovetta, Stefano
    Mnasri, Zied
    Masulli, Francesco
    Cabri, Alberto
    [J]. PROCEEDINGS OF THE 11TH CONFERENCE OF THE EUROPEAN SOCIETY FOR FUZZY LOGIC AND TECHNOLOGY (EUSFLAT 2019), 2019, 1 : 120 - 127