Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages

Cited: 1
Authors
Retta, Ephrem Afele [1 ]
Sutcliffe, Richard [2 ]
Mahmood, Jabar [3 ,4 ]
Berwo, Michael Abebe [4 ]
Almekhlafi, Eiad [1 ]
Khan, Sajjad Ahmad [5 ]
Chaudhry, Shehzad Ashraf [6 ,7 ]
Mhamed, Mustafa [1 ,8 ]
Feng, Jun [1 ]
Affiliations
[1] Northwest Univ, Sch Informat Sci & Technol, Xian 710127, Peoples R China
[2] Univ Essex, Sch Comp Sci & Elect Engn, Wivenhoe Pk, Colchester CO4 3SQ, England
[3] Univ Sialkot, Fac Comp & Informat Technol, Sialkot 51040, Punjab, Pakistan
[4] Changan Univ, Sch Informat & Engn, Xian 710064, Peoples R China
[5] Hoseo Univ, Comp Engn Dept, Asan 31499, South Korea
[6] Abu Dhabi Univ, Coll Engn, Dept Comp Sci & Informat Technol, Abu Dhabi 59911, U Arab Emirates
[7] Nisantasi Univ, Fac Engn & Architecture, Dept Software Engn, TR-34398 Istanbul, Turkiye
[8] China Agr Univ, Coll Informat & Elect Engn, Beijing 100083, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 23
Keywords
speech emotion recognition; multilingual; cross-lingual; feature extraction;
DOI
10.3390/app132312587
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
In a conventional speech emotion recognition (SER) task, a classifier for a given language is trained on a pre-existing dataset for that same language. However, where training data for a language do not exist, data from other languages can be used instead. We experiment with cross-lingual and multilingual SER, working with Amharic, English, German, and Urdu. For Amharic, we use our own publicly available Amharic Speech Emotion Dataset (ASED). For English, German, and Urdu, we use the existing RAVDESS, EMO-DB, and URDU datasets. Following previous research, we map the labels of all datasets to just two classes, positive and negative, so that performance on different languages can be compared directly and languages can be combined for training and testing. In Experiment 1, monolingual SER trials were carried out using three classifiers: AlexNet, VGGE (a proposed variant of VGG), and ResNet50. The results, averaged over the three models, were very similar for ASED and RAVDESS, suggesting that Amharic and English SER are equally difficult; by the same measure, German SER is more difficult and Urdu SER easier. In Experiment 2, we trained on one language and tested on another, in both directions for each of the pairs Amharic <-> German, Amharic <-> English, and Amharic <-> Urdu. With Amharic as the target, using English or German as the source gave the best results. In Experiment 3, we trained on several non-Amharic languages and then tested on Amharic. The best accuracy obtained was several percentage points higher than the best accuracy in Experiment 2, suggesting that training on two or three non-Amharic languages can give a better result than training on just one. Overall, the results suggest that cross-lingual and multilingual training can be an effective strategy for training an SER classifier when resources for a language are scarce.
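The binary label mapping and cross-lingual train/test arrangement described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact grouping of fine-grained emotions into positive/negative, and all function and variable names (`to_binary`, `cross_lingual_split`), are assumptions for the sake of the example.

```python
# Hypothetical grouping of fine-grained emotion labels into the two
# classes used in the paper (positive / negative); the actual grouping
# follows the prior work cited by the authors and may differ.
POSITIVE = {"happy", "calm", "neutral", "surprise"}
NEGATIVE = {"angry", "sad", "fearful", "disgust", "bored"}

def to_binary(label: str) -> str:
    """Collapse a dataset-specific emotion label to 'positive' or 'negative'."""
    if label in POSITIVE:
        return "positive"
    if label in NEGATIVE:
        return "negative"
    raise ValueError(f"unknown emotion label: {label}")

def cross_lingual_split(datasets, source_langs, target_lang):
    """Train on source-language corpora, test on the target language.

    `datasets` maps a language name to a list of (features, label) pairs.
    Collapsing every corpus to the shared binary scheme is what lets
    corpora with different label sets be combined for training.
    """
    train = [(x, to_binary(y))
             for lang in source_langs
             for x, y in datasets[lang]]
    test = [(x, to_binary(y)) for x, y in datasets[target_lang]]
    return train, test
```

For example, Experiment 3's setup of training on two non-Amharic languages and testing on Amharic corresponds to `cross_lingual_split(datasets, ["english", "german"], "amharic")`.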
Pages: 17