MaxMViT-MLP: Multiaxis and Multiscale Vision Transformers Fusion Network for Speech Emotion Recognition

Cited by: 1
Authors
Ong, Kah Liang [1 ]
Lee, Chin Poo [1 ]
Lim, Heng Siong [2 ]
Lim, Kian Ming [1 ]
Alqahtani, Ali [3 ,4 ]
Affiliations
[1] Multimedia Univ, Fac Informat Sci & Technol, Melaka 75450, Malaysia
[2] Multimedia Univ, Fac Engn & Technol, Melaka 75450, Malaysia
[3] King Khalid Univ, Dept Comp Sci, Abha 61421, Saudi Arabia
[4] King Khalid Univ, Ctr Artificial Intelligence CAI, Abha 61421, Saudi Arabia
Keywords
Speech recognition; Emotion recognition; Spectrogram; Feature extraction; Support vector machines; Transformers; Mel frequency cepstral coefficient; Ensemble learning; Visualization; Speech emotion recognition; ensemble learning; spectrogram; vision transformer; Emo-DB; RAVDESS; IEMOCAP;
DOI
10.1109/ACCESS.2024.3360483
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Vision Transformers, known for their innovative architectural design and modeling capabilities, have gained significant attention in computer vision. This paper presents a dual-path approach that leverages the strengths of the Multi-Axis Vision Transformer (MaxViT) and the Improved Multiscale Vision Transformer (MViTv2). It starts by encoding speech signals into Constant-Q Transform (CQT) spectrograms and Mel spectrograms computed with the Short-Time Fourier Transform (Mel-STFT). The CQT spectrogram is then fed into the MaxViT model, while the Mel-STFT spectrogram is input to the MViTv2 model, to extract informative features from the spectrograms. These features are integrated and passed into a Multilayer Perceptron (MLP) model for final classification. This hybrid model is named the "MaxViT and MViTv2 Fusion Network with Multilayer Perceptron (MaxMViT-MLP)." The MaxMViT-MLP model achieves remarkable results with an accuracy of 95.28% on the Emo-DB dataset, 89.12% on the RAVDESS dataset, and 68.39% on the IEMOCAP dataset, substantiating the advantages of integrating multiple audio feature representations and Vision Transformers in speech emotion recognition.
Pages: 18237-18250 (14 pages)
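
The abstract describes a dual-path pipeline: a CQT spectrogram feeds a MaxViT branch, a Mel-STFT spectrogram feeds an MViTv2 branch, and the fused branch features are classified by an MLP. The sketch below is a minimal illustration of that idea, using librosa for the two spectrogram representations and timm backbones for the two branches. The specific backbone variants (maxvit_tiny_tf_224, mvitv2_tiny), the hidden width, and the dropout rate are assumptions for illustration only, not the configuration reported in the paper.

```python
# Minimal sketch of the dual-path fusion idea from the abstract. Backbone
# names, MLP width, and dropout are illustrative assumptions, not the
# authors' exact setup.
import numpy as np
import librosa
import torch
import torch.nn as nn
import timm


def speech_to_spectrograms(y: np.ndarray, sr: int):
    """Compute the two input representations named in the abstract:
    a CQT spectrogram and a Mel-STFT spectrogram, both in dB."""
    cqt_db = librosa.amplitude_to_db(np.abs(librosa.cqt(y=y, sr=sr)), ref=np.max)
    mel_db = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr), ref=np.max)
    return cqt_db, mel_db


class MaxMViTMLP(nn.Module):
    """Dual path: CQT image -> MaxViT, Mel-STFT image -> MViTv2,
    concatenated pooled features -> MLP classifier."""

    def __init__(self, num_classes: int, hidden_dim: int = 512):
        super().__init__()
        # num_classes=0 makes the timm backbones return pooled features.
        self.cqt_branch = timm.create_model("maxvit_tiny_tf_224", num_classes=0)
        self.mel_branch = timm.create_model("mvitv2_tiny", num_classes=0)
        fused_dim = self.cqt_branch.num_features + self.mel_branch.num_features
        self.mlp = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),              # illustrative regularisation choice
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, cqt_img: torch.Tensor, mel_img: torch.Tensor) -> torch.Tensor:
        # Both inputs are spectrograms rendered as 3-channel images, (B, 3, 224, 224).
        f_cqt = self.cqt_branch(cqt_img)
        f_mel = self.mel_branch(mel_img)
        return self.mlp(torch.cat([f_cqt, f_mel], dim=1))


if __name__ == "__main__":
    y = librosa.tone(440, sr=22050, duration=1.0)   # placeholder waveform
    cqt_db, mel_db = speech_to_spectrograms(y, sr=22050)
    print(cqt_db.shape, mel_db.shape)               # e.g. (84, 44) (128, 44)

    model = MaxMViTMLP(num_classes=7)               # e.g. the seven Emo-DB emotions
    cqt_img = torch.randn(2, 3, 224, 224)           # stand-ins for rendered spectrograms
    mel_img = torch.randn(2, 3, 224, 224)
    print(model(cqt_img, mel_img).shape)            # torch.Size([2, 7])
```

In practice the dB spectrograms would be rendered and resized into the 3-channel 224x224 images the backbones expect; that rendering step is omitted here, and random tensors stand in for it in the demo.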