An emotion recognition method based on EWT-3D-CNN-BiLSTM-GRU-AT model

Cited by: 1
Authors
Celebi, Muharrem [1 ]
Ozturk, Sitki [1 ]
Kaplan, Kaplan [2 ]
Affiliations
[1] Kocaeli Univ, Elect & Commun Engn, TR-41001 Kocaeli, Turkiye
[2] Kocaeli Univ, Software Engn, TR-41001 Kocaeli, Turkiye
Keywords
Electroencephalogram; Emotion recognition; Empirical wavelet transform; Deep learning; 3-D; Convolutional neural networks; Bidirectional long-short-term memory; Gated recurrent unit; Self-attention; EEG; CNN;
DOI
10.1016/j.compbiomed.2024.107954
Chinese Library Classification
Q [Biological Sciences]
Discipline Codes
07; 0710; 09
Abstract
Emotion recognition has become a significant study area in recent years because of its use in brain-machine interaction (BMI). The robustness problem of emotion classification is one of the most basic concerns in improving the quality of emotion recognition systems. Of the two main branches of approaches to this problem, one extracts features through manual engineering, and the other is the artificial intelligence approach, which infers features from the EEG data directly. This study proposes a novel method that considers the characteristic behavior of EEG recordings and is based on the artificial intelligence approach. The EEG signal is noisy, non-stationary, and non-linear. Using the Empirical Wavelet Transform (EWT) signal decomposition method, the signal's frequency components are obtained. Then, frequency-based, linear, and non-linear features are extracted. The resulting features are mapped onto a 2-D plane according to the positions of the EEG electrodes. By merging these 2-D images, 3-D images are constructed. In this way, the multichannel frequency, spatial, and temporal relationships of the EEG recordings are combined. Lastly, a 3-D deep learning framework was constructed that combines a convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), and a gated recurrent unit (GRU) with self-attention (AT). This model is named EWT-3D-CNN-BiLSTM-GRU-AT. As a result, we have created a framework in which handcrafted features are cascaded into state-of-the-art deep learning models. The framework is evaluated on the DEAP recordings using a person-independent approach. The experimental findings demonstrate that the developed model achieves classification accuracies of 90.57% and 90.59% for the valence and arousal axes, respectively, on the DEAP database. Compared with existing cutting-edge emotion classification models, the proposed framework exhibits superior results for classifying human emotions.
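To make the described pipeline concrete, the sketch below shows a minimal PyTorch model in the spirit of the EWT-3D-CNN-BiLSTM-GRU-AT architecture: a 3-D convolution over stacked electrode-grid feature maps, a BiLSTM followed by a GRU, self-attention over the temporal sequence, and a linear classifier. The tensor shapes, the 9x9 electrode grid, the number of feature maps, the layer widths, and the EWT3DCNNBiLSTMGRUAT class name are all illustrative assumptions, not the authors' published code; the EWT decomposition and feature-extraction steps that produce the 3-D inputs are omitted.

# A minimal, hypothetical sketch of an EWT-3D-CNN-BiLSTM-GRU-AT style model in PyTorch.
# Tensor shapes, layer sizes, and the 9x9 electrode grid are assumptions for illustration;
# this is NOT the authors' implementation.
import torch
import torch.nn as nn


class EWT3DCNNBiLSTMGRUAT(nn.Module):
    def __init__(self, n_feature_maps=4, grid_size=9, n_classes=2,
                 cnn_channels=32, rnn_hidden=64, attn_heads=4):
        super().__init__()
        # 3-D convolution over (feature maps, time, electrode-grid height, width).
        self.cnn = nn.Sequential(
            nn.Conv3d(n_feature_maps, cnn_channels, kernel_size=(3, 3, 3), padding=1),
            nn.BatchNorm3d(cnn_channels),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool only the spatial dims, keep time
        )
        spatial = grid_size // 2                   # 9 -> 4 after spatial pooling
        feat_dim = cnn_channels * spatial * spatial
        # Recurrent stack: BiLSTM followed by GRU, both batch-first.
        self.bilstm = nn.LSTM(feat_dim, rnn_hidden, batch_first=True, bidirectional=True)
        self.gru = nn.GRU(2 * rnn_hidden, rnn_hidden, batch_first=True)
        # Self-attention over the temporal sequence produced by the GRU.
        self.attn = nn.MultiheadAttention(rnn_hidden, attn_heads, batch_first=True)
        self.classifier = nn.Linear(rnn_hidden, n_classes)

    def forward(self, x):
        # x: (batch, feature_maps, time, grid, grid)
        h = self.cnn(x)                            # (B, C, T, H', W')
        b, c, t, hh, ww = h.shape
        h = h.permute(0, 2, 1, 3, 4).reshape(b, t, c * hh * ww)  # (B, T, feat_dim)
        h, _ = self.bilstm(h)                      # (B, T, 2 * rnn_hidden)
        h, _ = self.gru(h)                         # (B, T, rnn_hidden)
        h, _ = self.attn(h, h, h)                  # self-attention: Q = K = V
        return self.classifier(h.mean(dim=1))      # temporal average pooling -> logits

if __name__ == "__main__":
    # Toy forward pass: batch of 8 trials, 4 EWT-derived feature maps,
    # 60 time windows, 9x9 electrode grid (all values assumed for illustration).
    model = EWT3DCNNBiLSTMGRUAT()
    out = model(torch.randn(8, 4, 60, 9, 9))
    print(out.shape)  # torch.Size([8, 2]) -> valence or arousal logits

In this sketch a single model predicts one two-class axis (valence or arousal); per the abstract, the paper reports the two axes separately, which here would simply mean training two such models or two output heads.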
Pages: 16