Emotion Classification Based on Transformer and CNN for EEG Spatial-Temporal Feature Learning

Cited: 3
Authors
Yao, Xiuzhen [1,2]
Li, Tianwen [2,3]
Ding, Peng [1,2]
Wang, Fan [1,2]
Zhao, Lei [2,3]
Gong, Anmin [4]
Nan, Wenya [5]
Fu, Yunfa [1,2]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Peoples R China
[2] Kunming Univ Sci & Technol, Brain Cognit & Brain Comp Intelligence Integrat Gr, Kunming 650500, Peoples R China
[3] Kunming Univ Sci & Technol, Fac Sci, Kunming 650500, Peoples R China
[4] Chinese Peoples Armed Police Force Engn Univ, Sch Informat Engn, Xian 710086, Peoples R China
[5] Shanghai Normal Univ, Coll Educ, Dept Psychol, Shanghai 200234, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
EEG; emotion classification; transformer; CNN; multi-head attention;
DOI
10.3390/brainsci14030268
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Objectives: The temporal and spatial information in electroencephalogram (EEG) signals is crucial for emotion classification models, but extracting it has relied excessively on manual feature engineering. The transformer model is capable of automatic feature extraction; however, its potential has not been fully explored in the classification of emotion-related EEG signals. To address these challenges, the present study proposes a novel model based on transformers and convolutional neural networks (TCNN) for EEG spatial-temporal (EEG ST) feature learning and automatic emotion classification. Methods: The proposed EEG ST-TCNN model uses position encoding (PE) and multi-head attention to perceive channel positions and timing information in EEG signals. Two parallel transformer encoders extract spatial and temporal features from emotion-related EEG signals, a CNN aggregates these spatial and temporal features, and a Softmax layer performs the final classification. Results: The proposed EEG ST-TCNN model achieved an accuracy of 96.67% on the SEED dataset, and accuracies of 95.73%, 96.95%, and 96.34% for the arousal-valence, arousal, and valence dimensions, respectively, on the DEAP dataset. Conclusions: The results demonstrate the effectiveness of the proposed ST-TCNN model, which outperforms recent relevant studies in emotion classification. Significance: The proposed EEG ST-TCNN model has the potential to be used for EEG-based automatic emotion recognition.
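As a rough illustration of the pipeline the abstract describes, the PyTorch sketch below wires together the named components: learnable position encodings, two parallel transformer encoders with multi-head attention (one treating EEG channels as tokens for spatial features, one treating time steps as tokens for temporal features), a small CNN that aggregates both feature streams, and a final classifier whose logits are passed through Softmax. All hyperparameters here (d_model, n_heads, the 62-channel/200-sample input shape, CNN width) are illustrative assumptions, not the authors' published settings.

# Minimal sketch of an ST-TCNN-style model, assuming illustrative layer sizes.
import torch
import torch.nn as nn

class STTCNN(nn.Module):
    def __init__(self, n_channels=62, n_samples=200, n_classes=3,
                 d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Project each branch's tokens into a shared embedding dimension.
        self.spatial_proj = nn.Linear(n_samples, d_model)   # token = one channel
        self.temporal_proj = nn.Linear(n_channels, d_model) # token = one time step
        # Learnable position encodings expose channel order and timing to attention.
        self.spatial_pe = nn.Parameter(torch.zeros(1, n_channels, d_model))
        self.temporal_pe = nn.Parameter(torch.zeros(1, n_samples, d_model))
        def make_encoder():
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=128,
                                               batch_first=True)
            return nn.TransformerEncoder(layer, n_layers)
        self.spatial_enc = make_encoder()   # parallel encoder 1: spatial features
        self.temporal_enc = make_encoder()  # parallel encoder 2: temporal features
        # 1-D CNN aggregates the concatenated spatial and temporal token features.
        self.cnn = nn.Sequential(
            nn.Conv1d(d_model, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)  # logits; Softmax applied downstream

    def forward(self, x):                    # x: (batch, n_channels, n_samples)
        s = self.spatial_enc(self.spatial_proj(x) + self.spatial_pe)
        t = self.temporal_enc(self.temporal_proj(x.transpose(1, 2)) + self.temporal_pe)
        feats = torch.cat([s, t], dim=1)     # (batch, n_channels + n_samples, d_model)
        feats = self.cnn(feats.transpose(1, 2)).squeeze(-1)  # (batch, 32)
        return self.fc(feats)

A forward pass on a dummy batch, e.g. logits = STTCNN()(torch.randn(8, 62, 200)), yields per-class logits; torch.softmax(logits, dim=1) then gives the emotion-class probabilities.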
Pages: 15
Related Papers
50 records in total
  • [31] An approach to quantifying the multi-channel EEG spatial-temporal feature
    Lo, PC
    Chung, WP
    BIOMETRICAL JOURNAL, 2000, 42(07): 901-914
  • [32] Learning a spatial-temporal texture transformer network for video inpainting
    Ma, Pengsen
    Xue, Tao
    FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [33] Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification
    Emsawas, Taweesak
    Morita, Takashi
    Kimura, Tsukasa
    Fukui, Ken-ichi
    Numao, Masayuki
    SENSORS, 2022, 22 (21)
  • [34] Subject-independent emotion recognition of EEG signals using graph attention-based spatial-temporal pattern learning
    Zhu, Yiwen
    Guo, Yeshuang
    Zhu, Wenzhe
    Di, Lare
    Yin, Thong
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022: 7070-7075
  • [35] Spatial-temporal network for fine-grained-level emotion EEG recognition
    Ji, Youshuo
    Li, Fu
    Fu, Boxun
    Li, Yang
    Zhou, Yijin
    Niu, Yi
    Zhang, Lijian
    Chen, Yuanfang
    Shi, Guangming
    JOURNAL OF NEURAL ENGINEERING, 2022, 19 (03)
  • [36] RANDOM-SAMPLING-BASED SPATIAL-TEMPORAL FEATURE FOR CONSUMER VIDEO CONCEPT CLASSIFICATION
    Wei, Anjun
    Pei, Yuru
    Zha, Hongbin
    2012 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2012), 2012: 1861-1864
  • [37] EEG-Based Emotion Recognition Using Spatial-Temporal Graph Convolutional LSTM With Attention Mechanism
    Feng, Lin
    Cheng, Cheng
    Zhao, Mingyan
    Deng, Huiyuan
    Zhang, Yong
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2022, 26(11): 5406-5417
  • [38] EEG-based Emotion Recognition Using Spatial-Temporal Representation via Bi-GRU
    Lew, Wai-Cheong Lincoln
    Wang, Di
    Shylouskaya, Katsiaryna
    Zhang, Zhuo
    Lim, Joo-Hwee
    Ang, Kai Keng
    Tan, Ah-Hwee
    42ND ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY: ENABLING INNOVATIVE TECHNOLOGIES FOR GLOBAL HEALTHCARE (EMBC'20), 2020: 116-119
  • [39] Fast Spatial-Temporal Transformer Network
    Escher, Rafael Molossi
    de Bem, Rodrigo Andrade
    Drews Jr, Paulo Lilles Jorge
    2021 34TH SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI 2021), 2021: 65-72
  • [40] Violent video classification based on spatial-temporal cues using deep learning
    Xu, Xingyu
    Wu, Xiaoyu
    Wang, Ge
    Wang, Huimin
    2018 11TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 1, 2018: 319-322