Attention-Based Multiscale Spatial-Temporal Convolutional Network for Motor Imagery EEG Decoding

Cited by: 1
Authors
Zhang, Yu [1 ]
Li, Penghai [1 ]
Cheng, Longlong [2 ]
Li, Mingji [1 ,3 ]
Li, Hongji
Affiliations
[1] Tianjin Univ Technol, Sch Integrated Circuit Sci & Engn, Tianjin 300191, Peoples R China
[2] China Elect Cloud Brain Tianjin Technol Co Ltd, Tianjin 300309, Peoples R China
[3] Tianjin Univ Technol, Sch Chem & Chem Engn, Tianjin 300191, Peoples R China
Keywords
Feature extraction; Electroencephalography; Decoding; Brain modeling; Convolutional neural networks; Data mining; Artificial intelligence; Attention; electroencephalography (EEG); deep learning; motor imagery; multi-scale CNN; feature fusion; temporal convolution network (TCN); NEURAL-NETWORKS;
DOI
10.1109/TCE.2023.3330423
CLC classification numbers
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject classification codes
0808; 0809
Abstract
Motor imagery (MI) electroencephalography (EEG) has been used in consumer products supported by brain-computer interfaces (BCIs), with such electronics spanning domains from artificial intelligence (AI) to the Internet of Things (IoT). However, limited MI-EEG decoding performance has restricted further development of the related consumer electronics (CE) industry. To address this problem, this paper proposes an attention-based multiscale spatial-temporal convolutional network (AMSTCNet). First, a multi-branch structure is designed to extract high-dimensional spatial-temporal representations at different scales. Second, Squeeze-Excite-Compress (SEC) blocks are proposed to highlight feature responses within each scale, and the resulting features are fused with learned weights to reduce information redundancy. Finally, an attention-based temporal convolutional network captures deep temporal information in the signal and dynamically fuses the features across scales. In addition, AMSTCNet is an end-to-end decoder that takes raw EEG signals as input. We evaluated the decoding performance of AMSTCNet on the BCI Competition IV 2a dataset and the High Gamma dataset, achieving recognition accuracies of 87.55% and 96.35%, respectively. Compared with existing methods, our method achieves satisfactory decoding performance and can greatly facilitate the application of BCI technology in CE.
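For readers who want a concrete picture of the pipeline the abstract outlines (parallel multiscale spatial-temporal branches over raw EEG, SEC channel attention within each branch, and attention-based temporal-convolution fusion before classification), the following is a minimal PyTorch sketch of such an architecture. It is not the authors' implementation: the kernel sizes, filter counts, SEC internals, and the attention/TCN configuration shown here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SECBlock(nn.Module):
    """Squeeze-Excite-Compress style channel attention (illustrative, not the paper's exact block)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)            # squeeze: global context per feature map
        self.excite = nn.Sequential(                      # excite: channel-wise gating weights
            nn.Linear(channels, channels // reduction),
            nn.ELU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.compress = nn.Conv2d(channels, channels // 2, kernel_size=1)  # compress: fewer channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, channels, 1, time)
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return self.compress(x * w)                        # reweight responses, then reduce redundancy


class MultiScaleBranch(nn.Module):
    """One branch: temporal convolution at a given scale, then a depthwise spatial filter over electrodes."""

    def __init__(self, n_electrodes: int, temporal_kernel: int, n_filters: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, n_filters, (1, temporal_kernel), padding=(0, temporal_kernel // 2)),
            nn.BatchNorm2d(n_filters),
            nn.Conv2d(n_filters, n_filters, (n_electrodes, 1), groups=n_filters),
            nn.BatchNorm2d(n_filters),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (batch, 1, n_electrodes, n_samples)
        return self.net(x)


class AMSTCNetSketch(nn.Module):
    """End-to-end decoder from raw EEG (batch, 1, n_electrodes, n_samples) to class logits."""

    def __init__(self, n_electrodes: int = 22, n_classes: int = 4,
                 scales=(16, 32, 64), n_filters: int = 32):
        super().__init__()
        self.branches = nn.ModuleList(MultiScaleBranch(n_electrodes, k, n_filters) for k in scales)
        self.sec = nn.ModuleList(SECBlock(n_filters) for _ in scales)
        fused = (n_filters // 2) * len(scales)
        # Stand-in for the attention-based temporal convolutional network:
        # multi-head self-attention over time followed by a dilated convolution stack.
        self.attn = nn.MultiheadAttention(fused, num_heads=4, batch_first=True)
        self.tcn = nn.Sequential(
            nn.Conv1d(fused, fused, kernel_size=4, dilation=1, padding=3),
            nn.ELU(),
            nn.Conv1d(fused, fused, kernel_size=4, dilation=2, padding=6),
            nn.ELU(),
        )
        self.classifier = nn.Linear(fused, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [sec(branch(x)) for branch, sec in zip(self.branches, self.sec)]
        f = torch.cat(feats, dim=1).squeeze(2).transpose(1, 2)  # (batch, time, fused)
        f, _ = self.attn(f, f, f)                               # attention over the fused time course
        f = self.tcn(f.transpose(1, 2))                         # (batch, fused, time')
        return self.classifier(f.mean(dim=-1))                  # temporal pooling + linear head


if __name__ == "__main__":
    model = AMSTCNetSketch()
    logits = model(torch.randn(8, 1, 22, 1000))  # 8 trials, 22 electrodes, 1000 samples (e.g., 4 s at 250 Hz)
    print(logits.shape)                          # torch.Size([8, 4])
```

In a sketch like this, the SEC blocks act as per-branch channel attention and the attention/TCN stage fuses the concatenated scales over time; each component could be swapped for the exact blocks described in the paper.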
Pages: 2423-2434
Number of pages: 12