Decoding Visual Motions from EEG Using Attention-Based RNN

Times Cited: 8
Authors
Yang, Dongxu [1]
Liu, Yadong [1]
Zhou, Zongtan [1]
Yu, Yang [1]
Liang, Xinbin [1]
Affiliations
[1] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha 410073, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2020, Vol. 10, Issue 16
Funding
National Natural Science Foundation of China
Keywords
electroencephalography; attention mechanisms; recurrent neural networks; data augmentation; brain-computer interface; visual motion perception; NEURAL-NETWORKS;
DOI
10.3390/app10165662
CLC Number
O6 [Chemistry]
Subject Classification Code
0703
Abstract
The main objective of this paper is to use deep neural networks to decode the electroencephalography (EEG) signals evoked when individuals perceive four types of motion stimuli (contraction, expansion, rotation, and translation). Methods for both single-trial and multi-trial EEG classification are investigated in this study. Attention mechanisms and a variant of recurrent neural networks (RNNs) are incorporated into the decoding model. The attention mechanisms emphasize task-related responses and reduce redundant information in the EEG, whereas the RNN learns feature representations for classification from the processed EEG data. To promote generalization of the decoding model, a novel online data augmentation method that randomly averages EEG sequences to generate artificial signals is proposed for single-trial EEG. On our dataset, this data augmentation method improves the accuracy of our model (based on an RNN) and of two benchmark models (based on convolutional neural networks) by 5.60%, 3.92%, and 3.02%, respectively. With data augmentation, the attention-based RNN reaches a mean accuracy of 67.18% for single-trial EEG decoding. When performing multi-trial EEG classification, the amount of training data decreases linearly after averaging, which may result in poor generalization. To address this deficiency, we devised three schemes that randomly combine data for network training. The results indicate that the proposed strategies effectively prevent overfitting and improve the correct classification rate compared with averaging EEG trials in a fixed manner (by up to 19.20%). The highest accuracy among the three strategies for multi-trial EEG classification reaches 82.92%. The decoding performance of the proposed methods indicates their application potential in brain-computer interface (BCI) systems based on visual motion perception.
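As an illustration of the random-averaging augmentation described in the abstract, the Python sketch below shows one plausible way an artificial single-trial EEG example could be generated online by averaging randomly selected same-class trials. The function name, array shapes, and the number of trials averaged are assumptions made for illustration only; the authors' actual implementation may differ.

```python
import numpy as np

def random_average_augment(trials, labels, n_avg=2, rng=None):
    """Create one artificial EEG trial by averaging randomly chosen
    same-class trials (illustrative sketch, not the authors' code).

    trials : ndarray, shape (n_trials, n_channels, n_samples)
    labels : ndarray, shape (n_trials,)
    n_avg  : number of same-class trials to average (assumed value)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Choose a class at random, then sample n_avg distinct trials of that class.
    cls = rng.choice(np.unique(labels))
    idx = rng.choice(np.flatnonzero(labels == cls), size=n_avg, replace=False)
    # Averaging attenuates trial-specific noise while preserving the
    # class-related response, so the artificial trial keeps the class label.
    return trials[idx].mean(axis=0), cls
```

Called once per training step, such a routine augments the training set without materializing a fixed pool of averaged examples, which matches the online character of the augmentation described in the abstract.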
Pages: 18