Multimodal Classification with Deep Convolutional-Recurrent Neural Networks for Electroencephalography

Cited by: 24
Authors
Tan, Chuanqi [1 ]
Sun, Fuchun [1 ]
Zhang, Wenchang [1 ]
Chen, Jianhua [1 ]
Liu, Chunfang [1 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Tsinghua Natl Lab Informat Sci & Technol TNList, State Key Lab Intelligent Technol & Syst, Beijing, Peoples R China
Keywords
Multimodal; EEG classification; Optical flow; Deep learning; CNN; RNN
DOI
10.1007/978-3-319-70096-0_78
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Code
081104; 0812; 0835; 1405
Abstract
Electroencephalography (EEG) has become the most significant input signal for brain-computer interface (BCI) based systems. However, it is difficult to obtain satisfactory classification accuracy because traditional methods cannot fully exploit multimodal information. Herein, we propose a novel approach to modeling cognitive events from EEG data by reducing the task to a video classification problem, which is designed to preserve the multimodal information of EEG. In addition, optical flow is introduced to represent the temporal-variation information of EEG. We train a deep neural network (DNN) combining a convolutional neural network (CNN) and a recurrent neural network (RNN) for the EEG classification task, using EEG video and optical flow as inputs. The experiments demonstrate that our approach offers greater robustness and higher accuracy in EEG classification tasks. Based on this approach, we designed a mixed BCI-based rehabilitation support system to help stroke patients perform basic operations.
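The abstract's pipeline (EEG → video frames → motion features → CNN features → RNN over time → class scores) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the 4x4 scalp grid, the frame-difference stand-in for optical flow, the single random convolution kernel, and the tanh RNN are all hypothetical simplifications chosen to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def eeg_to_video(eeg, grid=(4, 4)):
    """Reshape a (channels, time) EEG array into a (time, H, W) 'video',
    assuming channels map onto a scalp grid (hypothetical layout)."""
    c, t = eeg.shape
    h, w = grid
    assert c == h * w
    return eeg.T.reshape(t, h, w)

def motion_frames(video):
    """Crude stand-in for optical flow: temporal differences between frames."""
    return np.diff(video, axis=0)

def conv2d(frame, kernel):
    """Valid-mode 2-D convolution, single input channel, single kernel."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def rnn_last_state(features, hidden=8):
    """Simple tanh RNN over per-frame feature vectors; returns final state."""
    d = features.shape[1]
    Wx = rng.normal(scale=0.1, size=(hidden, d))
    Wh = rng.normal(scale=0.1, size=(hidden, hidden))
    h = np.zeros(hidden)
    for x in features:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

# Toy 16-channel EEG over 20 time steps -> 4x4 "video" frames.
eeg = rng.normal(size=(16, 20))
video = eeg_to_video(eeg)        # (20, 4, 4) appearance frames
flow = motion_frames(video)      # (19, 4, 4) motion frames

# Per-step fusion of appearance and motion features from the CNN stage.
kernel = rng.normal(scale=0.1, size=(2, 2))
feats = np.stack([
    np.concatenate([conv2d(f, kernel).ravel(), conv2d(m, kernel).ravel()])
    for f, m in zip(video[1:], flow)
])                               # (19, 18)

state = rnn_last_state(feats)            # RNN summarizes the sequence
logits = rng.normal(size=(2, 8)) @ state  # hypothetical 2-class head
```

A real system would replace the random kernel and weights with trained CNN/RNN parameters and use a proper optical-flow estimator; the point here is only the two-stream video-style treatment of EEG.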
Pages: 767-776
Page count: 10
Related Papers
50 records
  • [1] CONVOLUTIONAL-RECURRENT NEURAL NETWORKS FOR SPEECH ENHANCEMENT
    Zhao, Han
    Zarar, Shuayb
    Tashev, Ivan
    Lee, Chin-Hui
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 2401 - 2405
  • [2] A Deep Convolutional-Recurrent Neural Network Architecture for Parkinson's Disease EEG Classification
    Lee, Soojin
    Hussein, Ramy
    McKeown, Martin J.
    [J]. 2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019,
  • [3] Combining Very Deep Convolutional Neural Networks and Recurrent Neural Networks for Video Classification
    Kiziltepe, Rukiye Savran
    Gan, John Q.
    Escobar, Juan Jose
    [J]. ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2019, PT II, 2019, 11507 : 811 - 822
  • [4] A Sketch Recognition Method Based on Deep Convolutional-Recurrent Neural Network
    [J]. 2018, Institute of Computing Technology (30):
  • [5] Multiple attention convolutional-recurrent neural networks for speech emotion recognition
    Zhang, Zhihao
    Wang, Kunxia
    [J]. 2022 10TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS, ACIIW, 2022,
  • [6] EEG Emotion Recognition using Parallel Hybrid Convolutional-Recurrent Neural Networks
    Putri, Nursilva Aulianisa
    Djamal, Esmeralda Contessa
    Nugraha, Fikri
    Kasyidi, Fatan
    [J]. 2022 INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ITS APPLICATIONS (ICODSA), 2022, : 24 - 29
  • [7] Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
    Ordonez, Francisco Javier
    Roggen, Daniel
    [J]. SENSORS, 2016, 16 (01)
  • [8] Electroencephalography Image Classification Using Convolutional Neural Networks
    Galety, Mohammad Gouse
    Al-Mukhtar, Firas
    Rofoo, Fanar
    Sriharsha, A. V.
    Maaroof, Rebaz
    [J]. PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON INNOVATIONS IN COMPUTING RESEARCH (ICR'22), 2022, 1431 : 42 - 52
  • [9] Recurrent and convolutional neural networks for deep terrain classification by autonomous robots
    Vulpi, Fabio
    Milella, Annalisa
    Marani, Roberto
    Reina, Giulio
    [J]. JOURNAL OF TERRAMECHANICS, 2021, 96 : 119 - 131
  • [10] Convolutional-Recurrent Neural Networks With Multiple Attention Mechanisms for Speech Emotion Recognition
    Jiang, Pengxu
    Xu, Xinzhou
    Tao, Huawei
    Zhao, Li
    Zou, Cairong
    [J]. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2022, 14 (04) : 1564 - 1573