Categorizing objects from MEG signals using EEGNet

Cited: 5
Authors
Shi, Ran [1]
Zhao, Yanyu [1]
Cao, Zhiyuan [1]
Liu, Chunyu [1]
Kang, Yi [1]
Zhang, Jiacai [1,2]
Affiliations
[1] Beijing Normal Univ, Sch Artificial Intelligence, Beijing 100875, Peoples R China
[2] Minist Educ, Engn Res Ctr Intelligent Technol & Educ Applicat, Beijing 100875, Peoples R China
Keywords
Neural decoding; Magnetoencephalography; Deep learning; Feature fusion; HUMAN BRAIN; PATTERN-ANALYSIS; VISUAL-IMAGERY; CLASSIFICATION; FREQUENCY; STREAM;
DOI
10.1007/s11571-021-09717-7
Chinese Library Classification
Q189 [Neuroscience];
Discipline code
071006 ;
Abstract
Magnetoencephalography (MEG) signals have demonstrated their practical application to reading human minds. Current neural decoding studies have made great progress in building subject-wise decoding models that extract and discriminate the temporal/spatial features in neural signals. In this paper, we used a compact convolutional neural network, EEGNet, to build a common decoder across subjects that deciphers the categories of objects (faces, tools, animals, and scenes) from MEG data. This study investigated the influence of the spatiotemporal structure of MEG data on EEGNet's classification performance. Furthermore, we replaced EEGNet's convolution layers with two sets of parallel convolution structures that extract spatial and temporal features simultaneously. Our results showed that the organization of the MEG data fed into EEGNet affects its classification accuracy, and that the parallel convolution structures are beneficial for extracting and fusing spatial and temporal MEG features. The classification accuracy demonstrated that EEGNet succeeds in building a common decoder model across subjects and outperforms several state-of-the-art feature fusion methods.
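The core architectural idea in the abstract is that two convolution branches run in parallel over the same MEG epoch, one filtering along time and one mixing across sensor channels, and their outputs are concatenated (fused) before classification. The following NumPy sketch is purely illustrative, not the paper's EEGNet implementation; the filter weights, kernel length, and helper names (`temporal_conv`, `spatial_conv`, `parallel_features`) are placeholder assumptions standing in for learned parameters.

```python
import numpy as np

def temporal_conv(x, kernel_len):
    # x: (channels, time) MEG epoch. Apply the same 1-D temporal
    # filter to every sensor channel with "same" padding.
    kernel = np.ones(kernel_len) / kernel_len  # placeholder moving-average filter
    return np.stack([np.convolve(ch, kernel, mode="same") for ch in x])

def spatial_conv(x, weights):
    # Mix all sensor channels at each time point with one spatial
    # filter vector (a stand-in for a learned spatial kernel).
    return weights @ x  # shape: (time,)

def parallel_features(x, kernel_len=5):
    # Parallel branches: temporal and spatial features are computed
    # independently from the same input, then concatenated into one
    # fused feature vector, mirroring the "two sets of parallel
    # convolution structures" described in the abstract.
    t_feat = temporal_conv(x, kernel_len).mean(axis=1)   # (channels,)
    s_filter = np.full(x.shape[0], 1.0 / x.shape[0])     # placeholder weights
    s_feat = spatial_conv(x, s_filter)                   # (time,)
    return np.concatenate([t_feat, s_feat])

rng = np.random.default_rng(0)
epoch = rng.standard_normal((4, 10))   # 4 sensors, 10 time samples
feat = parallel_features(epoch)
print(feat.shape)                      # fused feature vector: (14,)
```

In the actual model the two branches would be trainable convolution layers and the fusion would feed a classifier over the four object categories; here the point is only the data flow: same input, two independent filtering paths, one concatenated output.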
Pages: 365-377
Page count: 13
Related papers
50 records
  • [1] Categorizing objects from MEG signals using EEGNet
    Ran Shi
    Yanyu Zhao
    Zhiyuan Cao
    Chunyu Liu
    Yi Kang
    Jiacai Zhang
    Cognitive Neurodynamics, 2022, 16 : 365 - 377
  • [2] Personality traits classification from EEG signals using EEGNet
    Guleva, Veronika
    Calcagno, Alessandra
    Reali, Pierluigi
    Bianchi, Anna Maria
    2022 IEEE 21ST MEDITERRANEAN ELECTROTECHNICAL CONFERENCE (IEEE MELECON 2022), 2022, : 590 - 594
  • [3] Categorizing Visual Objects Using ERP Components
    Jadidi, Armita Faghani
    Zargar, Banafsheh Shafiei
    Moradi, Mohammad Hassan
    2016 23RD IRANIAN CONFERENCE ON BIOMEDICAL ENGINEERING AND 2016 1ST INTERNATIONAL IRANIAN CONFERENCE ON BIOMEDICAL ENGINEERING (ICBME), 2016, : 154 - 159
  • [4] Automated Classification of Cognitive Visual Objects Using Multivariate Swarm Sparse Decomposition From Multichannel EEG-MEG Signals
    Bhalerao, Shailesh Vitthalrao
    Pachori, Ram Bilas
    IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2024, 54 (04) : 455 - 464
  • [5] Deciphering Motor Imagery EEG Signals of Unilateral Upper Limb Movement using EEGNet
    Krishnamoorthy, Kiruthika
    Loganathan, Ashok Kumar
    ACTA SCIENTIARUM-TECHNOLOGY, 2025, 47 (01)
  • [6] Thesaurus racks: Categorizing rack objects
    Grosfjeld, Tobias
    JOURNAL OF KNOT THEORY AND ITS RAMIFICATIONS, 2021, 30 (04)
  • [7] Reduction of Metallic Interference in MEG Signals Using AMUSE
    Migliorelli, Carolina
    Romero, Sergio
    Alonso, Joan F.
    Nowak, Rafal
    Russi, Antonio
    Mananas, Miguel Angel
    2013 35TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2013, : 5970 - 5973
  • [8] Feature selection in categorizing activities by eye movements using electrooculograph signals
    Mala, S.
    Latha, K.
    2014 INTERNATIONAL CONFERENCE ON SCIENCE ENGINEERING AND MANAGEMENT RESEARCH (ICSEMR), 2014,
  • [9] SEMANTIC RECONSTRUCTION OF CONTINUOUS LANGUAGE FROM MEG SIGNALS
    Wang, Bo
    Xu, Xiran
    Zhang, Longxiang
    Xiao, Boda
    Wu, Xihong
    Chen, Jing
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 2190 - 2194
  • [10] Detecting objects is easier than categorizing them
    Bowers, Jeffrey S.
    Jones, Keely W.
    QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 2008, 61 (04): : 552 - 557