MutaPT: A Multi-Task Pre-Trained Transformer for Classifying State of Disorders of Consciousness Using EEG Signal

Times Cited: 0
Authors
Wang, Zihan [1 ]
Yu, Junqi [1 ]
Gao, Jiahui [2 ]
Bai, Yang [3 ]
Wan, Zhijiang [1 ,4 ]
Affiliations
[1] Nanchang Univ, Sch Informat Engn, Nanchang 330031, Peoples R China
[2] Nanchang Univ, Sch Publ Policy & Adm, Nanchang 330031, Peoples R China
[3] Nanchang Univ, Jiangxi Med Coll, Affiliated Rehabil Hosp, Nanchang 330031, Peoples R China
[4] Nanchang Univ, Ind Inst Artificial Intelligence, Nanchang 330031, Peoples R China
Keywords
disorders of consciousness; deep learning; self-supervised pre-training; transformer; DOC state classification; convolutional neural networks
DOI
10.3390/brainsci14070688
CLC Number
Q189 [Neuroscience]
Discipline Code
071006
Abstract
Deep learning (DL) has been demonstrated to be a valuable tool for classifying the state of disorders of consciousness (DOC) from EEG signals. However, DL-based DOC state classification is often hampered by the limited size of available EEG datasets. To overcome this issue, we introduce multiple open-source EEG datasets to increase the data volume and train a novel multi-task pre-trained Transformer model named MutaPT. Furthermore, we propose a cross-distribution self-supervised (CDS) pre-training strategy that addresses data distribution shifts across the datasets and thereby enhances the model's generalization ability. An EEG dataset of DOC patients is used to validate the effectiveness of our methods for the task of classifying DOC states. Experimental results show that MutaPT outperforms several existing DL models for EEG classification.
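The abstract describes pooling multiple open-source EEG datasets and pre-training under distribution shift, but gives no implementation details. The sketch below is purely illustrative and not the paper's code: it pools windows from two toy "datasets", standardizes each dataset with its own statistics to soften cross-dataset shift, and masks random time segments, the kind of masked-reconstruction objective commonly used for self-supervised Transformer pre-training. All function names, shapes, and parameters here are assumptions for illustration only.

```python
import random
import statistics

# Hypothetical sketch (NOT the paper's actual CDS method): pool EEG
# windows from several source datasets, z-score each dataset with its
# own statistics, then mask random time segments so a model could be
# pre-trained to reconstruct them from the surrounding signal.

random.seed(0)

def zscore_dataset(windows):
    """Standardize all windows of one dataset using that dataset's own stats."""
    flat = [v for w in windows for v in w]
    mu = statistics.fmean(flat)
    sd = statistics.pstdev(flat) or 1.0
    return [[(v - mu) / sd for v in w] for w in windows]

def mask_segment(window, seg_len=25):
    """Zero out one random time segment; return the masked window and mask indices."""
    start = random.randrange(0, len(window) - seg_len)
    masked = window[:start] + [0.0] * seg_len + window[start + seg_len:]
    return masked, set(range(start, start + seg_len))

# two toy single-channel "datasets" with deliberately different offset/scale,
# standing in for open-source EEG corpora with distribution shift
ds_a = [[random.gauss(0.0, 1.0) for _ in range(200)] for _ in range(8)]
ds_b = [[random.gauss(5.0, 3.0) for _ in range(200)] for _ in range(8)]

# per-dataset normalization before pooling, so neither corpus dominates
pool = zscore_dataset(ds_a) + zscore_dataset(ds_b)

# a Transformer would be trained to predict the original values at the
# masked indices from the unmasked context
masked_pool = [mask_segment(w) for w in pool]
```

Normalizing each dataset separately (rather than with pooled statistics) is one simple way to reduce the covariate shift between corpora before self-supervised training; the paper's actual CDS strategy may differ.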
Pages: 11