Masked self-supervised pre-training model for EEG-based emotion recognition

Cited: 0
Authors
Hu, Xinrong [1 ,2 ]
Chen, Yu [1 ,2 ]
Yan, Jinlin [1 ,2 ]
Wu, Yuan [1 ,2 ]
Ding, Lei [1 ,2 ]
Xu, Jin [1 ,2 ]
Cheng, Jun [1 ,2 ]
Affiliations
[1] Engn Res Ctr Hubei Prov Clothing Informat, Wuhan, Peoples R China
[2] Wuhan Text Univ, Sch Comp Sci & Artificial Intelligence, Wuhan, Peoples R China
Keywords
affective computing; brain-computer interface; EEG; emotion recognition; pre-trained models; POSTURE;
DOI
10.1111/coin.12659
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Electroencephalography (EEG) is widely used as a tool for objectively recording the brain's electrical activity during emotional expression. Current methods rely heavily on labeled datasets, so their performance is limited by dataset size and annotation accuracy. Unsupervised and contrastive learning methods, in turn, depend strongly on the feature distribution of a particular dataset and must be trained on that dataset to perform well. EEG acquisition, however, varies considerably with equipment, recording settings, subjects, and experimental procedures, so model effectiveness hinges on data collected under stringent, controlled conditions. To address these challenges, we introduce a self-supervised pre-training model that operates effectively across multiple datasets. The model pre-trains without access to emotion category labels, extracting generally useful features without committing to a predefined downstream task. To reduce confusion in the learned semantic representations, we employ a masked prediction objective that drives the model to produce richer semantics by learning bidirectional feature combinations within each sequence. To handle large differences in data distribution, we introduce an adaptive clustering technique that generates pseudo-labels over multiple categories; these labels strengthen the hidden representations of intermediate layers during self-supervised training and allow the model to learn hidden features shared across datasets. By constructing a hybrid dataset and conducting extensive experiments, we demonstrate two key findings: (1) our model achieves the best results on multiple evaluation metrics, and (2) it effectively integrates critical features from different datasets, significantly improving emotion recognition accuracy.
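The abstract describes the masked prediction objective only at a high level, so the following is a minimal PyTorch sketch of the general idea rather than the paper's architecture: time steps of an EEG window are masked, a bidirectional (self-attention) encoder fills them in from context on both sides, and the loss is computed only on the masked positions. All class names, shapes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedEEGPretrainer(nn.Module):
    """Hypothetical masked-prediction pre-trainer for EEG windows.

    Input shape (batch, time, channels); every name and size is an
    illustrative assumption, not the paper's reported configuration.
    """

    def __init__(self, n_channels=62, d_model=128, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        # Self-attention is bidirectional: each masked step attends to
        # context on both sides, matching the "bidirectional feature
        # combinations" the abstract mentions.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_channels)

    def forward(self, x, mask_ratio=0.5):
        tokens = self.embed(x)                          # (B, T, D)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        tokens[mask] = self.mask_token                  # replace masked steps
        hidden = self.encoder(tokens)
        recon = self.head(hidden)                       # predict the raw signal
        loss = ((recon - x) ** 2)[mask].mean()          # loss on masked steps only
        return loss, hidden

# Usage: pre-train on unlabeled windows, e.g. 200-sample, 62-channel segments.
model = MaskedEEGPretrainer()
loss, _ = model(torch.randn(8, 200, 62))
loss.backward()
```

Because the loss is restricted to masked positions, the encoder cannot solve the task by copying its input; it must model dependencies across the sequence, which is what lets the learned features transfer without emotion labels.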
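The adaptive clustering step is likewise described only in outline. One common way to realize pseudo-labeling over intermediate features is to cluster pooled hidden states and use the cluster assignments as auxiliary classification targets; the sketch below assumes k-means via scikit-learn, and the cluster count, function name, and pooling choice are all assumptions rather than the paper's method.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def assign_pseudo_labels(pooled_features, n_clusters=16):
    """Cluster pooled hidden states into pseudo-labels (assumed k-means).

    pooled_features: (n_samples, d) tensor, e.g. encoder output averaged
    over time. n_clusters is a free hyperparameter, not the paper's value.
    """
    feats = F.normalize(pooled_features, dim=1).detach().cpu().numpy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return torch.as_tensor(labels, dtype=torch.long)

# The pseudo-labels can then supervise an auxiliary linear head on the
# intermediate features, encouraging hidden representations that stay
# consistent across datasets:
#   logits = aux_head(pooled_features)   # aux_head: nn.Linear(d, n_clusters)
#   aux_loss = F.cross_entropy(logits, pseudo_labels)
```

Re-clustering periodically during pre-training (rather than once) is the usual way such pseudo-labels adapt as the representation improves, which is consistent with the "adaptive" framing in the abstract.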
Pages: 26