Masked self-supervised pre-training model for EEG-based emotion recognition

Cited by: 0
|
Authors
Hu, Xinrong [1 ,2 ]
Chen, Yu [1 ,2 ]
Yan, Jinlin [1 ,2 ]
Wu, Yuan [1 ,2 ]
Ding, Lei [1 ,2 ]
Xu, Jin [1 ,2 ]
Cheng, Jun [1 ,2 ]
Affiliations
[1] Engineering Research Center of Hubei Province for Clothing Information, Wuhan, People's Republic of China
[2] Wuhan Textile University, School of Computer Science & Artificial Intelligence, Wuhan, People's Republic of China
Keywords
affective computing; brain-computer interface; EEG; emotion recognition; pre-trained models; POSTURE
DOI
10.1111/coin.12659
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Electroencephalography (EEG) is widely used as a tool for objectively recording the brain's electrical activity during emotional expression. Current methods rely heavily on datasets, and their performance is limited by dataset size and annotation accuracy. At the same time, unsupervised and contrastive learning methods depend largely on the feature distribution within a dataset, so they must be trained on each specific dataset to achieve optimal results. However, EEG acquisition is influenced by equipment, recording settings, individual subjects, and experimental procedures, which introduces substantial variability; model effectiveness therefore depends heavily on data collection conducted under stringent, controlled conditions. To address these challenges, we introduce a novel approach: a self-supervised pre-training model that processes data across different datasets and operates effectively on multiple datasets. The model pre-trains without access to emotion category labels, so it can extract universally useful features without a predefined downstream task. To tackle the issue of semantic confusion, we employ a masked prediction model that guides the network to generate richer semantic information by learning bidirectional feature combinations over the sequence. To address large differences in data distribution, we introduce an adaptive clustering technique that generates pseudo-labels across multiple categories. During self-supervised training, the model strengthens the expression of hidden features in its intermediate layers, enabling it to learn hidden features common to different datasets. By constructing a hybrid dataset and conducting extensive experiments, we demonstrate two key findings: (1) our model performs best on multiple evaluation metrics; and (2) it can effectively integrate critical features from different datasets, significantly improving emotion-recognition accuracy.
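A minimal sketch of the two ideas the abstract describes: masked bidirectional prediction as the label-free pre-training objective, and clustering of intermediate features into pseudo-labels. It assumes a transformer encoder in PyTorch and k-means from scikit-learn; all module names, hyper-parameters, and tensor shapes below are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only -- not the paper's actual code.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class MaskedEEGPretrainer(nn.Module):
    def __init__(self, n_channels=62, d_model=128, n_heads=8, n_layers=4, max_len=512):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)                 # per-time-step channel embedding
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positional encoding
        self.mask_token = nn.Parameter(torch.zeros(d_model))        # learned stand-in for hidden steps
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)       # attends over both directions
        self.head = nn.Linear(d_model, n_channels)                  # reconstructs the raw channels

    def forward(self, x, mask_ratio=0.5):
        # x: (batch, time, channels) EEG segment; no emotion labels are needed.
        b, t, _ = x.shape
        h = self.embed(x) + self.pos[:, :t]
        mask = torch.rand(b, t, device=x.device) < mask_ratio       # choose time steps to hide
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        z = self.encoder(h)                                         # intermediate hidden features
        recon = self.head(z)
        loss = ((recon - x) ** 2)[mask].mean()                      # loss only on masked steps
        return loss, z

model = MaskedEEGPretrainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(8, 200, 62)        # dummy batch: 8 segments, 200 samples, 62 channels
loss, z = model(x)
opt.zero_grad(); loss.backward(); opt.step()

# Stand-in for the adaptive-clustering step: derive pseudo-labels from pooled
# hidden features, e.g. to regularise intermediate representations across datasets.
pseudo = KMeans(n_clusters=4, n_init=10).fit_predict(z.mean(dim=1).detach().numpy())
```

Because the reconstruction loss is computed only on masked time steps, the encoder must exploit unmasked context from both directions of the sequence, which is the mechanism behind the richer bidirectional semantics the abstract refers to.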
Pages: 26
Related Papers
50 records in total
  • [31] Dense Contrastive Learning for Self-Supervised Visual Pre-Training
    Wang, Xinlong
    Zhang, Rufeng
    Shen, Chunhua
    Kong, Tao
    Li, Lei
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 3023-3032
  • [32] Self-supervised VICReg pre-training for Brugada ECG detection
    Robert Ronan
    Constantine Tarabanis
    Larry Chinitz
    Lior Jankelson
    Scientific Reports, 15 (1)
  • [33] A Self-Supervised Pre-Training Method for Chinese Spelling Correction
    Su J.
    Yu S.
    Hong X.
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2023, 51 (09): 90-98
  • [34] Self-Supervised EEG Representation Learning for Robust Emotion Recognition
    Liu, Huan
    Zhang, Yuzhe
    Chen, Xuxu
    Zhang, Dalin
    Li, Rui
    Qin, Tao
    ACM Transactions on Sensor Networks, 2024, 20 (05)
  • [35] Self-supervised pre-training on industrial time-series
    Biggio, Luca
    Kastanis, Iason
    2021 8th Swiss Conference on Data Science (SDS), 2021: 56-57
  • [36] Self-supervised Pre-training for Semantic Segmentation in an Indoor Scene
    Shrestha, Sulabh
    Li, Yimeng
    Kosecka, Jana
    2024 IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW 2024), 2024: 625-635
  • [37] Comparing Self-Supervised Pre-Training and Semi-Supervised Training for Speech Recognition in Languages with Weak Language Models
    Lam-Yee-Mui, Lea-Marie
    Yang, Lucas Ondel
    Klejch, Ondrej
    INTERSPEECH 2023, 2023: 87-91
  • [38] SPAKT: A Self-Supervised Pre-TrAining Method for Knowledge Tracing
    Ma, Yuling
    Han, Peng
    Qiao, Huiyan
    Cui, Chaoran
    Yin, Yilong
    Yu, Dehu
    IEEE Access, 2022, 10: 72145-72154
  • [39] CDS: Cross-Domain Self-supervised Pre-training
    Kim, Donghyun
    Saito, Kuniaki
    Oh, Tae-Hyun
    Plummer, Bryan A.
    Sclaroff, Stan
    Saenko, Kate
    2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 9103-9112
  • [40] A Self-Supervised Pre-Training Framework for Vision-Based Seizure Classification
    Hou, Jen-Cheng
    McGonigal, Aileen
    Bartolomei, Fabrice
    Thonnat, Monique
    2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 1151-1155