Masked self-supervised pre-training model for EEG-based emotion recognition

Cited: 0
Authors
Hu, Xinrong [1 ,2 ]
Chen, Yu [1 ,2 ]
Yan, Jinlin [1 ,2 ]
Wu, Yuan [1 ,2 ]
Ding, Lei [1 ,2 ]
Xu, Jin [1 ,2 ]
Cheng, Jun [1 ,2 ]
Affiliations
[1] Engn Res Ctr Hubei Prov Clothing Informat, Wuhan, Peoples R China
[2] Wuhan Text Univ, Sch Comp Sci & Artificial Intelligence, Wuhan, Peoples R China
Keywords
affective computing; brain-computer interface; EEG; emotion recognition; pre-trained models; POSTURE;
DOI
10.1111/coin.12659
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Electroencephalogram (EEG), as a tool that objectively records the brain's electrical activity during emotional expression, has been extensively utilized. Current methods rely heavily on datasets, and their performance is limited by dataset size and annotation accuracy. At the same time, unsupervised and contrastive learning methods depend largely on the feature distribution within a dataset, and therefore require dataset-specific training for optimal results. However, EEG acquisition is influenced by factors such as equipment, settings, subjects, and experimental procedures, resulting in significant variability; consequently, model effectiveness hinges on data collection conducted under stringent, controlled conditions. To address these challenges, we introduce a novel approach: a self-supervised pre-training model that operates effectively across multiple datasets. The model pre-trains without access to emotion category labels, extracting universally useful features without a predefined downstream task. To tackle semantic expression confusion, we employ a masked prediction objective that guides the model to produce richer semantic information by learning bidirectional feature combinations within a sequence. To handle large differences in data distribution, we introduce an adaptive clustering technique that generates pseudo-labels across multiple categories. During self-supervised training, the model strengthens the expression of hidden features in its intermediate layers, enabling it to learn hidden features common to different datasets.
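The masked prediction objective described above can be sketched as follows. This is a minimal illustration of the masking mechanics, not the paper's implementation: the function names, the mask ratio, and the toy sequence shape (200 timesteps by 62 channels, a common EEG montage size) are all illustrative assumptions.

```python
import numpy as np

def mask_sequence(x, mask_ratio=0.3, mask_value=0.0, rng=None):
    """Randomly mask a fraction of timesteps in an EEG feature sequence.

    x: array of shape (T, C) -- T timesteps, C channel features.
    Returns the masked copy and a boolean mask (True = masked position).
    The pre-training objective is to reconstruct x at the masked
    positions from the surrounding (bidirectional) context.
    """
    rng = rng or np.random.default_rng(0)
    T = x.shape[0]
    n_mask = max(1, int(T * mask_ratio))
    idx = rng.choice(T, size=n_mask, replace=False)
    mask = np.zeros(T, dtype=bool)
    mask[idx] = True
    x_masked = x.copy()
    x_masked[mask] = mask_value
    return x_masked, mask

def masked_reconstruction_loss(pred, target, mask):
    """MSE computed only at the masked positions, as in masked prediction."""
    return float(np.mean((pred[mask] - target[mask]) ** 2))

# Toy example: a bidirectional encoder would be trained to predict the
# original values at the masked timesteps from the unmasked context.
x = np.random.default_rng(1).standard_normal((200, 62))
x_masked, mask = mask_sequence(x, mask_ratio=0.3)
assert masked_reconstruction_loss(x, x, mask) == 0.0  # perfect reconstruction
```

In a full model, the masked sequence would be fed through a bidirectional encoder (e.g. a Transformer) whose outputs at masked positions are scored by this loss; only the masking and loss bookkeeping are shown here.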
By constructing a hybrid dataset and conducting extensive experiments, this study demonstrated two key findings: (1) our model performs best on multiple evaluation metrics; (2) the model effectively integrates critical features from different datasets, significantly improving emotion recognition accuracy.
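The adaptive clustering step described in the abstract, which generates pseudo-labels from hidden features pooled across datasets, can be sketched with a small k-means routine. The function name, the farthest-point initialization, and the toy blob data are illustrative assumptions; the paper's actual adaptive clustering procedure may differ.

```python
import numpy as np

def kmeans_pseudo_labels(features, k=4, n_iter=20):
    """Assign pseudo-labels by running a small k-means over hidden features.

    features: (N, D) array of intermediate-layer representations pooled
    from several EEG datasets. Each sample's cluster index serves as its
    pseudo-label; no ground-truth emotion labels are required.
    """
    # Farthest-point initialization: deterministic and well spread out.
    centers = [features[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers],
                   axis=0)
        centers.append(features[d.argmax()])
    centers = np.array(centers, dtype=float)

    for _ in range(n_iter):
        # Assign each sample to its nearest center.
        dist = np.linalg.norm(features[:, None, :] - centers[None, :, :],
                              axis=-1)
        labels = dist.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster goes empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy check: two well-separated blobs receive two consistent pseudo-labels.
rng = np.random.default_rng(2)
a = rng.standard_normal((50, 8)) + 5.0
b = rng.standard_normal((50, 8)) - 5.0
labels = kmeans_pseudo_labels(np.vstack([a, b]), k=2)
assert len(set(labels[:50].tolist())) == 1
assert len(set(labels[50:].tolist())) == 1
```

During self-supervised training, such pseudo-labels would supply an auxiliary classification signal on the intermediate-layer features, encouraging representations that are consistent across datasets.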
Pages: 26
Related Papers
50 records
  • [41] DiT: Self-supervised Pre-training for Document Image Transformer
    Li, Junlong
    Xu, Yiheng
    Lv, Tengchao
    Cui, Lei
    Zhang, Cha
    Wei, Furu
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 3530 - 3539
  • [42] MEASURING THE IMPACT OF DOMAIN FACTORS IN SELF-SUPERVISED PRE-TRAINING
    Sanabria, Ramon
    Hsu, Wei-Ning
    Baevski, Alexei
    Auli, Michael
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [43] Correlational Image Modeling for Self-Supervised Visual Pre-Training
    Li, Wei
    Xie, Jiahao
    Loy, Chen Change
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 15105 - 15115
  • [44] Contrastive Self-Supervised Pre-Training for Video Quality Assessment
    Chen, Pengfei
    Li, Leida
    Wu, Jinjian
    Dong, Weisheng
    Shi, Guangming
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 458 - 471
  • [45] Self-Supervised Global Spatio-Temporal Interaction Pre-Training for Group Activity Recognition
    Du, Zexing
    Wang, Xue
    Wang, Qing
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 5076 - 5088
  • [46] GO-MAE: Self-supervised pre-training via masked autoencoder for OCT image classification of gynecology
    Wang, Haoran
    Guo, Xinyu
    Song, Kaiwen
    Sun, Mingyang
    Shao, Yanbin
    Xue, Songfeng
    Zhang, Hongwei
    Zhang, Tianyu
    NEURAL NETWORKS, 2025, 181
  • [47] Intra-modality masked image modeling: A self-supervised pre-training method for brain tumor segmentation
    Qi, Liangce
    Shi, Weili
    Miao, Yu
    Li, Yonghui
    Feng, Guanyuan
    Jiang, Zhengang
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 95
  • [48] A Multilingual Framework Based on Pre-training Model for Speech Emotion Recognition
    Zhang, Zhaohang
    Zhang, Xiaohui
    Guo, Min
    Zhang, Wei-Qiang
    Li, Ke
    Huang, Yukai
    2021 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2021, : 750 - 755
  • [49] Complementary Mask Self-Supervised Pre-training Based on Teacher-Student Network
    Ye, Shaoxiong
    Huang, Jing
    Zhu, Lifu
    2023 3RD ASIA-PACIFIC CONFERENCE ON COMMUNICATIONS TECHNOLOGY AND COMPUTER SCIENCE, ACCTCS, 2023, : 199 - 206
  • [50] AN ADAPTER BASED PRE-TRAINING FOR EFFICIENT AND SCALABLE SELF-SUPERVISED SPEECH REPRESENTATION LEARNING
    Kessler, Samuel
    Thomas, Bethan
    Karout, Salah
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3179 - 3183