Enhancing cross-subject EEG emotion recognition through multi-source manifold metric transfer learning

Cited by: 0
Authors
Shi X. [1 ]
She Q. [1 ,4 ]
Fang F. [2 ]
Meng M. [1 ,4 ]
Tan T. [3 ]
Zhang Y. [2 ]
Affiliations
[1] School of Automation, Hangzhou Dianzi University, Zhejiang, Hangzhou
[2] Department of Biomedical Engineering, University of Miami, Coral Gables, FL
[3] Department of Rehabilitation Medicine, Zhejiang Provincial People's Hospital, People's Hospital of Hangzhou Medical College, Zhejiang, Hangzhou
[4] International Joint Research Laboratory for Autonomous Robotic Systems, Zhejiang, Hangzhou
Funding
National Natural Science Foundation of China
Keywords
Affective brain-computer interface (aBCI); Electroencephalogram (EEG); Emotion recognition; Metric transfer learning;
DOI
10.1016/j.compbiomed.2024.108445
Abstract
Transfer learning (TL) has demonstrated its efficacy in addressing cross-subject domain adaptation challenges in affective brain-computer interfaces (aBCI). However, previous TL methods usually rely on a fixed distance metric, such as the Euclidean distance, to quantify the distribution dissimilarity between two domains, overlooking the inherent links among similar samples and potentially leading to suboptimal feature mapping. In this study, we introduced a novel algorithm called multi-source manifold metric transfer learning (MSMMTL) to enhance the efficacy of conventional TL. Specifically, we first selected source domains based on the Mahalanobis distance to improve their quality, and then used a manifold feature mapping approach to map the source and target domains onto the Grassmann manifold to mitigate data drift between domains. In this newly established shared space, we optimized the Mahalanobis metric by maximizing the inter-class distances while minimizing the intra-class distances in the target domain. Recognizing that significant distribution discrepancies may persist across domains even on the manifold, we further imposed constraints on both domains under the Mahalanobis metric to keep the source and target distributions similar. This approach aims to reduce distributional disparities and enhance electroencephalogram (EEG) emotion recognition performance. In cross-subject experiments, the MSMMTL model achieves average classification accuracies of 88.83% and 65.04% on SEED and DEAP, respectively, underscoring the superiority of the proposed MSMMTL over other state-of-the-art methods. MSMMTL can effectively address the problem of individual differences in EEG-based affective computing. © 2024 Elsevier Ltd
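To make the pipeline described in the abstract more concrete, the following is a minimal Python sketch, not the authors' implementation: the function names (select_sources, grassmann_like_mapping, learn_mahalanobis_metric, predict_1nn) are illustrative, the Grassmann-manifold mapping is approximated by a plain PCA subspace projection rather than a full geodesic flow kernel, and the metric optimization is reduced to an LDA-style whitening of the within-class scatter. It assumes NumPy/SciPy and simple per-trial EEG feature vectors (e.g., differential entropy features).

```python
# Conceptual MSMMTL-style sketch (NOT the paper's code): source selection by
# Mahalanobis distance, a simplified manifold-like mapping, and a learned
# Mahalanobis metric for target classification.
import numpy as np
from scipy.spatial.distance import mahalanobis


def select_sources(sources, Xt, keep=3):
    """Rank source subjects by the Mahalanobis distance between their mean
    feature vector and the target distribution; keep the closest ones."""
    cov = np.cov(Xt, rowvar=False) + 1e-3 * np.eye(Xt.shape[1])
    VI = np.linalg.inv(cov)
    mu_t = Xt.mean(axis=0)
    dists = [mahalanobis(Xs.mean(axis=0), mu_t, VI) for Xs, _ in sources]
    order = np.argsort(dists)[:keep]
    return [sources[i] for i in order]


def pca_basis(X, dim):
    """Top principal directions of the pooled data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T


def grassmann_like_mapping(Xs, Xt, dim=10):
    """Project both domains onto a shared low-dimensional subspace
    (a crude stand-in for the Grassmann-manifold feature mapping)."""
    B = pca_basis(np.vstack([Xs, Xt]), dim)
    return Xs @ B, Xt @ B


def learn_mahalanobis_metric(X, y, reg=1e-3):
    """LDA-style metric: invert the within-class scatter so intra-class
    distances shrink relative to inter-class distances."""
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    return np.linalg.inv(Sw / len(X) + reg * np.eye(X.shape[1]))


def predict_1nn(Xs, ys, Xt, M):
    """1-nearest-neighbor prediction under the learned Mahalanobis metric."""
    preds = []
    for x in Xt:
        D = Xs - x
        d = np.einsum('ij,jk,ik->i', D, M, D)  # squared Mahalanobis distance
        preds.append(ys[np.argmin(d)])
    return np.array(preds)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 3 source subjects and 1 target subject, 32-D features.
    sources = [(rng.normal(size=(120, 32)), rng.integers(0, 3, 120)) for _ in range(3)]
    Xt, yt = rng.normal(size=(120, 32)), rng.integers(0, 3, 120)
    kept = select_sources(sources, Xt, keep=2)
    Xs = np.vstack([X for X, _ in kept])
    ys = np.hstack([y for _, y in kept])
    Xs_m, Xt_m = grassmann_like_mapping(Xs, Xt, dim=10)
    M = learn_mahalanobis_metric(Xs_m, ys)
    acc = (predict_1nn(Xs_m, ys, Xt_m, M) == yt).mean()
    print(f"toy cross-subject accuracy: {acc:.2f}")
```

In the paper the metric and the cross-domain distribution constraints are optimized jointly on the manifold; here those steps are decoupled purely for readability.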
Related papers
50 items total
  • [41] Comprehensive Multisource Learning Network for Cross-Subject Multimodal Emotion Recognition
    Chen, Chuangquan
    Li, Zhencheng
    Kou, Kit Ian
    Du, Jie
    Li, Chen
    Wang, Hongtao
    Vong, Chi-Man
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024: 1-16
  • [42] Multisource Associate Domain Adaptation for Cross-Subject and Cross-Session EEG Emotion Recognition
    She, Qingshan
    Zhang, Chenqi
    Fang, Feng
    Ma, Yuliang
    Zhang, Yingchun
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [43] Multi-method Fusion of Cross-Subject Emotion Recognition Based on High-Dimensional EEG Features
    Yang, Fu
    Zhao, Xingcong
    Jiang, Wenge
    Gao, Pengfei
    Liu, Guangyuan
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2019, 13
  • [44] EEG Feature Selection for Emotion Recognition Based on Cross-subject Recursive Feature Elimination
    Zhang, Wei
    Yin, Zhong
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020: 6256-6261
  • [45] Cross-Subject EEG Emotion Recognition With Self-Organized Graph Neural Network
    Li, Jingcong
    Li, Shuqi
    Pan, Jiahui
    Wang, Fei
    FRONTIERS IN NEUROSCIENCE, 2021, 15
  • [46] Cross-subject MEG Transfer Learning by Riemannian Manifold and Feature Subspace Alignment
    Liu, Shihao
    Yu, Tianyou
    Huang, Zebin
    Ye, Hengfeng
    2020 INTERNATIONAL SYMPOSIUM ON AUTONOMOUS SYSTEMS (ISAS), 2020: 12-16
  • [47] Easy Domain Adaptation for cross-subject multi-view emotion recognition
    Chen, Chuangquan
    Vong, Chi-Man
    Wang, Shitong
    Wang, Hongtao
    Pang, Miaoqi
    KNOWLEDGE-BASED SYSTEMS, 2022, 239
  • [48] A deep subdomain associate adaptation network for cross-session and cross-subject EEG emotion recognition
    Meng, Ming
    Hu, Jiahao
    Gao, Yunyuan
    Kong, Wanzeng
    Luo, Zhizeng
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2022, 78
  • [49] Emotion recognition while applying cosmetic cream using deep learning from EEG data; cross-subject analysis
    Kim, Jieun
    Hwang, Dong-Uk
    Son, Edwin J.
    Oh, Sang Hoon
    Kim, Whansun
    Kim, Youngkyung
    Kwon, Gusang
    PLOS ONE, 2022, 17 (11)
  • [50] Label-Based Alignment Multi-Source Domain Adaptation for Cross-Subject EEG Fatigue Mental State Evaluation
    Zhao, Yue
    Dai, Guojun
    Borghini, Gianluca
    Zhang, Jiaming
    Li, Xiufeng
    Zhang, Zhenyan
    Arico, Pietro
    Di Flumeri, Gianluca
    Babiloni, Fabio
    Zeng, Hong
    FRONTIERS IN HUMAN NEUROSCIENCE, 2021, 15