DOMAIN-INVARIANT REPRESENTATION LEARNING FROM EEG WITH PRIVATE ENCODERS

Cited: 13
Authors
Bethge, David [1 ,2 ]
Hallgarten, Philipp [1 ,3 ]
Grosse-Puppendahl, Tobias [1 ]
Kari, Mohamed [1 ]
Mikut, Ralf [3 ]
Schmidt, Albrecht [2 ]
Oezdenizci, Ozan [4 ,5 ]
Affiliations
[1] Dr Ing Hc F Porsche AG, Stuttgart, Germany
[2] Ludwig Maximilians Univ Munchen, Munich, Germany
[3] Karlsruhe Inst Technol, Karlsruhe, Germany
[4] Graz Univ Technol, Inst Theoret Comp Sci, Graz, Austria
[5] Graz Univ Technol, Silicon Austria Labs, SAL Dependable Embedded Syst Lab, Graz, Austria
Keywords
CROSS-SUBJECT; ADAPTATION
DOI
10.1109/ICASSP43922.2022.9747398
CLC number
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
Deep learning based electroencephalography (EEG) signal processing methods are known to suffer from poor test-time generalization due to changes in the data distribution. This becomes a more challenging problem when privacy-preserving representation learning is of interest, such as in clinical settings. To that end, we propose a multi-source learning architecture in which we extract domain-invariant representations from dataset-specific private encoders. Our model uses a maximum-mean-discrepancy (MMD) based domain alignment approach to impose domain invariance on the encoded representations, and it outperforms state-of-the-art approaches in EEG-based emotion classification. Furthermore, the representations learned in our pipeline preserve domain privacy: dataset-specific private encoding alleviates the need for conventional, centralized EEG-based deep neural network training with shared parameters.
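For intuition, the mechanism the abstract describes — per-dataset private encoders feeding a shared latent space that is pulled together by an MMD penalty, with a single shared classifier on top — can be sketched in a few lines of PyTorch. This is a minimal sketch, not the authors' implementation: the layer sizes, the RBF kernel bandwidth, the biased MMD estimator, the 310-dimensional toy features, and names such as `rbf_mmd2` and `MultiSourcePrivateEncoders` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between two batches under an RBF kernel."""
    def gram(a, b):
        # Pairwise squared Euclidean distances -> Gaussian kernel values.
        return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

class MultiSourcePrivateEncoders(nn.Module):
    """One private encoder per source dataset; a single shared classifier."""
    def __init__(self, n_domains: int, in_dim: int, z_dim: int, n_classes: int):
        super().__init__()
        # Per-domain encoders: parameters are NOT shared across datasets.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
            for _ in range(n_domains)
        )
        self.classifier = nn.Linear(z_dim, n_classes)

    def forward(self, x: torch.Tensor, domain: int):
        z = self.encoders[domain](x)   # domain-private encoding
        return z, self.classifier(z)   # shared decision layer

# Toy training step on two hypothetical source datasets (sizes are assumptions).
model = MultiSourcePrivateEncoders(n_domains=2, in_dim=310, z_dim=64, n_classes=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x0, y0 = torch.randn(32, 310), torch.randint(0, 3, (32,))
x1, y1 = torch.randn(32, 310), torch.randint(0, 3, (32,))

z0, logits0 = model(x0, domain=0)
z1, logits1 = model(x1, domain=1)
task_loss = F.cross_entropy(logits0, y0) + F.cross_entropy(logits1, y1)
align_loss = rbf_mmd2(z0, z1)          # aligns the two latent distributions
loss = task_loss + 1.0 * align_loss    # trade-off weight is an assumption
loss.backward()
opt.step()
```

Because each dataset owns its encoder, raw EEG statistics never pass through encoder weights shared with other datasets; only the MMD-aligned latent space and the classifier are common, which is the privacy argument the abstract makes.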
Pages: 1236-1240
Number of pages: 5
Related papers
50 records in total; entries [31]-[40] shown below
  • [31] DALSCLIP: Domain aggregation via learning stronger domain-invariant features for CLIP. Zhang, Yuewen; Wang, Jiuhang; Tang, Hongying; Qin, Ronghua. IMAGE AND VISION COMPUTING, 2025, 154.
  • [32] DOMAIN-INVARIANT FEATURE LEARNING FOR CROSS CORPUS SPEECH EMOTION RECOGNITION. Gao, Yuan; Okada, Shogo; Wang, Longbiao; Liu, Jiaxing; Dang, Jianwu. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 6427-6431.
  • [33] DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues. Pan, Kun; Yin, Yifang; Wei, Yao; Lin, Feng; Ba, Zhongjie; Liu, Zhenguang; Wang, Zhibo; Cavallaro, Lorenzo; Ren, Kui. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 8035-8046.
  • [34] MVAD-Net: Learning View-Aware and Domain-Invariant Representation for Baggage Re-identification. Zhao, Qing; Ma, Huimin; Lu, Ruiqi; Chen, Yanxian; Li, Dong. PATTERN RECOGNITION AND COMPUTER VISION, PT I, 2021, 13019: 142-153.
  • [35] Domain-invariant feature learning with label information integration for cross-domain classification. Jiang, L.; Wu, J.; Zhao, S.; Li, J. NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (21): 13107-13126.
  • [36] Displacement reconstruction with frequency domain-invariant representation for enhancing track damage identification. Wang, Shaohua; Hu, Meng; Tang, Lihua; Kim, Minjung; Aw, Kean C. ENGINEERING STRUCTURES, 2025, 328.
  • [37] Answering Spatial Commonsense Questions by Learning Domain-Invariant Generalization Knowledge. Lin, Miaopei; Yu, Jianxing; Wang, Shiqi; Lai, Hanjiang; Liu, Wei; Yin, Jian. WEB AND BIG DATA, PT II, APWEB-WAIM 2023, 2024, 14332: 270-285.
  • [38] Feature-aware domain invariant representation learning for EEG motor imagery decoding. Li, Jianxiu; Shi, Jiaxin; Yu, Pengda; Yan, Xiaokai; Lin, Yuting. SCIENTIFIC REPORTS, 2025, 15 (1).
  • [39] Enhancing Domain-Invariant Parts for Generalized Zero-Shot Learning. Zhang, Yang; Feng, Songhe. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 6283-6291.
  • [40] Learning a Domain-Invariant Embedding for Unsupervised Person Re-identification. Pu, Nan; Georgiou, T. K.; Bakker, Erwin M.; Lew, Michael S. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019.