Multi-Modal Domain Adaptation Variational Autoencoder for EEG-Based Emotion Recognition

Cited: 0
Authors
Yixin Wang [1,2,3]
Shuang Qiu [1,2]
Dan Li [1,2,4]
Changde Du [1,2]
Bao-Liang Lu [5]
Huiguang He [1,2,6]
Affiliations
[1] Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
[2] University of Chinese Academy of Sciences
[3] Beijing Institute of Control and Electronic Technology
[4] School of Mathematics and Information Sciences, Yantai University
[5] Department of Computer Science and Engineering, Shanghai Jiao Tong University
[6] Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences
Funding
National Natural Science Foundation of China;
Keywords
DOI
N/A
CLC Number
TN911.7 [Signal Processing];
Subject Classification Codes
0711; 080401; 080402;
Abstract
Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain-computer interface (BCI) in practice. We attempt to use multi-modal data from past sessions to realize emotion recognition with only a small number of calibration samples. To solve this problem, we propose a multi-modal domain adaptation variational autoencoder (MMDA-VAE) method, which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, our method reduces the distribution difference of each domain in the shared latent representation layer and realizes the transfer of knowledge. Extensive experiments are conducted on two public datasets, SEED and SEED-IV, and the results show the superiority of the proposed method. Our work effectively improves the performance of emotion recognition with a small amount of labelled multi-modal data.
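The abstract does not spell out how the MVAE fuses modality-specific posteriors into the shared latent space. A common choice in multi-modal VAEs (and one possible reading of "project the data of multiple modalities into a common space") is a precision-weighted product of experts (PoE) over per-modality Gaussian posteriors; the sketch below is illustrative only, and the function name and toy EEG/eye-movement inputs are assumptions, not the paper's actual implementation.

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Fuse per-modality Gaussian posteriors N(mu_i, var_i) into a single
    Gaussian via a precision-weighted product of experts (PoE).

    mus, logvars: sequences of arrays, each of shape (latent_dim,).
    Returns the fused (mu, logvar)."""
    precisions = np.exp(-np.asarray(logvars))            # 1 / var_i per expert
    var = 1.0 / precisions.sum(axis=0)                   # fused variance
    mu = var * (np.asarray(mus) * precisions).sum(axis=0)
    return mu, np.log(var)

# Toy 4-d latents for two hypothetical modalities (e.g. EEG and eye movements)
mu_eeg, lv_eeg = np.array([1.0, 0.0, 2.0, -1.0]), np.zeros(4)
mu_eye, lv_eye = np.array([3.0, 0.0, 0.0, 1.0]), np.zeros(4)
mu, logvar = product_of_experts([mu_eeg, mu_eye], [lv_eeg, lv_eye])
# With equal unit variances, the fused mean is the element-wise average
# of the modality means, and the fused variance shrinks to 0.5.
```

Because each expert contributes in proportion to its precision, a noisy modality (large variance) is automatically down-weighted in the shared representation, which is one reason PoE fusion is popular for multi-modal latent-variable models.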
Pages: 1612-1626
Page count: 15
Related Papers
50 records in total
  • [1] Multi-Modal Domain Adaptation Variational Auto-encoder for EEG-Based Emotion Recognition
    Wang, Yixin
    Qiu, Shuang
    Li, Dan
    Du, Changde
    Lu, Bao-Liang
    He, Huiguang
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2022, 9 (09) : 1612 - 1626
  • [2] WGAN Domain Adaptation for EEG-Based Emotion Recognition
    Luo, Yun
    Zhang, Si-Yang
    Zheng, Wei-Long
    Lu, Bao-Liang
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT V, 2018, 11305 : 275 - 286
  • [3] EEG-based Emotion Recognition Using Domain Adaptation Network
    Jin, Yi-Ming
    Luo, Yu-Dong
    Zheng, Wei-Long
    Lu, Bao-Liang
    PROCEEDINGS OF THE 2017 INTERNATIONAL CONFERENCE ON ORANGE TECHNOLOGIES (ICOT), 2017, : 222 - 225
  • [4] EEG-Based Multi-Modal Emotion Recognition using Bag of Deep Features: An Optimal Feature Selection Approach
    Asghar, Muhammad Adeel
    Khan, Muhammad Jamil
    Fawad
    Amin, Yasar
    Rizwan, Muhammad
    Rahman, MuhibUr
    Badnava, Salman
    Mirjavadi, Seyed Sajad
    SENSORS, 2019, 19 (23)
  • [5] TMLP+SRDANN: A domain adaptation method for EEG-based emotion recognition
    Li, Wei
    Hou, Bowen
    Li, Xiaoyu
    Qiu, Ziming
    Peng, Bo
    Tian, Ye
    MEASUREMENT, 2023, 207
  • [6] An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis
    Li, Xiaotian
    Zhang, Xiang
    Yang, Huiyuan
    Duan, Wenna
    Dai, Weiying
    Yin, Lijun
    2020 15TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2020), 2020, : 336 - 343
  • [7] A novel transformer autoencoder for multi-modal emotion recognition with incomplete data
    Cheng, Cheng
    Liu, Wenzhe
    Fan, Zhaoxin
    Feng, Lin
    Jia, Ziyu
    NEURAL NETWORKS, 2024, 172
  • [8] Human Emotion Estimation Using Multi-Modal Variational AutoEncoder with Time Changes
    Moroto, Yuya
    Maeda, Keisuke
    Ogawa, Takahiro
    Haseyama, Miki
    2021 IEEE 3RD GLOBAL CONFERENCE ON LIFE SCIENCES AND TECHNOLOGIES (IEEE LIFETECH 2021), 2021, : 67 - 68
  • [9] Multi-Modal Emotion Recognition Based on Deep Learning of EEG and Audio Signals
    Li, Zhongjie
    Zhang, Gaoyan
    Dang, Jianwu
    Wang, Longbiao
    Wei, Jianguo
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,