Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals

Cited by: 10
Authors
Luo, Junhai [1 ]
Tian, Yuxin [1 ]
Yu, Hang [1 ]
Chen, Yu [1 ]
Wu, Man [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 610056, Peoples R China
Keywords
DEAP dataset; electroencephalogram (EEG); emotion recognition; multi-source fusion; stacked denoising autoencoder; unsupervised representation learning; FEATURE-EXTRACTION; TIME-SERIES; EEG; REPRESENTATIONS;
DOI
10.3390/e24050577
Chinese Library Classification: O4 [Physics]
Discipline code: 0702
Abstract
In recent decades, emotion recognition has received considerable attention. As enthusiasm has shifted toward physiological patterns, a wide range of elaborate physiological features has emerged and been combined with various classification models to detect a person's emotional state. To circumvent the labor of manually designing features, we propose to acquire affective and robust representations automatically through a Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training, followed by supervised fine-tuning. In this paper, we compare the performance of different features and models through three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affect model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed for the deep-learning methods. The fused data perform better than either modality alone. To take advantage of deep-learning algorithms, we augment the original data and feed them directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method's practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state of the art for hand-engineered features.
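The pipeline outlined in the abstract (greedy unsupervised pre-training of a stacked denoising autoencoder, whose learned codes then feed a supervised fine-tuning stage) can be sketched roughly as follows. The layer sizes, sigmoid units, tied weights, masking noise, and plain-SGD training here are illustrative assumptions for a minimal sketch, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_da(X, n_hidden, noise=0.3, lr=0.1, epochs=50):
    """Train one denoising-autoencoder layer with tied weights;
    return the learned encoder parameters (W, b)."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(n_in)       # decoder bias (decoder weight is W.T)
    for _ in range(epochs):
        # Masking noise: randomly zero a fraction of input features.
        X_noisy = X * (rng.random(X.shape) > noise)
        H = sigmoid(X_noisy @ W + b)      # encode the corrupted input
        R = sigmoid(H @ W.T + c)          # reconstruct the clean input
        dR = (R - X) * R * (1.0 - R)      # grad of 0.5 * ||R - X||^2
        dH = (dR @ W) * H * (1.0 - H)     # backprop through the encoder
        W -= lr * (X_noisy.T @ dH + dR.T @ H) / len(X)  # tied-weight grad
        b -= lr * dH.sum(axis=0) / len(X)
        c -= lr * dR.sum(axis=0) / len(X)
    return W, b

def pretrain_sda(X, layer_sizes):
    """Greedy layer-wise pre-training: each layer learns to denoise
    the clean encoding produced by the layer below it."""
    weights, rep = [], X
    for n_hidden in layer_sizes:
        W, b = train_da(rep, n_hidden)
        weights.append((W, b))
        rep = sigmoid(rep @ W + b)   # clean encoding feeds the next layer
    return weights, rep

# Toy stand-in for fused EEG + peripheral features, scaled to [0, 1].
X = rng.random((40, 16))
weights, codes = pretrain_sda(X, [12, 8])
print(codes.shape)   # low-dimensional codes for supervised fine-tuning
```

Supervised fine-tuning would then attach a classifier head to `codes` and backpropagate label gradients through the stacked encoders, refining the pre-trained weights.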
Pages: 29
Related papers
(26 records in total)
  • [1] Semi-supervised Multi-modal Emotion Recognition with Cross-Modal Distribution Matching
    Liang, Jingjun
    Li, Ruichen
    Jin, Qin
    [J]. MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 2852 - 2861
  • [2] MGFKD: A semi-supervised multi-source domain adaptation algorithm for cross-subject EEG emotion recognition
    Zhang, Rui
    Guo, Huifeng
    Xu, Zongxin
    Hu, Yuxia
    Chen, Mingming
    Zhang, Lipeng
    [J]. BRAIN RESEARCH BULLETIN, 2024, 208
  • [3] SMIN: Semi-Supervised Multi-Modal Interaction Network for Conversational Emotion Recognition
    Lian, Zheng
    Liu, Bin
    Tao, Jianhua
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (03) : 2415 - 2429
  • [4] Emotion recognition based on multi-modal physiological signals and transfer learning
    Fu, Zhongzheng
    Zhang, Boning
    He, Xinrun
    Li, Yixuan
    Wang, Haoyuan
    Huang, Jian
    [J]. FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [5] CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition
    Jimenez-Guarneros, Magdiel
    Fuentes-Pineda, Gibran
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (03) : 1502 - 1513
  • [6] Markov random field based fusion for supervised and semi-supervised multi-modal image classification
    Xie, Liang
    Pan, Peng
    Lu, Yansheng
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (02) : 613 - 634
  • [8] Human Activity Recognition Using Semi-supervised Multi-modal DEC for Instagram Data
    Kim, Dongmin
    Han, Sumin
    Son, Heesuk
    Lee, Dongman
    [J]. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2020, PT I, 2020, 12084 : 869 - 880
  • [9] Multi-modal emotion recognition using recurrence plots and transfer learning on physiological signals
    Elalamy, Rayan
    Fanourakis, Marios
    Chanel, Guillaume
    [J]. 2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2021
  • [10] Entropy-Assisted Multi-Modal Emotion Recognition Framework Based on Physiological Signals
    Tung, Kuan
    Liu, Po-Kang
    Chuang, Yu-Chuan
    Wang, Sheng-Hui
    Wu, An-Yeu
    [J]. 2018 IEEE-EMBS CONFERENCE ON BIOMEDICAL ENGINEERING AND SCIENCES (IECBES), 2018, : 22 - 26