Emotion recognition based on multi-modal physiological signals and transfer learning

Cited by: 14
Authors
Fu, Zhongzheng [1 ]
Zhang, Boning [1 ]
He, Xinrun [1 ]
Li, Yixuan [1 ]
Wang, Haoyuan [1 ]
Huang, Jian [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automation, Wuhan, Peoples R China
Keywords
emotion recognition; transfer learning; domain adaptation; physiological signal; multimodal fusion; individual difference;
DOI
10.3389/fnins.2022.1000716
Chinese Library Classification
Q189 [Neuroscience];
Subject Classification Code
071006;
Abstract
In emotion recognition based on physiological signals, collecting enough labeled data from a single subject for training is time-consuming and expensive. Individual differences in physiological signals, together with their inherent noise, significantly degrade emotion recognition accuracy. To overcome inter-subject differences in physiological signals, we propose a joint probability domain adaptation with bi-projection matrix algorithm (JPDA-BPM). The bi-projection matrix method fully accounts for the different feature distributions of the source and target domains; it projects each domain into the feature space more effectively and thereby improves the algorithm's performance. To overcome the effect of noise in physiological signals, we propose a substructure-based joint probability domain adaptation algorithm (SSJPDA), which avoids both the coarseness of domain-level matching and the noise sensitivity of sample-level matching. To verify the effectiveness of the proposed transfer learning algorithm for emotion recognition based on physiological signals, we evaluated it on the Database for Emotion Analysis using Physiological Signals (DEAP dataset). The experimental results show that the average recognition accuracy of the proposed SSJPDA-BPM algorithm on the multimodal fused physiological data from the DEAP dataset is 63.6% for valence and 64.4% for arousal. Compared with joint probability domain adaptation (JPDA), valence and arousal recognition accuracy increased by 17.6% and 13.4%, respectively.
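The joint probability domain adaptation described above aligns the source and target distributions class by class rather than only at the domain level. A common way to measure such a joint-distribution gap is a class-conditional maximum mean discrepancy (MMD) computed with target pseudo-labels. The sketch below illustrates that idea only; it is not the authors' JPDA-BPM or SSJPDA implementation, and the function names and RBF bandwidth `gamma` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    # Squared maximum mean discrepancy between two samples;
    # zero when the two empirical distributions coincide.
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

def joint_probability_discrepancy(Xs, ys, Xt, yt_pseudo, gamma=1.0):
    # Sum of per-class MMDs: matching each class of the source to the
    # (pseudo-labeled) same class of the target approximates aligning
    # the joint distributions P(x, y), not just the marginals P(x).
    total = 0.0
    for c in np.unique(ys):
        Xs_c = Xs[ys == c]
        Xt_c = Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            total += mmd2(Xs_c, Xt_c, gamma)
    return total
```

Minimizing such a discrepancy over a learned projection (in JPDA-BPM, one projection matrix per domain) is what drives the adapted feature space toward subject-invariant emotion features.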
Pages: 15