AFLEMP: Attention-based Federated Learning for Emotion recognition using Multi-modal Physiological data

Cited by: 1
Authors
Gahlan, Neha [1 ]
Sethia, Divyashikha [1 ]
Affiliations
[1] Delhi Technol Univ, Delhi 110042, India
Keywords
Federated Learning; Emotion recognition; Multi-modal physiological data; Attention mechanisms; Data heterogeneity; Dimensionality reduction; EEG; Signal
DOI
10.1016/j.bspc.2024.106353
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Automated emotion recognition systems utilizing physiological signals are essential for affective computing and intelligent interaction. Combining multiple physiological signals yields a more precise and effective assessment of a person's emotional state. Conventional machine learning approaches to automated emotion recognition require complete access to the physiological data for emotion state classification, compromising the privacy of sensitive data. Federated Learning (FL) resolves this issue by preserving the user's privacy and sensitive physiological data while recognizing emotions. However, existing FL methods handle data heterogeneity in physiological data poorly and do not account for communication efficiency or scalability. In response to these challenges, this paper proposes a novel framework called AFLEMP (Attention-based Federated Learning for Emotion recognition using Multi-modal Physiological data) that integrates an attention mechanism-based Transformer with an Artificial Neural Network (ANN) model. The framework reduces two types of data heterogeneity: (1) Variation Heterogeneity (VH) in multi-modal EEG, GSR, and ECG physiological signal data, using attention mechanisms, and (2) Imbalanced Data Heterogeneity (IDH) in the FL environment, using scaled weighted federated averaging. This paper validates the proposed AFLEMP framework on two publicly available emotion datasets, AMIGOS and DREAMER, achieving average accuracies of 88.30% and 84.10%, respectively. The proposed AFLEMP framework proves robust, scalable, and communication-efficient. AFLEMP is the first FL framework proposed for emotion recognition using multi-modal physiological signals while reducing data heterogeneity, and it outperforms existing FL methods.
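The abstract does not spell out the aggregation rule behind "scaled weighted federated averaging", so the sketch below shows plain sample-count-weighted federated averaging, the FedAvg-style weighting such a scheme would build on to counter Imbalanced Data Heterogeneity (IDH). The function name scaled_weighted_fedavg and the choice of raw sample counts as scaling factors are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def scaled_weighted_fedavg(client_weights, client_sample_counts):
    """Aggregate client model weights on the server, scaling each
    client's contribution by its share of the total training samples.

    client_weights: list of per-client weight lists (one np.ndarray
                    per model layer, same shapes across clients).
    client_sample_counts: local sample count per client; clients with
                          less data contribute proportionally less,
                          which mitigates imbalance across clients.
    """
    total = float(sum(client_sample_counts))
    scales = [n / total for n in client_sample_counts]
    # Weighted sum, layer by layer, across all clients.
    global_weights = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(s * w[layer_idx] for s, w in zip(scales, client_weights))
        global_weights.append(layer)
    return global_weights

# Toy usage: three clients with imbalanced data sizes (100, 300, 600).
clients = [[np.ones((2, 2)) * v] for v in (1.0, 2.0, 3.0)]
counts = [100, 300, 600]
print(scaled_weighted_fedavg(clients, counts)[0])  # 0.1*1 + 0.3*2 + 0.6*3 = 2.5
```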
Pages: 16
Related Papers
50 records in total
  • [1] Federated learning inspired privacy sensitive emotion recognition based on multi-modal physiological sensors
    Gahlan, Neha
    Sethia, Divyashikha
[J]. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (03): 3179-3201
  • [2] Emotion recognition based on multi-modal physiological signals and transfer learning
    Fu, Zhongzheng
    Zhang, Boning
    He, Xinrun
    Li, Yixuan
    Wang, Haoyuan
    Huang, Jian
    [J]. FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [3] Attention-based Multi-modal Sentiment Analysis and Emotion Detection in Conversation using RNN
    Huddar, Mahesh G.
    Sannakki, Sanjeev S.
    Rajpurohit, Vijay S.
[J]. INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2021, 6 (06): 112-121
  • [4] Multi-modal Attention for Speech Emotion Recognition
    Pan, Zexu
    Luo, Zhaojie
    Yang, Jichen
    Li, Haizhou
[J]. INTERSPEECH 2020, 2020: 364-368
  • [5] Multi-modal emotion recognition using recurrence plots and transfer learning on physiological signals
    Elalamy, Rayan
    Fanourakis, Marios
    Chanel, Guillaume
[J]. 2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2021
  • [6] ATTENTION DRIVEN FUSION FOR MULTI-MODAL EMOTION RECOGNITION
    Priyasad, Darshana
    Fernando, Tharindu
    Denman, Simon
    Sridharan, Sridha
    Fookes, Clinton
[J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 3227-3231
  • [7] Emotion recognition with multi-modal peripheral physiological signals
    Gohumpu, Jennifer
    Xue, Mengru
    Bao, Yanchi
    [J]. FRONTIERS IN COMPUTER SCIENCE, 2023, 5
  • [8] Emotion classification with multi-modal physiological signals using multi-attention-based neural network
    Zou, Chengsheng
    Deng, Zhen
    He, Bingwei
    Yan, Maosong
    Wu, Jie
    Zhu, Zhaoju
[J]. COGNITIVE COMPUTATION AND SYSTEMS, 2024, 6 (1-3): 1-11
  • [9] Exploring temporal representations by leveraging attention-based bidirectional LSTM-RNNs for multi-modal emotion recognition
    Li, Chao
    Bao, Zhongtian
    Li, Linhao
    Zhao, Ziping
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2020, 57 (03)
  • [10] Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review
    Zhang, Jianhua
    Yin, Zhong
    Chen, Peng
    Nichele, Stefano
[J]. INFORMATION FUSION, 2020, 59: 103-126