Multi-Modal Physiological Data Fusion for Affect Estimation Using Deep Learning

Cited by: 18
Authors
Hssayeni, Murtadha D. [1]
Ghoraani, Behnaz [1]
Affiliation
[1] Florida Atlantic Univ, Dept Comp & Elect Engn & Comp Sci, Boca Raton, FL 33431 USA
Source
IEEE ACCESS | 2021, Vol. 9
Funding
U.S. National Science Foundation
Keywords
Stress; Physiology; Estimation; Deep learning; Feature extraction; Data integration; Brain modeling; Affect estimation; convolutional neural networks; emotion recognition; multi-modal sensor data fusion; regression; stress detection; EMOTION RECOGNITION; NEGATIVE AFFECT
DOI
10.1109/ACCESS.2021.3055933
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Automated momentary estimation of positive and negative affect (PA and NA), the basic sense of feeling, can play an essential role in detecting the early signs of mood disorders. Wearable physiological sensors and machine learning have the potential to make such automated and continuous measurements. However, the features of physiological signals that are associated with subject-reported PA or NA may not be known. In this work, we use data-driven feature extraction based on deep learning to investigate the use of raw physiological signals for estimating PA and NA. Specifically, we propose two multi-modal data fusion methods built on deep Convolutional Neural Networks. We use the proposed architecture to estimate PA and NA and also to classify baseline, stress, and amusement emotions. The methods are trained and evaluated on four physiological signal modalities and one chest-motion modality collected from 15 subjects with a chest-worn sensing unit. Overall, our proposed model performed better than traditional machine learning on hand-crafted features. Using only two modalities, our proposed model estimated PA with a correlation of 0.69 (p < 0.05), compared with 0.59 (p < 0.05) for traditional machine learning. The corresponding correlations for NA estimation were 0.79 (p < 0.05) vs. 0.73 (p < 0.05). The best emotion classification was achieved by the traditional method, with a 79% F1-score and 80% accuracy when all four physiological modalities were used, whereas the deep learning model achieved a 78% F1-score and 79% accuracy using only two modalities.
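As a rough illustration of multi-modal fusion with 1D CNNs as described in the abstract, the minimal PyTorch sketch below contrasts an early-fusion model (raw signal modalities stacked as input channels of one shared network) with a late-fusion model (one convolutional branch per modality, concatenated before a two-output regression head for PA and NA). The abstract does not state which two fusion strategies the authors use, so early and late fusion are shown here only as common examples; the layer sizes, kernel widths, window length, and modality count are illustrative assumptions, not the architecture reported in the paper.

# Illustrative sketch only: early vs. late fusion of raw physiological
# signal windows with 1D CNNs. All hyperparameters are assumptions.
import torch
import torch.nn as nn


class EarlyFusionCNN(nn.Module):
    """Stack all modalities as input channels and learn shared features."""

    def __init__(self, n_modalities: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_modalities, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Two regression outputs: positive affect (PA) and negative affect (NA).
        self.head = nn.Linear(64, 2)

    def forward(self, x):  # x: (batch, n_modalities, window_len)
        return self.head(self.features(x).squeeze(-1))


class LateFusionCNN(nn.Module):
    """One convolutional branch per modality; concatenate before the head."""

    def __init__(self, n_modalities: int = 2):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            for _ in range(n_modalities)
        ])
        self.head = nn.Linear(64 * n_modalities, 2)

    def forward(self, x):  # x: (batch, n_modalities, window_len)
        # Feed each modality through its own branch, then fuse by concatenation.
        feats = [branch(x[:, i:i + 1, :]) for i, branch in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1).squeeze(-1))


if __name__ == "__main__":
    # e.g., a batch of 8 windows from 2 chest-sensor modalities (assumed window length).
    batch = torch.randn(8, 2, 1920)
    print(EarlyFusionCNN()(batch).shape)  # torch.Size([8, 2])
    print(LateFusionCNN()(batch).shape)   # torch.Size([8, 2])

The design trade-off: early fusion lets the first convolutional layer learn cross-modal interactions directly from the raw channels, while late fusion keeps each modality's feature extractor independent until the final regression head, which can help when modalities have very different sampling characteristics.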
Pages: 21642 - 21652
Number of pages: 11