Multi-Modal Physiological Data Fusion for Affect Estimation Using Deep Learning

Cited by: 18
Authors
Hssayeni, Murtadha D. [1 ]
Ghoraani, Behnaz [1 ]
Affiliations
[1] Florida Atlantic Univ, Dept Comp & Elect Engn & Comp Sci, Boca Raton, FL 33431 USA
Source
IEEE ACCESS | 2021, Vol. 9
Funding
U.S. National Science Foundation;
Keywords
Stress; Physiology; Estimation; Deep learning; Feature extraction; Data integration; Brain modeling; Affect estimation; convolutional neural networks; emotion recognition; multi-modal sensor data fusion; regression; stress detection; EMOTION RECOGNITION; NEGATIVE AFFECT;
DOI
10.1109/ACCESS.2021.3055933
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Automated momentary estimation of positive and negative affect (PA and NA), the basic senses of feeling, can play an essential role in detecting early signs of mood disorders. Wearable physiological sensors combined with machine learning have the potential to make such automated, continuous measurements. However, the features of physiological signals that are associated with subject-reported PA or NA may not be known in advance. In this work, we use data-driven feature extraction based on deep learning to investigate the use of raw physiological signals for estimating PA and NA. Specifically, we propose two multi-modal data fusion methods built on deep Convolutional Neural Networks (CNNs). We use the proposed architectures to estimate PA and NA and also to classify baseline, stress, and amusement emotions. The methods are trained and evaluated on four physiological signal modalities and one chest-motion modality collected with a chest-worn sensing unit from 15 subjects. Overall, our proposed model outperformed traditional machine learning on hand-crafted features. Using only two modalities, it estimated PA with a correlation of 0.69 (p < 0.05), versus 0.59 (p < 0.05) for traditional machine learning; the corresponding correlations for NA estimation were 0.79 (p < 0.05) versus 0.73 (p < 0.05). The best emotion classification was achieved by the traditional method, with a 79% F1-score and 80% accuracy when all four physiological modalities were used, whereas the deep learning model reached a 78% F1-score and 79% accuracy with only two modalities.
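To make the fusion idea in the abstract concrete, below is a minimal PyTorch sketch of feature-level fusion of two raw physiological modalities with per-modality 1-D CNNs. All names (ModalityEncoder, LateFusionAffectNet), the window length, sampling rate, channel counts, and layer sizes are illustrative assumptions; this is one plausible fusion strategy, not the architecture reported in the paper.

    # Hypothetical sketch: feature-level fusion of raw physiological signals
    # for PA/NA regression. Not the authors' reported architecture.
    import torch
    import torch.nn as nn

    class ModalityEncoder(nn.Module):
        """Per-modality 1-D CNN mapping a raw signal window to an embedding."""
        def __init__(self, in_channels: int, embed_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=7, stride=2, padding=3),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            )
            self.proj = nn.Linear(64, embed_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, in_channels, time)
            return self.proj(self.net(x).squeeze(-1))

    class LateFusionAffectNet(nn.Module):
        """Encode each modality separately, concatenate the embeddings,
        then regress PA and NA with a shared head."""
        def __init__(self, modality_channels: dict):
            super().__init__()
            self.encoders = nn.ModuleDict(
                {name: ModalityEncoder(c) for name, c in modality_channels.items()}
            )
            fused = 64 * len(modality_channels)
            self.head = nn.Sequential(
                nn.Linear(fused, 32), nn.ReLU(), nn.Linear(32, 2)
            )

        def forward(self, inputs: dict) -> torch.Tensor:
            feats = [self.encoders[name](x) for name, x in inputs.items()]
            return self.head(torch.cat(feats, dim=-1))  # (batch, 2) -> [PA, NA]

    # Example: two assumed modalities (e.g., ECG and EDA), 4 s windows at 175 Hz.
    model = LateFusionAffectNet({"ecg": 1, "eda": 1})
    batch = {"ecg": torch.randn(8, 1, 700), "eda": torch.randn(8, 1, 700)}
    print(model(batch).shape)  # torch.Size([8, 2])

Concatenating per-modality embeddings (rather than stacking raw channels at the input) lets each encoder adapt to its signal's sampling characteristics and makes it easy to drop modalities, which matters here since the two-modality model was competitive with the four-modality baseline.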
Pages: 21642-21652
Page count: 11
Related Papers (50 in total)
  • [31] Robust Deep Multi-modal Learning Based on Gated Information Fusion Network
    Kim, Jaekyum
    Koh, Junho
    Kim, Yecheol
    Choi, Jaehyung
    Hwang, Youngbae
    Choi, Jun Won
    COMPUTER VISION - ACCV 2018, PT IV, 2019, 11364 : 90 - 106
  • [32] A Comprehensive Survey on Deep Learning Multi-Modal Fusion: Methods, Technologies and Applications
    Jiao, Tianzhe
    Guo, Chaopeng
    Feng, Xiaoyue
    Chen, Yuming
    Song, Jie
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 80 (01): : 1 - 35
  • [33] Multi-Modal Data Fusion for Big Events
    Papacharalapous, A. E.
    Hovelynck, Stefan
    Cats, O.
    Lankhaar, J. W.
    Daamen, W.
    van Oort, N.
    van Lint, J. W. C.
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2015, 7 (04) : 5 - 10
  • [34] Pedestrian Facial Attention Detection Using Deep Fusion and Multi-Modal Fusion Classifier
    Lian, Jing
    Wang, Zhenghao
    Yang, Dongfang
    Zheng, Wen
    Li, Linhui
    Zhang, Yibin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (01) : 967 - 980
  • [35] Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion
    Younis, Eman M. G.
    Zaki, Someya Mohsen
    Kanjo, Eiman
    Houssein, Essam H.
    SENSORS, 2022, 22 (15)
  • [37] Deep multi-modal data analysis and fusion for robust scene understanding in CAVs
    Papandreou, Andreas
    Kloukiniotis, Andreas
    Lalos, Aris
    Moustakas, Konstantinos
    IEEE MMSP 2021: 2021 IEEE 23RD INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2021,
  • [38] Multi-Modal Medical Image Fusion Using Transfer Learning Approach
    Kalamkar, Shrida
    Mary, Geetha A.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (12) : 483 - 488
  • [39] Multi-modal deep fusion for bridge condition assessment
    Momtaz, M.
    Li, T.
    Harris, D. K.
    Lattanzi, D.
    JOURNAL OF INFRASTRUCTURE INTELLIGENCE AND RESILIENCE, 2023, 2 (04):
  • [40] Multi-modal Fusion
    Liu, Huaping
    Hussain, Amir
    Wang, Shuliang
    INFORMATION SCIENCES, 2018, 432 : 462 - 462