Multi-Modal Physiological Data Fusion for Affect Estimation Using Deep Learning

Cited by: 18
Authors:
Hssayeni, Murtadha D. [1]
Ghoraani, Behnaz [1]
Affiliations:
[1] Florida Atlantic Univ, Dept Comp & Elect Engn & Comp Sci, Boca Raton, FL 33431 USA
Source:
IEEE ACCESS | 2021 / Vol. 9
Funding:
U.S. National Science Foundation
Keywords:
Stress; Physiology; Estimation; Deep learning; Feature extraction; Data integration; Brain modeling; Affect estimation; convolutional neural networks; emotion recognition; multi-modal sensor data fusion; regression; stress detection; EMOTION RECOGNITION; NEGATIVE AFFECT;
DOI:
10.1109/ACCESS.2021.3055933
Chinese Library Classification:
TP [Automation & Computer Technology]
Discipline code:
0812
Abstract:
Automated momentary estimation of positive and negative affect (PA and NA), the basic sense of feeling, can play an essential role in detecting the early signs of mood disorders. Physiological wearable sensors and machine learning have the potential to make such automated and continuous measurements. However, the features of physiological signals that are associated with subject-reported PA or NA may not be known. In this work, we use data-driven feature extraction based on deep learning to investigate the use of raw physiological signals for estimating PA and NA. Specifically, we propose two multi-modal data fusion methods with deep Convolutional Neural Networks. We use the proposed architecture to estimate PA and NA and also to classify baseline, stress, and amusement emotions. The methods are trained and evaluated on four physiological signal modalities and one chest motion signal modality collected from 15 subjects with a chest sensing unit. Overall, our proposed model performed better than traditional machine learning on hand-crafted features. Using only two modalities, the proposed model estimated PA with a correlation of 0.69 (p < 0.05) vs. 0.59 (p < 0.05) for traditional machine learning; for NA estimation, the correlations were 0.79 (p < 0.05) vs. 0.73 (p < 0.05). The best emotion classification was achieved by the traditional method, with a 79% F1-score and 80% accuracy when all four physiological modalities were used, whereas with only two modalities the deep learning model achieved a 78% F1-score and 79% accuracy.
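The abstract contrasts multi-modal fusion strategies for CNNs over raw physiological signals. As a minimal illustrative sketch (not the authors' architecture: the modality names, kernel sizes, and filter counts here are hypothetical), the two standard options are early fusion, which stacks modalities as input channels of a single convolutional branch, and late fusion, which runs one branch per modality and concatenates the pooled features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two same-rate physiological modalities (hypothetical stand-ins for
# chest-sensor signals; the paper uses four physiological modalities).
ecg = rng.standard_normal((1, 700))   # (channels, samples)
eda = rng.standard_normal((1, 700))

def conv1d(x, kernels):
    """Valid-mode 1-D convolution of each kernel over all input channels,
    summed across channels, followed by ReLU -- the core CNN operation."""
    out = []
    for k in kernels:                               # k: (channels, width)
        acc = np.zeros(x.shape[-1] - k.shape[-1] + 1)
        for c in range(x.shape[0]):
            acc += np.convolve(x[c], k[c][::-1], mode="valid")
        out.append(np.maximum(acc, 0.0))            # ReLU
    return np.stack(out)                            # (n_kernels, time)

# Early fusion: stack modalities as channels of one branch (8 filters).
early_in = np.stack([ecg[0], eda[0]])               # (2, 700)
early_feats = conv1d(early_in, rng.standard_normal((8, 2, 15))).mean(axis=1)

# Late fusion: one branch per modality (4 filters each), then concatenate
# the globally average-pooled features.
f_ecg = conv1d(ecg, rng.standard_normal((4, 1, 15))).mean(axis=1)
f_eda = conv1d(eda, rng.standard_normal((4, 1, 15))).mean(axis=1)
late_feats = np.concatenate([f_ecg, f_eda])

# A regression head would map either feature vector to a PA or NA score.
print(early_feats.shape, late_feats.shape)          # → (8,) (8,)
```

Either feature vector would feed a small regression head for PA/NA estimation or a softmax head for the baseline/stress/amusement classification described above.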
Pages: 21642-21652
Page count: 11
Related papers (50 records):
  • [21] Small Object Detection Technology Using Multi-Modal Data Based on Deep Learning
    Park, Chi-Won
    Seo, Yuri
    Sun, Teh-Jen
    Lee, Ga-Won
    Huh, Eui-Nam
    2023 INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING, ICOIN, 2023, : 420 - 422
  • [22] Deep Gated Multi-modal Learning: In-hand Object Pose Changes Estimation using Tactile and Image Data
    Anzai, Tomoki
    Takahashi, Iyuki
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 9361 - 9368
  • [23] A Unified Deep Learning Framework for Multi-Modal Multi-Dimensional Data
    Xi, Pengcheng
    Goubran, Rafik
    Shu, Chang
    2019 IEEE INTERNATIONAL SYMPOSIUM ON MEDICAL MEASUREMENTS AND APPLICATIONS (MEMEA), 2019,
  • [24] On Multi-modal Fusion Learning in constraint propagation
    Li, Yaoyi
    Lu, Hongtao
    INFORMATION SCIENCES, 2018, 462 : 204 - 217
  • [25] Electromagnetic signal feature fusion and recognition based on multi-modal deep learning
    Hou C.
    Zhang X.
    Chen X.
    International Journal of Performability Engineering, 2020, 16 (06): : 941 - 949
  • [26] Multi-modal Fusion Brain Tumor Detection Method Based on Deep Learning
    Yao Hong-ge
    Shen Xin-xia
    Li Yu
    Yu Jun
    Lei Song-ze
    ACTA PHOTONICA SINICA, 2019, 48 (07)
  • [27] Deep learning supported breast cancer classification with multi-modal image fusion
    Hamdy, Eman
    Zaghloul, Mohamed Saad
    Badawy, Osama
    2021 22ND INTERNATIONAL ARAB CONFERENCE ON INFORMATION TECHNOLOGY (ACIT), 2021, : 319 - 325
  • [28] Deep Learning Based Multi-Modal Fusion Architectures for Maritime Vessel Detection
    Farahnakian, Fahimeh
    Heikkonen, Jukka
    REMOTE SENSING, 2020, 12 (16)
  • [29] Classifying Excavator Operations with Fusion Network of Multi-modal Deep Learning Models
    Kim, Jin-Young
    Cho, Sung-Bae
    14TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING MODELS IN INDUSTRIAL AND ENVIRONMENTAL APPLICATIONS (SOCO 2019), 2020, 950 : 25 - 34
  • [30] Deep-Learning-Based Multi-Modal Fusion for Fast MR Reconstruction
    Xiang, Lei
    Chen, Yong
    Chang, Weitang
    Zhan, Yiqiang
    Lin, Weili
    Wang, Qian
    Shen, Dinggang
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2019, 66 (07) : 2105 - 2114