Multi-Modal Physiological Data Fusion for Affect Estimation Using Deep Learning

Cited by: 18
Authors
Hssayeni, Murtadha D. [1 ]
Ghoraani, Behnaz [1 ]
Affiliations
[1] Florida Atlantic Univ, Dept Comp & Elect Engn & Comp Sci, Boca Raton, FL 33431 USA
Source
IEEE ACCESS | 2021 / Vol. 9
Funding
U.S. National Science Foundation
Keywords
Stress; Physiology; Estimation; Deep learning; Feature extraction; Data integration; Brain modeling; Affect estimation; convolutional neural networks; emotion recognition; multi-modal sensor data fusion; regression; stress detection; EMOTION RECOGNITION; NEGATIVE AFFECT;
DOI
10.1109/ACCESS.2021.3055933
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Automated momentary estimation of positive and negative affect (PA and NA), the basic senses of feeling, can play an essential role in detecting early signs of mood disorders. Wearable physiological sensors and machine learning have the potential to make such automated and continuous measurements. However, the features of physiological signals that are associated with subject-reported PA or NA may not be known. In this work, we use data-driven feature extraction based on deep learning to investigate the use of raw physiological signals for estimating PA and NA. Specifically, we propose two multi-modal data fusion methods built on deep Convolutional Neural Networks. We use the proposed architectures to estimate PA and NA and also to classify baseline, stress, and amusement emotions. The methods were trained and evaluated on four physiological signal modalities and one chest-motion signal modality, collected with a chest-worn sensing unit from 15 subjects. Overall, our proposed model performed better than traditional machine learning on hand-crafted features. Using only two modalities, our proposed model estimated PA with a correlation of 0.69 (p < 0.05) vs. 0.59 (p < 0.05) for traditional machine learning; the corresponding correlations for NA estimation were 0.79 (p < 0.05) vs. 0.73 (p < 0.05). The best emotion classification was achieved by the traditional method, with a 79% F1-score and 80% accuracy when all four physiological modalities were used, whereas the deep learning model achieved a 78% F1-score and 79% accuracy with only two modalities.
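The record above gives only a high-level description of the fusion approach, so the following is a minimal illustrative sketch rather than the authors' exact network: a feature-level (late) fusion design in PyTorch, with one 1-D CNN branch per raw signal modality and a shared regression head producing the PA and NA scores. The class names, layer sizes, and the 512-sample window length are assumptions introduced here for illustration.

import torch
import torch.nn as nn

class ModalityCNN(nn.Module):
    """1-D CNN branch that learns features from one raw signal modality."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average pooling over time
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):              # x: (batch, 1, time)
        return self.proj(self.conv(x).squeeze(-1))

class LateFusionAffectNet(nn.Module):
    """Late fusion: per-modality CNN features are concatenated and fed
    to a small head that regresses the PA and NA scores."""
    def __init__(self, n_modalities=2, feat_dim=64):
        super().__init__()
        self.branches = nn.ModuleList(
            ModalityCNN(feat_dim) for _ in range(n_modalities)
        )
        self.head = nn.Sequential(
            nn.Linear(n_modalities * feat_dim, 32), nn.ReLU(),
            nn.Linear(32, 2),          # two outputs: [PA, NA]
        )

    def forward(self, signals):        # list of (batch, 1, time) tensors
        feats = [branch(s) for branch, s in zip(self.branches, signals)]
        return self.head(torch.cat(feats, dim=1))

# Usage with two hypothetical modalities (e.g., ECG and EDA windows):
ecg, eda = torch.randn(8, 1, 512), torch.randn(8, 1, 512)
model = LateFusionAffectNet(n_modalities=2)
pa_na = model([ecg, eda])              # shape (8, 2): PA and NA per window

The alternative, signal-level (early) fusion, would stack the modalities as input channels of a single CNN, which is straightforward only when the modalities share a sampling rate; late fusion, as sketched here, lets each branch handle its own rate and window length. Held-out performance would then be scored as in the abstract, e.g., by the Pearson correlation between predicted and self-reported PA/NA.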
Pages: 21642-21652
Page count: 11