Cardiovascular disease detection based on deep learning and multi-modal data fusion

Cited by: 2
Authors
Zhu, Jiayuan [1 ]
Liu, Hui [1 ]
Liu, Xiaowei [1 ]
Chen, Chao [1 ]
Shu, Minglei [1 ]
Affiliations
[1] Qilu Univ Technol, Shandong Artificial Intelligence Inst, Shandong Acad Sci, Jinan 250014, Peoples R China
Keywords
Data fusion; ECG; PCG; Deep multi-scale network; SVM-RFECV; Feature selection
DOI
10.1016/j.bspc.2024.106882
CLC number
R318 [Biomedical Engineering];
Subject classification code
0831;
Abstract
Electrocardiogram (ECG) and phonocardiogram (PCG) are widely used for the early prevention and diagnosis of cardiovascular diseases (CVDs) because they accurately reflect the state of the heart from different perspectives and can be conveniently collected in a non-invasive manner. However, few studies use both ECG and PCG for CVD detection, and extracting discriminative features without losing useful information is challenging. In this study, we propose a dual-scale deep residual network (DDR-Net) to automatically extract features from raw PCG and ECG signals, respectively. A dual-scale feature aggregation module is used to integrate low-level features at different scales. We employ SVM-RFECV to select important features and use an SVM for the final classification. The proposed method was evaluated on the "training-a" set of the 2016 PhysioNet/CinC Challenge database. The experimental results show that our method outperforms both methods using only ECG or PCG and existing multi-modal studies, yielding an accuracy of 91.6% and an AUC of 0.962. The importance of ECG and PCG features for CVD detection is also analyzed.
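The selection-and-classification stage described in the abstract can be sketched with scikit-learn's standard SVM-RFECV pattern. This is a minimal illustration on synthetic data, not the paper's pipeline: the feature matrix here is a random stand-in for the deep features the DDR-Net would extract, and all dataset parameters are assumptions.

```python
# Hedged sketch of SVM-RFECV feature selection followed by SVM classification.
# The synthetic data stands in for deep features extracted from ECG/PCG signals.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the fused ECG/PCG deep-feature matrix.
X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)

# Recursive feature elimination with cross-validation; a linear SVM is used
# so that coefficient magnitudes can rank feature importance at each step.
selector = RFECV(estimator=SVC(kernel="linear"),
                 step=1,
                 cv=StratifiedKFold(5),
                 scoring="accuracy")
selector.fit(X, y)
X_selected = selector.transform(X)

# Final classification on the selected feature subset.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X_selected, y, cv=5)
print(selector.n_features_, X_selected.shape, round(scores.mean(), 3))
```

Using a linear kernel inside RFECV is the usual choice because elimination needs per-feature weights; the final classifier can then use a non-linear kernel on the reduced feature set.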
Pages: 7
Related papers
50 items total
  • [41] Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry, and Fusion
    Wang, Yang
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (01)
  • [42] Pedestrian Facial Attention Detection Using Deep Fusion and Multi-Modal Fusion Classifier
    Lian, Jing
    Wang, Zhenghao
    Yang, Dongfang
    Zheng, Wen
    Li, Linhui
    Zhang, Yibin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (01) : 967 - 980
  • [44] Attention-based multi-modal fusion sarcasm detection
    Liu, Jing
    Tian, Shengwei
    Yu, Long
    Long, Jun
    Zhou, Tiejun
    Wang, Bo
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 44 (02) : 2097 - 2108
  • [45] Balanced Multi-modal Learning with Hierarchical Fusion for Fake News Detection
    Wu, Fei
    Chen, Shu
    Gao, Guangwei
    Ji, Yimu
    Jing, Xiao-Yuan
    PATTERN RECOGNITION, 2025, 164
  • [46] Learning Adaptive Fusion Bank for Multi-Modal Salient Object Detection
    Wang, Kunpeng
    Tu, Zhengzheng
    Li, Chenglong
    Zhang, Cheng
    Luo, Bin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 7344 - 7358
  • [47] Multi-modal cascade detection of pipeline defects based on deep transfer metric learning
    Gao, Boxuan
    Zhao, Hong
    Miao, Xingyuan
    ENGINEERING FAILURE ANALYSIS, 2024, 160
  • [48] Deep-Learning for Change Detection Using Multi-Modal Fusion of Remote Sensing Images: A Review
    Saidi, Souad
    Idbraim, Soufiane
    Karmoude, Younes
    Masse, Antoine
    Arbelo, Manuel
    REMOTE SENSING, 2024, 16 (20)
  • [49] Disease Classification Model Based on Multi-Modal Feature Fusion
    Wan, Zhengyu
    Shao, Xinhui
    IEEE ACCESS, 2023, 11 : 27536 - 27545
  • [50] A Unified Deep Learning Framework for Multi-Modal Multi-Dimensional Data
    Xi, Pengcheng
    Goubran, Rafik
    Shu, Chang
    2019 IEEE INTERNATIONAL SYMPOSIUM ON MEDICAL MEASUREMENTS AND APPLICATIONS (MEMEA), 2019,