Deep Multi-Modal Discriminative and Interpretability Network for Alzheimer's Disease Diagnosis

Cited by: 20
|
Authors
Zhu, Qi [1 ]
Xu, Bingliang [1 ]
Huang, Jiashuang [2 ]
Wang, Heyang [1 ]
Xu, Ruting [1 ]
Shao, Wei [1 ]
Zhang, Daoqiang [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Peoples R China
[2] Nantong Univ, Sch Informat Sci & Technol, Nantong 226019, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Correlation; Deep learning; Brain modeling; Magnetic resonance imaging; Analytical models; Data models; Biomarkers; Multi-modal discriminative representation; block-diagonal constraint; generalized canonical correlation analysis; knowledge distillation; Alzheimer's disease; CLASSIFICATION;
DOI
10.1109/TMI.2022.3230750
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline Classification Codes
081203 ; 0835 ;
Abstract
Multi-modal fusion has become an important data analysis technique in Alzheimer's disease (AD) diagnosis, aiming to effectively extract and exploit complementary information across modalities. However, most existing fusion methods pursue a common feature representation through transformation and ignore the discriminative structural information among samples. In addition, most fusion methods rely on high-order feature extraction, such as deep neural networks, which makes it difficult to identify biomarkers. In this paper, we propose a novel method named deep multi-modal discriminative and interpretability network (DMDIN), which aligns samples in a discriminative common space and provides a new approach to identifying significant brain regions (ROIs) in AD diagnosis. Specifically, we reconstruct each modality with a hierarchical representation through a multilayer perceptron (MLP), and exploit shared self-expression coefficients constrained to be block-diagonal to embed inter-class and intra-class structural information. Further, generalized canonical correlation analysis (GCCA) is adopted as a correlation constraint to generate a discriminative common space, in which samples of the same category cluster together while samples of different categories stay apart. Finally, to enhance the interpretability of the deep learning model, we use knowledge distillation to reproduce the coordinated representations and capture the influence of each brain region on AD classification. Experiments show that the proposed method outperforms several state-of-the-art methods in AD diagnosis.
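The GCCA constraint described in the abstract seeks a single shared representation that is maximally correlated with every modality. As an illustrative sketch only (not the authors' implementation), the classical MAXVAR formulation of GCCA can be solved in closed form with NumPy: the shared representation consists of the top eigenvectors of the sum of each view's projection matrix. The names `gcca` and `views` below are hypothetical.

```python
import numpy as np

def gcca(views, k=2, reg=1e-6):
    """MAXVAR GCCA sketch: return a shared representation G (n x k)
    maximally correlated with all views. G is given by the top-k
    eigenvectors of the sum of the views' projection matrices."""
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        # Projection onto the column space of X (ridge-regularized
        # for numerical stability).
        P = X @ np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T)
        M += P
    # M is symmetric, so eigh applies; eigenvalues come in ascending order.
    _, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -k:]  # top-k eigenvectors as the common space

# Toy multi-modal data: three views generated from a shared latent signal.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))  # shared latent factors
views = [Z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(100, 10))
         for _ in range(3)]
G = gcca(views, k=2)
print(G.shape)  # (100, 2)
```

In DMDIN this correlation objective is used as a constraint on learned MLP representations rather than solved in closed form, but the eigendecomposition above conveys what the constraint optimizes: one common space agreeing with every modality at once.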
Pages: 1472-1483
Page count: 12