Deep Multi-Modal Discriminative and Interpretability Network for Alzheimer's Disease Diagnosis

Cited by: 20
Authors
Zhu, Qi [1 ]
Xu, Bingliang [1 ]
Huang, Jiashuang [2 ]
Wang, Heyang [1 ]
Xu, Ruting [1 ]
Shao, Wei [1 ]
Zhang, Daoqiang [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Peoples R China
[2] Nantong Univ, Sch Informat Sci & Technol, Nantong 226019, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Correlation; Deep learning; Brain modeling; Magnetic resonance imaging; Analytical models; Data models; Biomarkers; Multi-modal discriminative representation; block-diagonal constraint; generalized canonical correlation analysis; knowledge distillation; Alzheimer's disease; CLASSIFICATION;
DOI
10.1109/TMI.2022.3230750
CLC Number
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Multi-modal fusion has become an important data analysis technique in Alzheimer's disease (AD) diagnosis, aiming to effectively extract and exploit complementary information across modalities. However, most existing fusion methods pursue a common feature representation through transformation while ignoring the discriminative structural information among samples. In addition, most fusion methods rely on high-order feature extraction, such as deep neural networks, which makes it difficult to identify biomarkers. In this paper, we propose a novel method named deep multi-modal discriminative and interpretability network (DMDIN), which aligns samples in a discriminative common space and provides a new approach to identifying significant brain regions (ROIs) in AD diagnosis. Specifically, we reconstruct each modality with a hierarchical representation through a multilayer perceptron (MLP), and exploit shared self-expression coefficients constrained to be block-diagonal to embed inter-class and intra-class structural information. Further, generalized canonical correlation analysis (GCCA) is adopted as a correlation constraint to generate a discriminative common space, in which samples of the same category cluster together while samples of different categories are pushed apart. Finally, to enhance the interpretability of the deep learning model, we use knowledge distillation to reproduce the coordinated representations and quantify the influence of individual brain regions on AD classification. Experiments show that the proposed method outperforms several state-of-the-art methods in AD diagnosis.
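To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation. It assumes two modalities encoded by per-modality MLPs, a shared self-expression matrix C penalized toward a block-diagonal structure given the class labels, and a simplified stand-in for the GCCA constraint that pulls each modality's embedding toward a shared representation G; the knowledge-distillation step used for ROI interpretability is omitted. All names (ModalityEncoder, dmdin_style_loss, block_diagonal_penalty) and the loss weights are illustrative assumptions.

```python
# Hypothetical sketch of a DMDIN-style objective (illustrative only, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Per-modality MLP producing a common-dimension embedding."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, out_dim)
        )
    def forward(self, x):
        return self.net(x)

def block_diagonal_penalty(C, labels):
    """Penalize self-expression weights linking samples from different classes,
    encouraging C to be (approximately) block-diagonal w.r.t. the labels."""
    same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    return ((1.0 - same_class) * C.abs()).sum()

def dmdin_style_loss(X_list, encoders, C, G, labels,
                     lam_se=1.0, lam_bd=0.1, lam_cca=1.0):
    """Shared self-expression + block-diagonal + simplified GCCA-style alignment."""
    loss = 0.0
    for X, enc in zip(X_list, encoders):
        Z = enc(X)                                    # n x d embedding for this modality
        loss = loss + lam_se * F.mse_loss(C @ Z, Z)   # self-expression: Z ~ C Z
        loss = loss + lam_cca * F.mse_loss(Z, G)      # pull toward common representation G
    loss = loss + lam_bd * block_diagonal_penalty(C, labels)
    return loss

if __name__ == "__main__":
    torch.manual_seed(0)
    n, d1, d2, d = 32, 90, 90, 16                     # e.g., 90 ROI features per modality
    X_list = [torch.randn(n, d1), torch.randn(n, d2)]
    labels = torch.randint(0, 2, (n,))                # e.g., AD vs. NC
    encoders = [ModalityEncoder(d1, 64, d), ModalityEncoder(d2, 64, d)]
    C = nn.Parameter(torch.zeros(n, n))               # shared self-expression coefficients
    G = nn.Parameter(torch.randn(n, d))               # common (GCCA-style) representation
    params = [C, G] + [p for e in encoders for p in e.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(10):
        opt.zero_grad()
        loss = dmdin_style_loss(X_list, encoders, C, G, labels)
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.4f}")
```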
Pages: 1472-1483
Number of pages: 12
Related Papers
50 items in total
  • [31] Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis
    Ning, Zhenyuan
    Xiao, Qing
    Feng, Qianjin
    Chen, Wufan
    Zhang, Yu
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2021, 40 (06) : 1632 - 1645
  • [32] BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer's disease diagnosis
    Zhang, Jin
    He, Xiaohai
    Qing, Linbo
    Gao, Feng
    Wang, Bin
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2022, 217
  • [33] Multi-modal imaging genetics data fusion by deep auto-encoder and self-representation network for Alzheimer's disease diagnosis and biomarkers extraction
    Jiao, Cui-Na
    Gao, Ying-Lian
    Ge, Dao-Hui
    Shang, Junliang
    Liu, Jin-Xing
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 130
  • [34] A multi-modal deep neural network for multi-class liver cancer diagnosis
    Khan, Rayyan Azam
    Fu, Minghan
    Burbridge, Brent
    Luo, Yigang
    Wu, Fang-Xiang
    NEURAL NETWORKS, 2023, 165 : 553 - 561
  • [35] Deep Robust Unsupervised Multi-Modal Network
    Yang, Yang
    Wu, Yi-Feng
    Zhan, De-Chuan
    Liu, Zhi-Bin
    Jiang, Yuan
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 5652 - 5659
  • [36] Multi-modal feature selection with anchor graph for Alzheimer's disease
    Li, Jiaye
    Xu, Hang
    Yu, Hao
    Jiang, Zhihao
    Zhu, Lei
    FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [37] Author Correction: Predicting Alzheimer's disease progression using multi-modal deep learning approach
    Lee, Garam
    Nho, Kwangsik
    Kang, Byungkon
    Sohn, Kyung-Ah
    Kim, Dokyoon
    SCIENTIFIC REPORTS, 13
  • [38] Diagnosis of Alzheimer's Disease by Canonical Correlation Analysis Based Fusion of Multi-Modal Medical Images
    Baninajjar, Anahita
    Soltanian-Zadeh, Hamid
    Rezaie, Sajad
    Mohammadi-Nejad, Ali-Reza
    2020 INTERNATIONAL CONFERENCE ON E-HEALTH AND BIOENGINEERING (EHB), 2020,
  • [39] Re-transfer learning and multi-modal learning assisted early diagnosis of Alzheimer's disease
    Fang, Meie
    Jin, Zhuxin
    Qin, Feiwei
    Peng, Yong
    Jiang, Chao
    Pan, Zhigeng
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 : 29159 - 29175
  • [40] A Multi-modal Data Platform for Diagnosis and Prediction of Alzheimer's Disease Using Machine Learning Methods
    Pang, Zhen
    Wang, Xiang
    Wang, Xulong
    Qi, Jun
    Zhao, Zhong
    Gao, Yuan
    Yang, Yun
    Yang, Po
    MOBILE NETWORKS AND APPLICATIONS, 2021, 26 : 2341 - 2352