Deep Multi-Modal Discriminative and Interpretability Network for Alzheimer's Disease Diagnosis

Cited by: 20
|
Authors
Zhu, Qi [1 ]
Xu, Bingliang [1 ]
Huang, Jiashuang [2 ]
Wang, Heyang [1 ]
Xu, Ruting [1 ]
Shao, Wei [1 ]
Zhang, Daoqiang [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Peoples R China
[2] Nantong Univ, Sch Informat Sci & Technol, Nantong 226019, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Correlation; Deep learning; Brain modeling; Magnetic resonance imaging; Analytical models; Data models; Biomarkers; Multi-modal discriminative representation; block-diagonal constraint; generalized canonical correlation analysis; knowledge distillation; Alzheimer's disease; CLASSIFICATION;
DOI
10.1109/TMI.2022.3230750
Chinese Library Classification (CLC)
TP39 [Applications of Computers];
Discipline Classification Codes
081203; 0835
Abstract
Multi-modal fusion has become an important data analysis technique in Alzheimer's disease (AD) diagnosis, as it aims to effectively extract and exploit complementary information across modalities. However, most existing fusion methods pursue a common feature representation through transformation and ignore the discriminative structural information among samples. In addition, most fusion methods rely on high-order feature extraction, such as deep neural networks, which makes it difficult to identify biomarkers. In this paper, we propose a novel method named the deep multi-modal discriminative and interpretability network (DMDIN), which aligns samples in a discriminative common space and provides a new approach to identifying the brain regions (ROIs) that are significant for AD diagnosis. Specifically, we reconstruct each modality with a hierarchical representation through a multilayer perceptron (MLP), and exploit shared self-expression coefficients constrained by diagonal blocks to embed inter-class and intra-class structural information. Further, generalized canonical correlation analysis (GCCA) is adopted as a correlation constraint to generate a discriminative common space, in which samples of the same category cluster together while samples of different categories stay apart. Finally, to enhance the interpretability of the deep learning model, we use knowledge distillation to reproduce the coordinated representations and to capture the influence of individual brain regions on AD classification. Experiments show that the proposed method outperforms several state-of-the-art methods in AD diagnosis.
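For readers who want a concrete picture of the mechanisms the abstract describes (per-modality MLP encoders, shared self-expression coefficients with a block-diagonal penalty, and a GCCA-style agreement term that pulls modalities into a common space), the following is a minimal PyTorch sketch. All class, variable, and loss-weight names (MultiModalSketch, C, alpha, beta) are illustrative assumptions, not the authors' released implementation; the GCCA constraint is approximated by a simple mean-agreement term, and the knowledge-distillation step used for interpretability is omitted.

```python
# Minimal sketch of the ideas described in the abstract: hierarchical MLP
# encoders per modality, a shared self-expression matrix with a
# block-diagonal penalty, and a GCCA-style agreement term.
# Names and loss weights are illustrative assumptions.
import torch
import torch.nn as nn


class MultiModalSketch(nn.Module):
    def __init__(self, input_dims, hidden_dim, n_samples):
        super().__init__()
        # One hierarchical MLP encoder per modality.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim))
            for d in input_dims
        )
        # Shared self-expression coefficients across modalities (n x n).
        self.C = nn.Parameter(1e-3 * torch.randn(n_samples, n_samples))

    def forward(self, modalities):
        return [enc(x) for enc, x in zip(self.encoders, modalities)]

    def loss(self, reps, labels, alpha=1.0, beta=1.0):
        # Self-expression: each sample is reconstructed from the others.
        self_expr = sum(((self.C @ z) - z).pow(2).mean() for z in reps)
        # Block-diagonal constraint: penalize coefficients linking samples
        # from different diagnostic classes.
        off_block = (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
        block_pen = (self.C.abs() * off_block).sum()
        # GCCA-style agreement: pull each modality toward the shared mean
        # representation (a crude surrogate for the common canonical space).
        g = torch.stack(reps).mean(dim=0)
        agree = sum((z - g).pow(2).mean() for z in reps)
        return self_expr + alpha * block_pen + beta * agree


# Toy usage: two modalities (e.g. MRI- and PET-derived ROI features) for 8 subjects.
model = MultiModalSketch(input_dims=[90, 90], hidden_dim=32, n_samples=8)
x1, x2 = torch.randn(8, 90), torch.randn(8, 90)
labels = torch.randint(0, 2, (8,))
reps = model([x1, x2])
print(model.loss(reps, labels))
```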
Pages: 1472-1483
Page count: 12
Related Articles
50 records in total
  • [41] Re-transfer learning and multi-modal learning assisted early diagnosis of Alzheimer's disease
    Fang, Meie; Jin, Zhuxin; Qin, Feiwei; Peng, Yong; Jiang, Chao; Pan, Zhigeng
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (20): 29159-29175
  • [42] Multi-modal Neuroimaging Data Fusion via Latent Space Learning for Alzheimer's Disease Diagnosis
    Zhou, Tao; Thung, Kim-Han; Liu, Mingxia; Shi, Feng; Zhang, Changqing; Shen, Dinggang
    PREDICTIVE INTELLIGENCE IN MEDICINE, 2018, 11121: 76-84
  • [43] A Multi-modal Data Platform for Diagnosis and Prediction of Alzheimer's Disease Using Machine Learning Methods
    Pang, Zhen; Wang, Xiang; Wang, Xulong; Qi, Jun; Zhao, Zhong; Gao, Yuan; Yang, Yun; Yang, Po
    MOBILE NETWORKS & APPLICATIONS, 2021, 26 (06): 2341-2352
  • [44] Alzheimer's disease diagnosis from multi-modal data via feature inductive learning and dual multilevel graph neural network
    Lei, Baiying; Li, Yafeng; Fu, Wanyi; Yang, Peng; Chen, Shaobin; Wang, Tianfu; Xiao, Xiaohua; Niu, Tianye; Fu, Yu; Wang, Shuqiang; Han, Hongbin; Qin, Jing
    MEDICAL IMAGE ANALYSIS, 2024, 97
  • [45] A Multi-Modal Deep Learning Approach to the Early Prediction of Mild Cognitive Impairment Conversion to Alzheimer's Disease
    Rana, Sijan S.; Ma, Xinhui; Pang, Wei; Wolverson, Emma
    2020 IEEE/ACM INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING, APPLICATIONS AND TECHNOLOGIES (BDCAT 2020), 2020: 9-18
  • [46] Joint Multi-Modal Longitudinal Regression and Classification for Alzheimer's Disease Prediction
    Brand, Lodewijk; Nichols, Kai; Wang, Hua; Shen, Li; Huang, Heng
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (06): 1845-1855
  • [47] Multi-modal classification of Alzheimer's disease using nonlinear graph fusion
    Tong, Tong; Gray, Katherine; Gao, Qinquan; Chen, Liang; Rueckert, Daniel
    PATTERN RECOGNITION, 2017, 63: 171-181
  • [48] Longitudinal and Multi-Modal Data Learning for Parkinson's Disease Diagnosis
    Huang, Zhongwei; Lei, Haijun; Zhao, Yujia; Zhou, Feng; Yan, Jin; Elazab, Ahmed; Lei, Baiying
    2018 IEEE 15TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2018), 2018: 1411-1414
  • [49] Interoperable Multi-Modal Data Analysis Platform for Alzheimer's Disease Management
    Pang, Zhen; Zhang, Shuhao; Yang, Yun; Qi, Jun; Yang, Po
    2020 IEEE INTL SYMP ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, INTL CONF ON BIG DATA & CLOUD COMPUTING, INTL SYMP SOCIAL COMPUTING & NETWORKING, INTL CONF ON SUSTAINABLE COMPUTING & COMMUNICATIONS (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2020), 2020: 1321-1327
  • [50] Alzheimer's disease classification method based on multi-modal medical images
    Han K.; Pan H.; Zhang W.; Bian X.; Chen C.; He S.
    Qinghua Daxue Xuebao/Journal of Tsinghua University, 2020, 60 (08): 664-671, 682