A quantitatively interpretable model for Alzheimer's disease prediction using deep counterfactuals

Times Cited: 0
Authors
Oh, Kwanseok [1]
Heo, Da-Woon [1]
Mulyadi, Ahmad Wisnu [2]
Jung, Wonsik [2]
Kang, Eunsong
Lee, Kun Ho [3,4,5]
Suk, Heung-Il [1]
Affiliations
[1] Korea Univ, Dept Artificial Intelligence, Seoul 02841, South Korea
[2] Korea Univ, Dept Brain & Cognit Engn, Seoul 02841, South Korea
[3] Chosun Univ, Gwangju Alzheimers & Related Dementia Cohort Res C, Gwangju 61452, South Korea
[4] Chosun Univ, Dept Biomed Sci, Gwangju 61452, South Korea
[5] Korea Brain Res Inst, Daegu 41062, South Korea
Keywords
Alzheimer's disease; Counterfactual reasoning; Quantitative feature-based in-depth analysis; Counterfactual-guided attention; MILD COGNITIVE IMPAIRMENT; ATROPHY; MRI; PROGRESSION; NEUROPATHOLOGY; HIPPOCAMPUS; IMAGES; CORTEX; AD;
DOI
10.1016/j.neuroimage.2025.121077
Chinese Library Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
Deep learning (DL) for predicting Alzheimer's disease (AD) has enabled timely intervention in disease progression, yet it still demands careful interpretability to explain how DL models reach their decisions. Counterfactual reasoning has recently gained increasing attention in medical research because of its ability to provide refined visual explanatory maps. However, such maps are insufficient on visual inspection alone unless their medical or neuroscientific validity is demonstrated through quantitative features. In this study, we synthesize counterfactual-labeled structural MRIs using our proposed framework and transform them into gray matter density maps to measure volumetric changes over parcellated regions of interest (ROIs). We also devise a lightweight linear classifier that boosts the effectiveness of the constructed ROI features, promotes quantitative interpretation, and achieves predictive performance comparable to DL methods. Throughout this process, our framework produces an "AD-relatedness index" for each ROI, offering an intuitive understanding of brain status for an individual patient and across patient groups with respect to AD progression.
Pages: 18
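
As a reading aid, the following minimal Python sketch (an assumption, not the authors' released code) illustrates the kind of pipeline the abstract describes: ROI-wise gray-matter changes derived from a counterfactual map are summarized as features, a lightweight linear classifier is fit on them, and its learned weights are read as a per-ROI AD-relatedness proxy. All array sizes, ROI choices, and the index formula are hypothetical, and synthetic arrays stand in for real MRI and for the framework's counterfactual synthesis.

# Illustrative sketch only: ROI volumetric change from a counterfactual
# gray-matter map, a lightweight linear classifier, and a per-ROI
# "AD-relatedness" proxy. Data and sizes are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_SUBJECTS, N_ROIS, VOXELS_PER_ROI = 200, 100, 50          # assumed sizes

# Hypothetical parcellation: every voxel carries one ROI label.
parcellation = np.repeat(np.arange(N_ROIS), VOXELS_PER_ROI)

def roi_volume_change(gm_real, gm_cf, labels, n_rois):
    # Sum the gray-matter density change per ROI (a stand-in for the
    # volumetric measure described in the abstract).
    diff = gm_cf - gm_real
    return np.array([diff[labels == r].sum() for r in range(n_rois)])

# Synthetic gray-matter maps; counterfactuals of AD subjects lose density
# in a few arbitrary "affected" ROIs to emulate localized atrophy.
y = rng.integers(0, 2, size=N_SUBJECTS)                     # 1 = AD, 0 = CN
gm_real = rng.uniform(0.3, 0.8, size=(N_SUBJECTS, parcellation.size))
gm_cf = gm_real.copy()
for r in (5, 17, 42):                                       # arbitrary ROIs
    gm_cf[np.ix_(y == 1, parcellation == r)] -= 0.1

# ROI-wise volumetric-change features for each subject.
X = np.vstack([roi_volume_change(gm_real[i], gm_cf[i], parcellation, N_ROIS)
               for i in range(N_SUBJECTS)])

# Lightweight linear classifier on the ROI features.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Per-ROI "AD-relatedness" proxy: |weight| scaled by mean |change|
# (the paper's exact index definition is not given in this record).
ad_relatedness = np.abs(clf.coef_.ravel()) * np.abs(X).mean(axis=0)
print("Top ROIs by AD-relatedness proxy:", np.argsort(ad_relatedness)[::-1][:5])

In this toy setup the ROIs whose counterfactual density was perturbed dominate the ranking, which is the behavior the abstract attributes to its AD-relatedness index; the actual paper derives the counterfactual maps from its proposed generative framework rather than from synthetic perturbations.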