Explainable AI toward understanding the performance of the top three TADPOLE Challenge methods in the forecast of Alzheimer's disease diagnosis

Cited by: 18
Authors
Hernandez, Monica [1 ]
Ramon-Julvez, Ubaldo [1 ]
Ferraz, Francisco [1 ]
ADNI Consortium
Affiliations
[1] Univ Zaragoza, Aragon Inst Engn Res, Zaragoza, Spain
Source
PLOS ONE | 2022, Vol. 17, Issue 5
Keywords
DEMENTIA; ACCURACY; MRI
DOI
10.1371/journal.pone.0264695
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge is the most comprehensive challenge to date with regard to the number of subjects, considered features, and challenge participants. The initial objective of TADPOLE was to identify the most predictive data, features, and methods for the progression of subjects at risk of developing Alzheimer's disease. The challenge succeeded in recognizing tree-based ensemble methods, such as gradient boosting and random forest, as the best methods for the prognosis of the clinical status in Alzheimer's disease (AD). However, the challenge outcome was limited to which combination of data processing and methods exhibits the best accuracy; hence, it is difficult to determine the contribution of each method to that accuracy. Moreover, feature importance was quantified only globally by the challenge participant methods. In addition, TADPOLE provided general answers focused on improving performance while overlooking important issues such as interpretability. The purpose of this study is to intensively explore the models of the top three TADPOLE Challenge methods in a common framework for fair comparison. In addition, for these models, the most meaningful features for the prognosis of the clinical status of AD are studied, and the contribution of each feature to the accuracy of the methods is quantified. We provide plausible explanations of why the methods achieve such accuracy, and we investigate whether the methods use information coherent with clinical knowledge. Finally, we approach these issues through the analysis of SHapley Additive exPlanations (SHAP) values, a technique that has recently attracted increasing attention in the field of explainable artificial intelligence (XAI).
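The SHAP analysis mentioned in the abstract attributes a model's prediction to individual features via Shapley values. As a minimal sketch of the underlying idea, the exact Shapley value of each feature can be computed by enumerating feature coalitions, with absent features fixed at a baseline. The toy linear "risk score" model, the feature names (age, MMSE, hippocampal volume), and all numeric values below are hypothetical illustrations, not the paper's actual models or data:

```python
from itertools import combinations
from math import factorial

def model(x):
    # Toy linear "risk score" standing in for a trained tree-ensemble
    # model (hypothetical weights, chosen only for illustration).
    age, mmse, hippo = x
    return 0.03 * age - 0.5 * mmse - 2.0 * hippo

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x, relative to a baseline input."""
    n = len(x)

    def v(subset):
        # Features in `subset` take their observed value; the rest are
        # fixed at the baseline (a common SHAP-style approximation).
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):                      # coalition sizes 0..n-1
            for s in combinations(others, k):
                # Shapley weight |S|! (n-|S|-1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(s) | {i}) - v(set(s)))
        phi.append(total)
    return phi

x = [75.0, 24.0, 3.1]     # subject: age, MMSE score, hippocampal volume
base = [70.0, 28.0, 3.5]  # hypothetical cohort-average baseline
phi = shapley_values(model, x, base)
# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(base))) < 1e-9
```

For a linear model each attribution reduces to weight times feature deviation from baseline; for the tree ensembles studied in the paper, the SHAP library computes these values efficiently rather than by brute-force enumeration.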
Pages: 32
Related Papers (5 records)
  • [1] Understanding Alzheimer disease's structural connectivity through explainable AI
    Essemlali, Achraf
    St-Onge, Etienne
    Descoteaux, Maxime
    Jodoin, Pierre-Marc
    MEDICAL IMAGING WITH DEEP LEARNING, VOL 121, 2020, 121 : 217 - 229
  • [2] Advanced interpretable diagnosis of Alzheimer's disease using SECNN-RF framework with explainable AI
    AbdelAziz, Nabil M.
    Said, Wael
    AbdelHafeez, Mohamed M.
    Ali, Asmaa H.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [3] FairAD-XAI: Evaluation Framework for Explainable AI Methods in Alzheimer's Disease Detection with Fairness-in-the-loop
    Quoc-Toan Nguyen
    Linh Le
    Xuan-The Tran
    Do, Thomas
    Lin, Chin-Teng
    COMPANION OF THE 2024 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING, UBICOMP COMPANION 2024, 2024, : 870 - 876
  • [4] Evaluating the Performance of Three Classification Methods in Diagnosis of Parkinson's Disease
    Mostafa, Salama A.
    Mustapha, Aida
    Khaleefah, Shihab Hamad
    Ahmad, Mohd Sharifuddin
    Mohammed, Mazin Abed
    RECENT ADVANCES ON SOFT COMPUTING AND DATA MINING (SCDM 2018), 2018, 700 : 43 - 52
  • [5] Advancements in computer-assisted diagnosis of Alzheimer's disease: A comprehensive survey of neuroimaging methods and AI techniques for early detection
    Shanmugavadivel, Kogilavani
    Sathishkumar, V. E.
    Cho, Jaehyuk
    Subramanian, Malliga
    AGEING RESEARCH REVIEWS, 2023, 91