Explainable Artificial Intelligence in Alzheimer’s Disease Classification: A Systematic Review

Cited by: 0
Authors
Vimbi Viswan
Noushath Shaffi
Mufti Mahmud
Karthikeyan Subramanian
Faizal Hajamohideen
Affiliations
[1] University of Technology and Applied Sciences, College of Computing and Information Sciences
[2] Nottingham Trent University, Department of Computer Science
[3] Nottingham Trent University, Medical Technologies Innovation Facility
[4] Nottingham Trent University, Computing and Informatics Research Centre
Source
Cognitive Computation | 2024, Vol. 16
Keywords
Alzheimer’s Disease Classification; Ante-hoc; Blackbox Models; Explainable Artificial Intelligence; Interpretable Machine Learning; Model-Agnostic; Model-Specific; Post-hoc; XAI
DOI
Not available
Abstract
The unprecedented growth of computational capabilities in recent years has allowed Artificial Intelligence (AI) models to be developed for medical applications with remarkable results. However, a large number of Computer Aided Diagnosis (CAD) methods powered by AI have seen limited acceptance and adoption in the medical domain because these AI models are typically black boxes. Therefore, to facilitate their adoption among medical practitioners, the models' predictions must be explainable and interpretable. The emerging field of explainable AI (XAI) aims to establish the trustworthiness of these predictions. This work presents a systematic review of the literature on Alzheimer's disease (AD) detection using XAI published during the last decade. Research questions were carefully formulated to categorise AI models into different conceptual approaches (e.g., Post-hoc, Ante-hoc, Model-Agnostic, Model-Specific, Global, Local) and frameworks (Local Interpretable Model-Agnostic Explanations or LIME, SHapley Additive exPlanations or SHAP, Gradient-weighted Class Activation Mapping or GradCAM, Layer-wise Relevance Propagation or LRP, etc.) of XAI. This categorisation spans the interpretation spectrum from intrinsically interpretable models (e.g., Model-Specific, Ante-hoc) to explanations of complex blackbox models (e.g., Model-Agnostic, Post-hoc), and from local explanations to a global scope. Additionally, the different forms of interpretation that provide in-depth insight into the factors supporting the clinical diagnosis of AD are discussed. Finally, the limitations, needs and open challenges of XAI research are outlined, together with prospects for its use in AD detection.
Pages: 1-44
Number of pages: 43
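
The abstract groups XAI approaches along the Post-hoc/Ante-hoc, Model-Agnostic/Model-Specific and Local/Global axes and names concrete frameworks such as LIME, SHAP, GradCAM and LRP. As a purely illustrative sketch (not code from the reviewed studies), the Python snippet below shows what a post-hoc SHAP explanation of a hypothetical AD classifier on synthetic tabular data could look like; the feature names and data are assumptions introduced only for illustration.

```python
# Illustrative sketch only: post-hoc SHAP attributions for a hypothetical
# Alzheimer's disease (AD) classifier. The features and data below are
# synthetic placeholders, not material from the reviewed studies.
import numpy as np
import shap                                    # SHapley Additive exPlanations
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["hippocampal_volume", "mmse_score", "age", "apoe4_alleles"]  # hypothetical
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic AD / non-AD labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer is SHAP's model-specific explainer for tree ensembles;
# shap.KernelExplainer would be the model-agnostic counterpart.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # local, per-sample feature attributions
print(np.shape(shap_values))                   # output layout varies with the SHAP version

# Averaging the absolute attributions over all samples lifts these local
# explanations to a global ranking of feature importance.
```

Averaging per-sample attributions in this way is one concrete route from local to global explanations, the same spectrum the review uses to organise the literature; image-based pipelines instead tend to rely on saliency-style frameworks such as GradCAM or LRP applied to convolutional networks.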