Towards better interpretable and generalizable AD detection using collective artificial intelligence

Cited by: 3
Authors
Nguyen, Huy-Dung [1 ]
Clement, Michael [1 ]
Mansencal, Boris [1 ]
Coupe, Pierrick [1 ]
Institutions
[1] Univ Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400 Talence, France
Funding
Canadian Institutes of Health Research; US National Institutes of Health;
Keywords
Deep grading; Collective artificial intelligence; Generalization; Alzheimer's disease classification; Mild cognitive impairment; Graph convolutional network; MILD COGNITIVE IMPAIRMENT; ALZHEIMERS-DISEASE; STRUCTURAL MRI; NEURAL-NETWORK; DEMENTIA; DIAGNOSIS; IMAGES; VOLUME; CLASSIFICATION; INDIVIDUALS;
DOI
10.1016/j.compmedimag.2022.102171
Chinese Library Classification
R318 [Biomedical Engineering]
Subject classification code
0831
Abstract
Alzheimer's Disease is the most common cause of dementia. Accurate diagnosis and prognosis of this disease are essential to design an appropriate treatment plan and increase the patient's life expectancy. Intense research has been conducted on the use of machine learning to identify Alzheimer's Disease from neuroimaging data, such as structural magnetic resonance imaging. In recent years, advances in deep learning for computer vision have suggested a new research direction for this problem. Current deep learning-based approaches in this field, however, have a number of drawbacks, including limited interpretability of model decisions, a lack of generalizability information, and lower performance compared to traditional machine learning techniques. In this paper, we design a two-stage framework to overcome these limitations. In the first stage, an ensemble of 125 U-Nets grades the input image, producing a 3D map that reflects disease severity at the voxel level. This map can help localize abnormal brain areas caused by the disease. In the second stage, we model a graph per individual using the generated grading map and other information about the subject, and we propose a graph convolutional neural network classifier for the final classification. As a result, our framework demonstrates performance comparable to state-of-the-art methods on different datasets for both diagnosis and prognosis. We also demonstrate that the use of a large ensemble of U-Nets offers better generalization capacity for our framework.
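The two-stage pipeline described in the abstract (voxel-level grading by a U-Net ensemble, then graph-based classification with a graph convolutional network) can be sketched minimally. The function names and the NumPy implementation below are illustrative assumptions, not the authors' released code: the first function averages per-model grading maps into one severity map, and the second applies a single symmetrically normalized graph-convolution step of the kind used in standard GCN classifiers.

```python
import numpy as np

def ensemble_grade(maps):
    """Average a list of 3D grading maps (one per U-Net in the ensemble)
    into a single voxel-level disease-severity map."""
    return np.mean(np.stack(maps), axis=0)

def gcn_layer(A, H, W):
    """One graph-convolution step on node features H with weights W:
    add self-loops to adjacency A, normalize symmetrically, then
    propagate features and apply a ReLU non-linearity."""
    A_hat = A + np.eye(A.shape[0])                      # self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)
```

In this sketch, each subject would contribute a graph whose node features are derived from the grading map (e.g., regional averages) plus other subject information, with `gcn_layer` stacked and followed by a classifier head.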
Pages: 15