Exploring transparency: A comparative analysis of explainable artificial intelligence techniques in retinography images to support the diagnosis of glaucoma

Cited by: 0
Authors
Vieira, Cleverson [1 ]
Rocha, Leonardo [1 ]
Guimarães, Marcelo [2 ]
Dias, Diego [3 ]
Affiliations
[1] Computer Science Department/Federal University of São João del-Rei – UFSJ, São João del-Rei, MG, Brazil
[2] Federal University of São Paulo – UNIFESP, SP, Osasco, Brazil
[3] Statistics Department/Federal University of Espírito Santo – UFES, ES, Vitória, Brazil
Keywords
Adversarial machine learning; Clinical research; Contrastive learning; Ophthalmology
DOI
10.1016/j.compbiomed.2024.109556
Abstract
Machine learning models are widely applied across diverse fields, including nearly all segments of human activity. In healthcare, artificial intelligence techniques have revolutionized disease diagnosis, particularly in image classification. Although these models have achieved significant results, their lack of explainability has limited widespread adoption in clinical practice. In medical environments, understanding AI model decisions is essential not only for healthcare professionals’ trust but also for regulatory compliance, patient safety, and accountability in case of failures. Glaucoma, a neurodegenerative eye disease, can lead to irreversible blindness, making early detection crucial for preventing vision loss. Automated glaucoma detection has been a focus of intensive research in computer vision, with numerous studies proposing the use of convolutional neural networks (CNNs) to analyze retinal fundus images and diagnose the disease automatically. However, these models often lack the necessary explainability, which is essential for ophthalmologists to understand and justify their decisions to patients. This paper explores and applies explainable artificial intelligence (XAI) techniques to different CNN architectures for glaucoma classification, comparing which explanation technique offers the best interpretive resources for clinical diagnosis. We propose a new approach, SCIM (SHAP-CAM Interpretable Mapping), which has shown promising results. The experiments were conducted with an ophthalmology specialist who highlighted that CAM-based interpretability, applied to the VGG16 and VGG19 architectures, stands out as the most effective resource for promoting interpretability and supporting diagnosis. © 2024 Elsevier Ltd
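As a rough illustration of the CAM-based interpretability the abstract highlights as most effective for clinical diagnosis, the sketch below shows the core Grad-CAM weighting step in NumPy. This is a generic, minimal version of the technique (function name and toy data are illustrative), not a reproduction of the paper's SCIM method or its VGG16/VGG19 pipelines.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM core step: weight each convolutional feature map by the
    mean gradient of the class score w.r.t. that map, sum the weighted
    maps, then keep only positive evidence (ReLU) and normalize."""
    # feature_maps, gradients: (K, H, W) arrays taken from a CNN's
    # last convolutional layer during a forward/backward pass
    weights = gradients.mean(axis=(1, 2))  # alpha_k, one weight per map
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()  # scale heatmap to [0, 1]
    return cam

# Toy example: 4 random 7x7 feature maps and matching gradients,
# standing in for activations of a fundus-image classifier.
rng = np.random.default_rng(0)
maps = rng.random((4, 7, 7))
grads = rng.random((4, 7, 7))
heatmap = grad_cam(maps, grads)
```

In practice the resulting heatmap is upsampled to the input resolution and overlaid on the retinal fundus image, so an ophthalmologist can see which regions drove the glaucoma prediction.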