AI explainability in oculomics: How it works, its role in establishing trust, and what still needs to be addressed

Cited: 0
Authors
An, Songyang [1,8]
Teo, Kelvin [2,3]
McConnell, Michael V. [4,8]
Marshall, John [5]
Galloway, Christopher [6]
Squirrell, David [7,8]
Affiliations
[1] Univ Auckland, Sch Optometry & Vis Sci, Auckland, New Zealand
[2] Singapore Eye Res Inst, Acad, 20 Coll Rd, Discovery Tower Level 6, Singapore 169856, Singapore
[3] Singapore Natl Univ, Singapore, Singapore
[4] Stanford Univ, Sch Med, Div Cardiovasc Med, Stanford, CA USA
[5] UCL, Inst Ophthalmol, 11-43 Bath St, London EC1V 9EL, England
[6] Massey Univ, Dept Business & Commun, East Precinct Albany Expressway, SH17, Albany, Auckland 0632, New Zealand
[7] Univ Sunshine Coast, Dept Ophthalmol, Sunshine Coast, Qld, Australia
[8] Toku Eyes Ltd NZ, 110 Carlton Gore Rd, Newmarket, Auckland 1023, New Zealand
Keywords
Retinal imaging; Artificial intelligence; Interpretable AI; Intrinsic interpretability; Post-hoc interpretability; Disease classification; Oculomics; ARTIFICIAL-INTELLIGENCE; CARDIOVASCULAR RISK; DIABETIC-RETINOPATHY; VALIDATION; PREDICTION; DECISIONS; DISEASE; MODELS
DOI
10.1016/j.preteyeres.2025.101352
Chinese Library Classification (CLC) number
R77 [Ophthalmology];
Subject classification code
100212;
Abstract
Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that can now predict a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models, which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. Because the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: Explainable AI and Interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are therefore what clinicians must rely on if they are to understand how an algorithm works and whether its predictions are reliable. The iAI tools that developers use fall into two broad categories: intrinsic methods, which improve transparency through architectural changes, and post-hoc methods, which explain trained models via external algorithms. Currently, post-hoc methods, class activation maps in particular, are far more widely used than other techniques, but they have limitations, especially when applied to oculomics AI models. Writing for clinicians, we examine how the key iAI methods work, what they are designed to do, and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and to reassure clinicians that the results issued are reliable.
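To illustrate what a post-hoc class activation map involves in practice, the sketch below implements a minimal Grad-CAM-style saliency map in PyTorch for a generic image classifier. The backbone (a ResNet-50 with placeholder weights), the choice of target layer, and the dummy input are illustrative assumptions only; they are not the authors' oculomics model or pipeline.

# Minimal Grad-CAM sketch (illustrative, not the authors' method).
# Requires torch and torchvision >= 0.13 for the weights= argument.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)  # placeholder backbone, untrained
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # Store the feature maps produced on the forward pass.
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    # Store the gradient of the class score w.r.t. those feature maps.
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block; the layer choice is a design decision.
target_layer = model.layer4
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return a 0-1 heatmap of the regions driving the model's prediction."""
    logits = model(image)                         # forward pass
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()   # explain the top prediction
    model.zero_grad()
    logits[0, class_idx].backward()               # gradients of the class score

    acts = activations["value"]                   # (1, C, H, W) feature maps
    grads = gradients["value"]                    # matching gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)          # pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted sum
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)      # upsample to input size
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise
    return cam.squeeze().cpu()

# Usage: a dummy input stands in for a preprocessed retinal photograph.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))

In practice the resulting heatmap is overlaid on the input fundus image; as the abstract notes, such maps indicate where the model attended but not which retinal feature drove the prediction, which is one of their key limitations in oculomics.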
Pages: 24