Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review

Cited by: 33
Authors
Ladbury, Colton [1]
Zarinshenas, Reza [1]
Semwal, Hemal [2,3]
Tam, Andrew [1]
Vaidehi, Nagarajan [4]
Rodin, Andrei S. [4]
Liu, An [1]
Glaser, Scott [1]
Salgia, Ravi [5]
Amini, Arya [1,6]
Affiliations
[1] City Hope Natl Med Ctr, Dept Radiat Oncol, Duarte, CA USA
[2] Univ Calif Los Angeles, Dept Bioengn, Los Angeles, CA USA
[3] Univ Calif Los Angeles, Dept Integrated Biol & Physiol, Los Angeles, CA USA
[4] City Hope Natl Med Ctr, Dept Computat & Quantitat Med, Duarte, CA USA
[5] City Hope Natl Med Ctr, Dept Med Oncol, Duarte, CA USA
[6] City Hope Natl Med Ctr, Dept Radiat Oncol, 1500 Duarte Rd, Duarte, CA 91010 USA
Keywords
Explainable artificial intelligence (XAI); Local Interpretable Model-agnostic Explanations (LIME); machine learning (ML); SHapley Additive exPlanations (SHAP); MACHINE LEARNING-MODELS; OPEN-LABEL; CANCER; RISK; RADIOTHERAPY; RADIOMICS; RADIATION; DIAGNOSIS; SYSTEM;
DOI
10.21037/tcr-22-1626
Chinese Library Classification (CLC)
R73 [Oncology]
Subject classification code
100214
Abstract
Background and Objective: Machine learning (ML) models are increasingly being developed in oncology research for use in the clinic. However, while more complex models may improve predictive or prognostic power, a hurdle to their adoption is limited model interpretability: their inner workings can be perceived as a "black box". Explainable artificial intelligence (XAI) frameworks, including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are novel, model-agnostic approaches that aim to provide insight into the "black box" by producing quantitative visualizations of how model predictions are calculated. In doing so, XAI can transform complicated ML models into easily understandable charts and interpretable sets of rules, giving providers an intuitive understanding of the knowledge generated and thereby facilitating the deployment of such models in routine clinical workflows.

Methods: We performed a comprehensive, non-systematic review of the latest literature to define use cases of model-agnostic XAI frameworks in oncologic research. The examined database was PubMed/MEDLINE. The last search was run on May 1, 2022.

Key Content and Findings: We identified several fields of oncology research in which ML models and XAI have been used to improve interpretability, including prognostication, diagnosis, radiomics, pathology, treatment selection, radiation treatment workflows, and epidemiology. Within these fields, XAI facilitates, among other benefits, determination of feature importance in the overall model, visualization of relationships and/or interactions, evaluation of how individual predictions are produced, feature selection, identification of prognostic and/or predictive thresholds, and overall confidence in the models. These examples provide a basis for future work to expand on, which can facilitate clinical adoption where the complexity of such modeling would otherwise be prohibitive.

Conclusions: Model-agnostic XAI frameworks offer an intuitive and effective means of describing oncology ML models, with applications including prognostication and determination of optimal treatment regimens. Using such frameworks presents an opportunity to improve understanding of ML models, a critical step toward their adoption in the clinic.
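The two frameworks named in the abstract, SHAP and LIME, both explain a trained model through its prediction function alone, which is what makes them model-agnostic. As a concrete illustration, the sketch below shows a typical invocation of each on a tabular classifier. This is a minimal sketch, not drawn from the reviewed studies: the synthetic dataset, the clinical-sounding feature names, and the class labels are hypothetical, and the code assumes the shap, lime, and scikit-learn packages.

```python
# Minimal sketch of model-agnostic explanations with SHAP and LIME.
# The dataset and feature names are synthetic placeholders, not real
# clinical data from the reviewed studies.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a tabular oncology dataset (hypothetical features).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["age", "tumor_size_mm", "grade", "biomarker"])
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# --- SHAP: Shapley-value attributions for every prediction ---
explainer = shap.TreeExplainer(model)   # exact values for tree ensembles
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global view: mean |SHAP| per feature approximates overall importance.
shap.summary_plot(shap_values, X)

# Local view: how each feature pushes one patient's prediction away
# from the base (expected) value.
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :],
                matplotlib=True)

# --- LIME: local surrogate model fit around one prediction ---
lime_explainer = LimeTabularExplainer(
    training_data=X.values,
    feature_names=list(X.columns),
    class_names=["good prognosis", "poor prognosis"],  # illustrative labels
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=4
)
print(lime_exp.as_list())  # [(feature condition, local weight), ...]
```

The SHAP summary plot corresponds to the global feature-importance view the review describes, while the force plot and the LIME weight list are per-patient (local) explanations; the usual trade-off is SHAP's consistency guarantees for tree models versus LIME's cheaper local surrogate fit.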
Pages: 3853-3868
Page count: 16
Related papers (50 total)
• [1] Moradi, Milad; Yan, Ke; Colwell, David; Samwald, Matthias; Asgari, Rhona. Model-agnostic explainable artificial intelligence for object detection in image data. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 137.
• [2] Sikder, Md Nazmul Kabir; Batarseh, Feras A.; Wang, Pei; Gorentala, Nitish. Model-Agnostic Scoring Methods for Artificial Intelligence Assurance. 2022 IEEE 29TH ANNUAL SOFTWARE TECHNOLOGY CONFERENCE (STC 2022), 2022: 9-18.
• [3] Bertsimas, Dimitris; Margonis, Georgios Antonios. Explainable vs. interpretable artificial intelligence frameworks in oncology. TRANSLATIONAL CANCER RESEARCH, 2023, 12 (02): 217-220.
• [4] Nambiar, Athira; Harikrishnaa, S.; Sharanprasath, S. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data. FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6.
• [5] Hajiyan, Hooria; Ebrahimi, Mehran. Multi-scale Local Explanation Approach for Image Analysis Using Model-Agnostic Explainable Artificial Intelligence (XAI). MEDICAL IMAGING 2023, 2023, 12471.
• [6] Knoche, Martin; Teepe, Torben; Hoermann, Stefan; Rigoll, Gerhard. Explainable Model-Agnostic Similarity and Confidence in Face Verification. 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW), 2023: 711-718.
• [7] Moskalenko, V. V. Model-Agnostic Meta-Learning for Resilience Optimization of Artificial Intelligence System. RADIO ELECTRONICS COMPUTER SCIENCE CONTROL, 2023, (02): 79-90.
• [8] Barbalau, Antonio; Cosma, Adrian; Ionescu, Radu Tudor; Popescu, Marius. A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458: 190-205.
• [9] Laios, Alexandros; De Jong, Diederick; Kalampokis, Evangelos. Beauty is in the explainable artificial intelligence (XAI) of the "agnostic" beholder. TRANSLATIONAL CANCER RESEARCH, 2023, 12 (02): 226-229.
• [10] Baker, Clayton R.; Pease, Matthew; Sexton, Daniel P.; Abumoussa, Andrew; Chambless, Lola B. Artificial intelligence innovations in neurosurgical oncology: a narrative review. JOURNAL OF NEURO-ONCOLOGY, 2024, 169 (03): 489-496.