Explainable deep learning in healthcare: A methodological survey from an attribution view

Cited by: 21
Authors
Jin, Di [1 ]
Sergeeva, Elena [1 ]
Weng, Wei-Hung [1 ]
Chauhan, Geeticka [1 ]
Szolovits, Peter [1 ]
Affiliations
[1] MIT, Comp Sci & Artificial Intelligence Lab, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Source
WIRES MECHANISMS OF DISEASE | 2022, Vol. 14, Issue 3
Keywords
deep learning in medicine; interpretable deep learning; ADVERSARIAL ATTACKS; NEURAL-NETWORK; MACHINE; CANCER; CLASSIFICATIONS; ALGORITHM; MODELS; AI;
DOI
10.1002/wsbm.1548
Chinese Library Classification (CLC)
R-3 [Medical research methods]; R3 [Basic medicine]
Discipline classification code
1001
Abstract
The increasing availability of large collections of electronic health record (EHR) data and unprecedented technical advances in deep learning (DL) have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognized value of deep learning in healthcare, impediments to further adoption in real healthcare settings remain due to the black-box nature of DL. Therefore, there is an emerging need for interpretable DL, which allows end users to evaluate a model's decision making and decide whether to accept or reject its predictions and recommendations before an action is taken. In this review, we focus on the interpretability of DL models in healthcare. We start by introducing interpretability methods in depth and comprehensively, as a methodological reference for future researchers and clinical practitioners in this field. Beyond the methods' details, we also discuss their advantages and disadvantages and the scenarios each is suited to, so that interested readers know how to compare them and choose among them. Moreover, we discuss how these methods, originally developed for general-domain problems, have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies. Overall, we hope this survey helps researchers and practitioners in both artificial intelligence and clinical fields understand what methods are available for enhancing the interpretability of their DL models and choose the optimal one accordingly. This article is categorized under: Cancer > Computational Models
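As a concrete illustration of the attribution view taken by this survey, the sketch below shows plain gradient saliency, one of the simplest attribution techniques: each input feature is scored by the absolute gradient of the model's output with respect to that feature. The model, the 20-feature input, and the data are hypothetical placeholders chosen only for this example; they are not drawn from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a clinical risk model: maps a 20-dimensional
# vector of EHR-derived features to a single risk logit. Any differentiable
# model could be substituted here.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
model.eval()

def gradient_saliency(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Score each input feature by |d(output) / d(feature)|."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)   # single-element output, so backward() needs no arguments
    score.backward()   # fills x.grad with d(score)/d(x)
    return x.grad.abs().detach()

patient = torch.randn(20)   # synthetic feature vector, for illustration only
attributions = gradient_saliency(model, patient)
top = torch.topk(attributions, k=5)
print("Most influential feature indices:", top.indices.tolist())
print("Attribution scores:", [round(v, 4) for v in top.values.tolist()])
```

More elaborate attribution methods covered by surveys of this kind (for example, Integrated Gradients or Shapley-value-based approaches) refine this basic recipe, but the interface stays the same: an input goes in, a per-feature importance score comes out.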
Pages: 25
Related papers
50 in total
  • [1] Explainable, trustworthy, and ethical machine learning for healthcare: A survey
    Rasheed, Khansa
    Qayyum, Adnan
    Ghaly, Mohammed
    Al-Fuqaha, Ala
    Razi, Adeel
    Qadir, Junaid
    [J]. COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 149
  • [2] Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning
    Wang, Hanjing
    Joshi, Dhiraj
    Wang, Shiqiang
    Ji, Qiang
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 12044 - 12053
  • [3] From explainable to interpretable deep learning for natural language processing in healthcare: How far from reality?
    Huang, Guangming
    Li, Yingya
    Jameel, Shoaib
    Long, Yunfei
    Papanastasiou, Giorgos
    [J]. COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2024, 24 : 362 - 373
  • [4] Explainable Deep Learning Methods in Medical Image Classification: A Survey
    Patricio, Cristiano
    Neves, Joao C.
    Teixeira, Luis F.
    [J]. ACM COMPUTING SURVEYS, 2024, 56 (04)
  • [5] Unraveling the Black Box: A Review of Explainable Deep Learning Healthcare Techniques
    Murad, Nafeesa Yousuf
    Hasan, Mohd Hilmi
    Azam, Muhammad Hamza
    Yousuf, Nadia
    Yalli, Jameel Shehu
    [J]. IEEE ACCESS, 2024, 12 : 66556 - 66568
  • [6] A perceptual view of attribution: Theoretical and methodological implications
    Lowe, C. A.
    Kassin, S. M.
    [J]. PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN, 1980, 6 (04) : 532 - 542
  • [7] Survey of Explainable AI Techniques in Healthcare
    Chaddad, Ahmad
    Peng, Jihao
    Xu, Jian
    Bouridane, Ahmed
    [J]. SENSORS, 2023, 23 (02)
  • [8] A Survey on Deep Learning Techniques for Predictive Analytics in Healthcare
    Badawy, Mohammed
    Ramadan, Nagy
    Hefny, Hesham Ahmed
    [J]. SN COMPUTER SCIENCE, 5 (7)
  • [9] Toward Explainable Deep Learning
    Balasubramanian, Vineeth N.
    [J]. COMMUNICATIONS OF THE ACM, 2022, 65 (11) : 68 - 69
  • [10] Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments
    Bai, Xiao
    Wang, Xiang
    Liu, Xianglong
    Liu, Qiang
    Song, Jingkuan
    Sebe, Nicu
    Kim, Been
    [J]. PATTERN RECOGNITION, 2021, 120