A probabilistic approach for interpretable deep learning in liver cancer diagnosis

Cited: 5
Authors
Wang, Clinton J. [1 ]
Hamm, Charlie A. [1 ,2 ,3 ,4 ,5 ]
Letzen, Brian S. [1 ]
Duncan, James S. [1 ,6 ]
Affiliations
[1] Yale Sch Med, Dept Radiol & Biomed Imaging, 333 Cedar St, New Haven, CT 06520 USA
[2] Charite Univ Med Berlin, Inst Radiol, D-10117 Berlin, Germany
[3] Free Univ Berlin, D-10117 Berlin, Germany
[4] Humboldt Univ, D-10117 Berlin, Germany
[5] Berlin Inst Hlth, D-10117 Berlin, Germany
[6] Yale Sch Engn & Appl Sci, Dept Biomed Engn, New Haven, CT 06520 USA
Source
MEDICAL IMAGING 2019: COMPUTER-AIDED DIAGNOSIS | 2019 / Vol. 10950
Funding
U.S. National Institutes of Health (NIH);
Keywords
interpretable; liver cancer; deep learning; convolutional neural network; diagnostic radiology;
DOI
10.1117/12.2512473
CLC Number
R318 [Biomedical Engineering];
Discipline Classification Code
0831;
Abstract
Despite rapid advances in deep learning applications for radiological diagnosis and prognosis, the clinical adoption of such models is limited by their inability to explain or justify their predictions. This work developed a probabilistic approach for interpreting the predictions of a convolutional neural network (CNN) trained to classify liver lesions from multiphase magnetic resonance imaging (MRI). It determined the presence of 14 radiological features, where each lesion image contained one to four features and only ten examples of each feature were provided. Using stochastic forward passes of these example images through a trained CNN, samples were obtained from each feature's conditional probability distribution over the network's intermediate outputs. The marginal distribution was sampled with stochastic forward passes of images from the entire training dataset, and sparse kernel density estimation (KDE) was used to infer which features were present in a test set of 60 lesion images. This approach was tested on a CNN that reached 89.7% accuracy in classifying six types of liver lesions. It identified radiological features with 72.2 ± 2.2% precision and 82.6 ± 2.0% recall. In contrast with previous interpretability approaches, this method used sparsely labeled data, did not change the CNN architecture, and directly output radiological descriptors of each image. This approach can identify and explain potential failure modes in a CNN, as well as make a CNN's predictions more transparent to radiologists. Such contributions could facilitate the clinical translation of deep learning in a wide range of diagnostic and prognostic applications.
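The abstract describes a density-based inference step: stochastic forward passes yield samples of the CNN's intermediate outputs, a conditional KDE is fit per radiological feature from its few labeled examples, a marginal KDE is fit from the whole training set, and features are inferred for a test image by comparing the two. The Python sketch below illustrates that idea only under loose assumptions: the names stochastic_forward, feature_examples, and present_features are hypothetical, random vectors stand in for real images and CNN activations, scikit-learn's KernelDensity replaces the paper's sparse KDE, and the conditional-versus-marginal log-density comparison is one plausible reading of the abstract rather than the authors' exact criterion.

import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
LATENT_DIM = 16   # dimensionality of the CNN intermediate output (assumed)
N_PASSES = 30     # stochastic forward passes per image (e.g. Monte Carlo dropout)

def stochastic_forward(image, n_passes=N_PASSES):
    # Stand-in for running one image through the trained CNN with dropout
    # active: returns n_passes noisy samples of the intermediate output.
    base = np.tanh(np.resize(np.asarray(image, dtype=float), LATENT_DIM))
    return base + 0.3 * rng.normal(size=(n_passes, LATENT_DIM))

# Ten labeled example images per radiological feature (random stand-ins here).
feature_examples = {f"feature_{k}": [rng.normal(size=64) for _ in range(10)]
                    for k in range(3)}
training_images = [rng.normal(size=64) for _ in range(100)]

# Conditional density p(z | feature): KDE fit on samples from that feature's examples.
conditional_kde = {}
for name, images in feature_examples.items():
    samples = np.vstack([stochastic_forward(img) for img in images])
    conditional_kde[name] = KernelDensity(bandwidth=0.5).fit(samples)

# Marginal density p(z): KDE fit on samples from the whole training set.
marginal_samples = np.vstack([stochastic_forward(img) for img in training_images])
marginal_kde = KernelDensity(bandwidth=0.5).fit(marginal_samples)

def present_features(test_image, threshold=0.0):
    # A feature is reported present when its conditional log-density exceeds
    # the marginal log-density on average over the test image's samples.
    z = stochastic_forward(test_image)
    scores = {name: float(np.mean(kde.score_samples(z) - marginal_kde.score_samples(z)))
              for name, kde in conditional_kde.items()}
    return sorted(name for name, s in scores.items() if s > threshold)

print(present_features(rng.normal(size=64)))

Because this score is a log-likelihood ratio between the feature-conditional and marginal densities, it requires no change to the CNN architecture and no dense feature labels, matching the sparsely labeled, architecture-agnostic setting the abstract emphasizes.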
Pages: 9