A probabilistic approach for interpretable deep learning in liver cancer diagnosis

Cited by: 5
Authors
Wang, Clinton J. [1 ]
Hamm, Charlie A. [1 ,2 ,3 ,4 ,5 ]
Letzen, Brian S. [1 ]
Duncan, James S. [1 ,6 ]
Affiliations
[1] Yale Sch Med, Dept Radiol & Biomed Imaging, 333 Cedar St, New Haven, CT 06520 USA
[2] Charite Univ Med Berlin, Inst Radiol, D-10117 Berlin, Germany
[3] Free Univ Berlin, D-10117 Berlin, Germany
[4] Humboldt Univ, D-10117 Berlin, Germany
[5] Berlin Inst Hlth, D-10117 Berlin, Germany
[6] Yale Sch Engn & Appl Sci, Dept Biomed Engn, New Haven, CT 06520 USA
Funding
US National Institutes of Health
Keywords
interpretable; liver cancer; deep learning; convolutional neural network; diagnostic radiology;
DOI
10.1117/12.2512473
CLC number
R318 [Biomedical Engineering]
Subject classification code
0831
Abstract
Despite rapid advances in deep learning applications for radiological diagnosis and prognosis, the clinical adoption of such models is limited by their inability to explain or justify their predictions. This work developed a probabilistic approach for interpreting the predictions of a convolutional neural network (CNN) trained to classify liver lesions from multiphase magnetic resonance imaging (MRI). It determined the presence of 14 radiological features, where each lesion image contained one to four features and only ten examples of each feature were provided. Using stochastic forward passes of these example images through a trained CNN, samples were obtained from each feature's conditional probability distribution over the network's intermediate outputs. The marginal distribution was sampled with stochastic forward passes of images from the entire training dataset, and sparse kernel density estimation (KDE) was used to infer which features were present in a test set of 60 lesion images. This approach was tested on a CNN that reached 89.7% accuracy in classifying six types of liver lesions. It identified radiological features with 72.2 +/- 2.2% precision and 82.6 +/- 2.0% recall. In contrast with previous interpretability approaches, this method used sparsely labeled data, did not change the CNN architecture, and directly output radiological descriptors of each image. This approach can identify and explain potential failure modes in a CNN, as well as make a CNN's predictions more transparent to radiologists. Such contributions could facilitate the clinical translation of deep learning in a wide range of diagnostic and prognostic applications.
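The pipeline the abstract describes — stochastic forward passes of a few feature-example images to sample each feature's conditional distribution over intermediate CNN outputs, stochastic passes over the whole training set to sample the marginal, and KDE-based likelihood ratios to decide which features are present in a test image — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the trained CNN with dropout is replaced by a stochastic stub, the KDE is a plain Gaussian kernel rather than the paper's sparse KDE, and the two feature names and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(image, n_passes=30):
    """Stand-in for MC-dropout passes through a trained CNN: returns
    n_passes samples of an intermediate-layer output for one image."""
    return image + rng.normal(scale=0.3, size=(n_passes, image.shape[0]))

def gaussian_kde_logpdf(samples, x, bandwidth=0.5):
    """Log density of point x under a Gaussian KDE fit to `samples`."""
    d = samples.shape[1]
    diffs = (x - samples) / bandwidth
    log_k = (-0.5 * np.sum(diffs**2, axis=1)
             - 0.5 * d * np.log(2 * np.pi) - d * np.log(bandwidth))
    return np.logaddexp.reduce(log_k) - np.log(len(samples))

# Toy "example images" for two radiological features: each feature
# shifts the (4-dimensional) intermediate embedding to a different mean.
feature_examples = {
    "arterial_enhancement": np.zeros(4),
    "washout": np.full(4, 2.0),
}

# Conditional samples per feature, from stochastic passes over examples.
cond_samples = {f: stochastic_forward(img, 100)
                for f, img in feature_examples.items()}
# Marginal samples, from stochastic passes over the pooled training set.
marginal = np.vstack(list(cond_samples.values()))

def present_features(test_image, threshold=0.0):
    """Report a feature as present when its conditional log-density at the
    test embedding exceeds the marginal log-density (log-ratio > threshold)."""
    z = stochastic_forward(test_image, 1).mean(axis=0)
    log_marg = gaussian_kde_logpdf(marginal, z)
    return [f for f, s in cond_samples.items()
            if gaussian_kde_logpdf(s, z) - log_marg > threshold]

print(present_features(np.full(4, 2.0)))  # prints ['washout']
```

The log-likelihood-ratio test against the marginal is what lets the method work with only ten sparsely labeled examples per feature: no classifier is trained on the intermediate outputs, and the CNN itself is left unmodified.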
Pages: 9
Related papers
50 records total
  • [1] Interpretable Deep Learning for Probabilistic MJO Prediction
    Delaunay, Antoine
    Christensen, Hannah M.
    GEOPHYSICAL RESEARCH LETTERS, 2022, 49 (16)
  • [2] Monkeypox Diagnosis With Interpretable Deep Learning
    Ahsan, Md. Manjurul
    Ali, Md. Shahin
    Hassan, Md. Mehedi
    Abdullah, Tareque Abu
    Gupta, Kishor Datta
    Bagci, Ulas
    Kaushal, Chetna
    Soliman, Naglaa F.
    IEEE ACCESS, 2023, 11 : 81965 - 81980
  • [3] SmartSkin-XAI: An Interpretable Deep Learning Approach for Enhanced Skin Cancer Diagnosis in Smart Healthcare
    Hamim, Sultanul Arifeen
    Tamim, Mubasshar U. I.
    Mridha, M. F.
    Safran, Mejdl
    Che, Dunren
    DIAGNOSTICS, 2025, 15 (01)
  • [4] A Novel Bio-Inspired Deep Learning Approach for Liver Cancer Diagnosis
    Ghoniem, Rania M.
    INFORMATION, 2020, 11 (02)
  • [5] Interpretable Probabilistic Password Strength Meters via Deep Learning
    Pasquini, Dario
    Ateniese, Giuseppe
    Bernaschi, Massimo
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 502 - 522
  • [6] Deep learning approach for breast cancer diagnosis
    Rashed, Essam
    Abou El Seoud, M. Samir
    PROCEEDINGS OF 2019 8TH INTERNATIONAL CONFERENCE ON SOFTWARE AND INFORMATION ENGINEERING (ICSIE 2019), 2019, : 243 - 247
  • [7] Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis
    Liao, WangMin
    Zou, BeiJi
    Zhao, RongChang
    Chen, YuanQiong
    He, ZhiYou
    Zhou, MengJie
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2020, 24 (05) : 1405 - 1412
  • [8] Towards an interpretable deep learning model of cancer
    Nilsson, Avlant
    Meimetis, Nikolaos
    Lauffenburger, Douglas A.
    NPJ PRECISION ONCOLOGY, 2025, 9 (01)
  • [9] Pathologist-level interpretable whole-slide cancer diagnosis with deep learning
    Zhang, Zizhao
    Chen, Pingjun
    McGough, Mason
    Xing, Fuyong
    Wang, Chunbao
    Bui, Marilyn
    Xie, Yuanpu
    Sapkota, Manish
    Cui, Lei
    Dhillon, Jasreman
    Ahmad, Nazeel
    Khalil, Farah K.
    Dickinson, Shohreh I.
    Shi, Xiaoshuang
    Liu, Fujun
    Su, Hai
    Cai, Jinzheng
    Yang, Lin
    NATURE MACHINE INTELLIGENCE, 2019, 1 (05) : 236 - 245