INFORMER - Interpretability Founded Monitoring of Medical Image Deep Learning Models

Cited by: 0
Authors
Shu, Shelley Zixin [1 ]
de Mortanges, Aurelie Pahud [1 ]
Poellinger, Alexander [2 ,3 ]
Mahapatra, Dwarikanath [4 ]
Reyes, Mauricio [1 ,5 ,6 ]
Affiliations
[1] Univ Bern, ARTORG Ctr Biomed Engn Res, Murtenstr 50, CH-3008 Bern, Switzerland
[2] Bern Univ Hosp, Inselspital, CH-3010 Bern, Switzerland
[3] Insel Grp Bern Univ Inst Diagnost Intervent & Pad, Bern, Switzerland
[4] Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
[5] Bern Univ Hosp, Dept Radiat Oncol, Inselspital, Bern, Switzerland
[6] Univ Bern, Bern, Switzerland
Funding
Academy of Finland; Swiss National Science Foundation
Keywords
Interpretability; Quality Control; Multi-label Classification; Medical Images; Deep Learning; Segmentation
DOI
10.1007/978-3-031-73158-7_20
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep learning models have gained significant attention due to their promising performance in medical image tasks. However, a gap remains between experimental accuracy and real-world applications. The inherent black-box nature of deep learning models introduces uncertainty, trustworthiness issues, and difficulties in performing quality control of deployed models. While quality control methods focusing on uncertainty estimation exist for segmentation tasks, there are comparatively fewer approaches for classification, particularly for multi-label datasets. This paper addresses this gap by proposing a quality control method that bridges interpretability and uncertainty estimation through a graph-based class distinctiveness calculation. On the CheXpert dataset, the proposed approach achieved a higher F1 score on the bootstrapped test set than baseline quality control approaches based on predictive entropy and test-time augmentation.
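The abstract describes the method only at a high level, so the sketch below illustrates just one of the baselines it names, predictive entropy for multi-label classification, rather than the paper's graph-based class distinctiveness approach. It is a minimal sketch in Python assuming sigmoid per-label probabilities; the function names, threshold, and example values are illustrative and not taken from the paper.

import numpy as np

# Mean binary (Bernoulli) entropy across labels for sigmoid outputs.
# probs has shape (n_samples, n_labels), with probs[i, k] = p(label k = 1 | image i).
def multilabel_predictive_entropy(probs, eps=1e-12):
    p = np.clip(probs, eps, 1.0 - eps)
    per_label = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return per_label.mean(axis=1)

# Flag predictions whose mean entropy exceeds a threshold; the threshold here is a
# hypothetical operating point that would normally be calibrated on validation data.
def flag_for_review(probs, threshold=0.5):
    return multilabel_predictive_entropy(probs) > threshold

# Toy example: three predictions over five labels (e.g., five CheXpert findings).
probs = np.array([
    [0.97, 0.02, 0.01, 0.95, 0.03],  # confident -> low entropy, not flagged
    [0.55, 0.48, 0.52, 0.49, 0.51],  # near 0.5 everywhere -> high entropy, flagged
    [0.90, 0.10, 0.85, 0.05, 0.92],  # moderately confident -> not flagged
])
print(multilabel_predictive_entropy(probs))  # approx [0.13, 0.69, 0.31]
print(flag_for_review(probs))                # [False, True, False]

In a deployment setting, samples flagged this way would be routed for human review; the abstract reports that the proposed graph-based class distinctiveness score outperforms this predictive-entropy baseline and test-time augmentation in F1 on the bootstrapped CheXpert test set.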
Pages: 215-224
Page count: 10
Related Papers
50 records in total
  • [21] Reproducibility of Training Deep Learning Models for Medical Image Analysis
    Bosma, Joeran Sander
    Peeters, Dré
    Alves, Natália
    Saha, Anindo
    Saghir, Zaigham
    Jacobs, Colin
    Huisman, Henkjan
    Proceedings of Machine Learning Research, 2023, 227 : 1269 - 1287
  • [22] Adversarial training and attribution methods enable evaluation of robustness and interpretability of deep learning models for image classification
    Santos, Flavio A. O.
    Zanchettin, Cleber
    Lei, Weihua
    Amaral, Luis A. Nunes
    PHYSICAL REVIEW E, 2024, 110 (05)
  • [23] Interpretability of deep learning models in analysis of Spanish financial text
    César Vaca
    Manuel Astorgano
    Alfonso J. López-Rivero
    Fernando Tejerina
    Benjamín Sahelices
    Neural Computing and Applications, 2024, 36 : 7509 - 7527
  • [24] Interpretability of deep learning models in analysis of Spanish financial text
    Vaca, Cesar
    Astorgano, Manuel
    Lopez-Rivero, Alfonso J.
    Tejerina, Fernando
    Sahelices, Benjamin
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (13): : 7509 - 7527
  • [25] Approach to provide interpretability in machine learning models for image classification
    Anja Stadlhofer
    Vitaliy Mezhuyev
    Industrial Artificial Intelligence, 1 (1):
  • [26] Rethinking Boundary Detection in Deep Learning Models for Medical Image Segmentation
    Lin, Yi
    Zhang, Dong
    Fang, Xiao
    Chen, Yufan
    Cheng, Kwang-Ting
    Chen, Hao
    INFORMATION PROCESSING IN MEDICAL IMAGING, IPMI 2023, 2023, 13939 : 730 - 742
  • [27] Gait Image Classification Using Deep Learning Models for Medical Diagnosis
    Vasudevan, Pavitra
    Mattins, R. Faerie
    Srivarshan, S.
    Narayanan, Ashvath
    Wadhwani, Gayatri
    Parvathi, R.
    Maheswari, R.
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 74 (03): : 6039 - 6063
  • [28] The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging
    Lakhani, Paras
    RADIOLOGY-ARTIFICIAL INTELLIGENCE, 2020, 2 (01)
  • [29] Deep Learning Models for Medical Image Analysis: Challenges and Future Directions
    Agrawal, R. K.
    Juneja, Akanksha
    BIG DATA ANALYTICS (BDA 2019), 2019, 11932 : 20 - 32
  • [30] Deep Learning Image Analysis of Nanoplasmonic Sensors: Toward Medical Breath Monitoring
    Zhao, Yangyang
    Dong, Boqun
    Benkstein, Kurt D.
    Chen, Lei
    Steffens, Kristen L.
    Semancik, Steve
    ACS APPLIED MATERIALS & INTERFACES, 2022, 14 (49) : 54411 - 54422