INFORMER - Interpretability Founded Monitoring of Medical Image Deep Learning Models

Cited by: 0
Authors
Shu, Shelley Zixin [1 ]
de Mortanges, Aurelie Pahud [1 ]
Poellinger, Alexander [2 ,3 ]
Mahapatra, Dwarikanath [4 ]
Reyes, Mauricio [1 ,5 ,6 ]
Affiliations
[1] Univ Bern, ARTORG Ctr Biomed Engn Res, Murtenstr 50, CH-3008 Bern, Switzerland
[2] Bern Univ Hosp, Inselspital, CH-3010 Bern, Switzerland
[3] Insel Grp Bern Univ Inst Diagnost Intervent & Pad, Bern, Switzerland
[4] Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
[5] Bern Univ Hosp, Dept Radiat Oncol, Inselspital, Bern, Switzerland
[6] Univ Bern, Bern, Switzerland
Funding
Academy of Finland; Swiss National Science Foundation;
Keywords
Interpretability; Quality Control; Multi-label Classification; Medical Images; Deep Learning; Segmentation;
DOI
10.1007/978-3-031-73158-7_20
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning models have gained significant attention due to their promising performance on medical image tasks. However, a gap remains between experimental accuracy and real-world applications. The inherent black-box nature of deep learning models introduces uncertainty, trustworthiness issues, and difficulties in performing quality control of deployed deep learning models. While quality control methods focusing on uncertainty estimation exist for segmentation tasks, there are comparatively fewer approaches for classification, particularly for multi-label datasets. This paper addresses this gap by proposing a quality control method that bridges interpretability and uncertainty estimation through a graph-based class distinctiveness calculation. On the CheXpert dataset, the proposed approach achieved a higher F1 score on the bootstrapped test set than baseline quality control approaches based on predictive entropy and test-time augmentation.
Pages: 215 - 224
Page count: 10
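The abstract compares the proposed approach against quality-control baselines based on predictive entropy and test-time augmentation. Below is a minimal sketch of what a predictive-entropy baseline for multi-label classification (e.g., CheXpert-style sigmoid outputs) could look like; it is not the INFORMER method described in the paper, and the function names and threshold are illustrative assumptions.

```python
# Sketch of a predictive-entropy quality-control baseline for multi-label
# classification (assumption: per-label sigmoid probabilities are available).
import numpy as np

def multilabel_predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Mean binary entropy across labels for each sample.

    probs: array of shape (n_samples, n_labels) with sigmoid outputs in (0, 1).
    Returns an array of shape (n_samples,); higher values indicate more
    uncertain predictions that may warrant review.
    """
    p = np.clip(probs, 1e-7, 1 - 1e-7)                      # avoid log(0)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))    # per-label binary entropy
    return entropy.mean(axis=1)                             # average over labels

def flag_uncertain_predictions(probs: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Flag samples whose mean predictive entropy exceeds an illustrative threshold."""
    return multilabel_predictive_entropy(probs) > threshold

if __name__ == "__main__":
    # Example: three samples with five labels each (hypothetical values).
    probs = np.array([
        [0.95, 0.02, 0.01, 0.97, 0.03],  # confident prediction
        [0.55, 0.48, 0.52, 0.45, 0.50],  # highly uncertain prediction
        [0.80, 0.10, 0.20, 0.70, 0.15],  # moderately confident prediction
    ])
    print(multilabel_predictive_entropy(probs))
    print(flag_uncertain_predictions(probs))
```

With natural logarithms the per-label binary entropy peaks at ln(2) ≈ 0.693, so a flagging threshold such as 0.6 corresponds to predictions whose label probabilities sit close to 0.5 on average.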