INFORMER - Interpretability Founded Monitoring of Medical Image Deep Learning Models

Cited by: 0
Authors
Shu, Shelley Zixin [1]
de Mortanges, Aurelie Pahud [1]
Poellinger, Alexander [2,3]
Mahapatra, Dwarikanath [4]
Reyes, Mauricio [1,5,6]
Affiliations
[1] Univ Bern, ARTORG Ctr Biomed Engn Res, Murtenstr 50, CH-3008 Bern, Switzerland
[2] Bern Univ Hosp, Inselspital, CH-3010 Bern, Switzerland
[3] Insel Grp Bern Univ Inst Diagnost Intervent & Pad, Bern, Switzerland
[4] Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
[5] Bern Univ Hosp, Dept Radiat Oncol, Inselspital, Bern, Switzerland
[6] Univ Bern, Bern, Switzerland
Funding
Academy of Finland; Swiss National Science Foundation
Keywords
Interpretability; Quality Control; Multi-label Classification; Medical Images; Deep Learning; Segmentation
DOI
10.1007/978-3-031-73158-7_20
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning models have gained significant attention due to their promising performance on medical image tasks. However, a gap remains between experimental accuracy and real-world applications. The inherent black-box nature of deep learning models introduces uncertainty, trustworthiness issues, and difficulties in performing quality control of deployed models. While quality control methods focusing on uncertainty estimation exist for segmentation tasks, comparatively few approaches target classification, particularly on multi-label datasets. This paper addresses that gap by proposing a quality control method that bridges interpretability and uncertainty estimation through a graph-based class distinctiveness calculation. On the CheXpert dataset, the proposed approach achieved a higher F1 score on the bootstrapped test set than baseline quality control approaches based on predictive entropy and test-time augmentation.
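The abstract names two baseline uncertainty measures (predictive entropy and test-time augmentation) and a graph-based class distinctiveness score, but this record gives no implementation details. The sketch below is therefore an illustration only, not the authors' method: the entropy and TTA baselines follow their standard textbook definitions for multi-label classifiers, while class_distinctiveness is one hypothetical reading of a similarity-graph score over per-class saliency maps; every function name and design choice in it is an assumption.

```python
# Minimal sketch, NOT the paper's implementation: the two baseline
# uncertainty measures named in the abstract, plus one plausible
# graph-based class distinctiveness score. All names are illustrative.
import numpy as np

def predictive_entropy(probs):
    """Mean per-label binary entropy for multi-label sigmoid outputs.

    probs: (n_samples, n_labels) array with values in (0, 1).
    Higher entropy flags predictions that may need manual review.
    """
    p = np.clip(probs, 1e-12, 1 - 1e-12)
    ent = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return ent.mean(axis=1)

def tta_uncertainty(model_fn, image, augment_fn, n_rounds=10):
    """Test-time augmentation: predict on randomly augmented copies of
    one image and use the spread of the outputs as the uncertainty.

    model_fn: image -> (n_labels,) probabilities; augment_fn: image -> image.
    """
    preds = np.stack([model_fn(augment_fn(image)) for _ in range(n_rounds)])
    return float(preds.std(axis=0).mean())

def class_distinctiveness(saliency_maps):
    """Hypothetical graph-based distinctiveness: nodes are predicted
    classes, edge weights are cosine similarities between their flattened
    saliency maps, and a class is distinctive when its map overlaps
    little with the others (low mean edge weight).

    saliency_maps: (n_classes, H, W) per-class attribution maps.
    Returns a (n_classes,) score in [0, 1]; low values suggest classes
    the model does not separate well.
    """
    flat = saliency_maps.reshape(len(saliency_maps), -1)
    unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T               # adjacency matrix of the class graph
    np.fill_diagonal(sim, 0.0)        # ignore self-edges
    mean_sim = sim.sum(axis=1) / max(len(flat) - 1, 1)
    return 1.0 - np.clip(mean_sim, 0.0, 1.0)
```

In such a quality control pipeline, samples with high entropy, a large TTA spread, or predicted classes of low distinctiveness would be the ones routed to manual review.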
Pages: 215 - 224
Number of pages: 10
Related Papers
50 records in total
  • [31] Transparency of deep neural networks for medical image analysis: A review of interpretability methods
    Salahuddin, Zohaib
    Woodruff, Henry C.
    Chatterjee, Avishek
    Lambin, Philippe
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 140
  • [32] Improvement of Deep Learning Models by Excluding Inappropriate Data Based on Interpretability
    Yamaguchi, Saneyasu
    Hirabayashi, Fuma
    Tamekuri, Atsuki
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024: 291 - 296
  • [33] Unveiling Interpretability: Analyzing Transfer Learning in Deep Learning Models for Traffic Sign Recognition
    Waziry, S.
    Rasheed, J.
    Ghabban, F. M.
    Alsubai, S.
    Elkiran, H.
    Alqahtani, A.
    SN COMPUTER SCIENCE, 5 (6)
  • [34] Deep Learning for Medical Image Analysis
    Panda, Saswat
    Parida, Sahul Kumar
    Khatri, Ripusudan
    Kaur, Rupinder
    COMMUNICATION AND INTELLIGENT SYSTEMS, VOL 3, ICCIS 2023, 2024, 969 : 409 - 429
  • [35] Deep Learning in Medical Image Analysis
    Zhang, Yudong
    Gorriz, Juan Manuel
    Dong, Zhengchao
    JOURNAL OF IMAGING, 2021, 7 (04)
  • [36] Deep Learning in Medical Image Analysis
    Chan, Heang-Ping
    Samala, Ravi K.
    Hadjiiski, Lubomir M.
    Zhou, Chuan
    DEEP LEARNING IN MEDICAL IMAGE ANALYSIS: CHALLENGES AND APPLICATIONS, 2020, 1213 : 3 - 21
  • [37] Deep learning on medical image analysis
    Wang, Jiaji
    Wang, Shuihua
    Zhang, Yudong
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2025, 10 (01) : 1 - 35
  • [38] Deep Learning in Medical Image Analysis
    Shen, Dinggang
    Wu, Guorong
    Suk, Heung-Il
    ANNUAL REVIEW OF BIOMEDICAL ENGINEERING, VOL 19, 2017, 19 : 221 - 248
  • [39] Deep learning in medical image registration
    Chen, Xiang
    Diaz-Pinto, Andres
    Ravikumar, Nishant
    Frangi, Alejandro F.
    PROGRESS IN BIOMEDICAL ENGINEERING, 2021, 3 (01)
  • [40] On Deep Learning for Medical Image Analysis
    Carin, Lawrence
    Pencina, Michael J.
    JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, 2018, 320 (11): 1192 - 1193