Stay Focused - Enhancing Model Interpretability Through Guided Feature Training

Cited by: 1
Authors
Jenke, Alexander C. [1 ]
Bodenstedt, Sebastian [1 ]
Wagner, Martin [2 ]
Brandenburg, Johanna M. [2 ]
Stern, Antonia [3 ]
Muendermann, Lars [3 ]
Distler, Marius [4 ]
Weitz, Jurgen [4 ]
Mueller-Stich, Beat P. [2 ]
Speidel, Stefanie [1 ]
Affiliations
[1] Natl Ctr Tumor Dis NCT, Partner Site Dresden, Dept Translat Surg Oncol, Dresden, Germany
[2] Heidelberg Univ, Dept Gen, Visceral & Transplantat Surg, Heidelberg, Germany
[3] KARL STORZ SE Co KG, Tuttlingen, Germany
[4] Tech Univ Dresden, Univ Hosp Carl Gustav Carus, Dept Visceral, Thorac & Vasc Surg,Fac Med, Dresden, Germany
Keywords
Explainable artificial intelligence; Surgical data science; Instrument presence detection; Computer-assisted surgery;
DOI
10.1007/978-3-031-16437-8_12
CLC classification number
R445 [Diagnostic Imaging];
Discipline classification code
100207;
Abstract
In computer-assisted surgery, artificial intelligence (AI) methods need to be interpretable, as a clinician has to understand a model's decision. To improve the visual interpretability of convolutional neural networks, we propose to indirectly guide the model's feature development process with augmented training data in which unimportant regions of an image have been blurred. On a public dataset, we show that our proposed training workflow results in better visual interpretability of the model and improves overall model performance. To numerically evaluate heat maps produced by explainable AI methods, we propose a new metric that evaluates focus with regard to a mask of the region of interest. Furthermore, we show that the resulting model is more robust against changes in the background, as the features are focused onto the important areas of the scene, thereby improving model generalization.
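The abstract describes a metric that scores a heat map's focus against a region-of-interest mask, but does not state its exact formula. A minimal sketch of one plausible such measure, assuming the focus score is the fraction of non-negative attribution mass falling inside the ROI mask (the function name `focus_score` and this definition are illustrative assumptions, not the paper's published metric):

```python
import numpy as np

def focus_score(heatmap: np.ndarray, roi_mask: np.ndarray) -> float:
    """Illustrative focus measure: fraction of the heat map's total
    (non-negative) attribution mass that falls inside the ROI mask.
    A value of 1.0 means the explanation is fully focused on the ROI."""
    heat = np.clip(heatmap, 0.0, None)  # ignore negative attributions
    total = heat.sum()
    if total == 0.0:
        return 0.0  # empty heat map carries no focus information
    return float(heat[roi_mask.astype(bool)].sum() / total)

# Toy example: 5 units of attribution mass, 4 of them inside the mask.
heat = np.zeros((4, 4))
heat[1:3, 1:3] = 1.0   # mass on the object region
heat[0, 0] = 1.0       # stray mass on the background
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # region of interest
print(focus_score(heat, mask))  # → 0.8
```

Under this definition, blurring unimportant regions during training would be expected to raise the score, since attribution mass is pushed toward the unblurred (important) areas.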
Pages: 121-129
Page count: 9
Related Papers
50 items in total
  • [1] Enhancing crack pixel segmentation: comparative assessment of feature combinations and model interpretability
    Rakshitha, R.
    Srinath, S.
    Kumar, N. Vinay
    Rashmi, S.
    Poornima, B. V.
    INNOVATIVE INFRASTRUCTURE SOLUTIONS, 2024, 9 (09)
  • [2] Improving Deep Learning Interpretability by Saliency Guided Training
    Ismail, Aya Abdelsalam
    Feizi, Soheil
    Bravo, Hector Corrada
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] ENHANCING THE INTERPRETABILITY OF TERAHERTZ DATA THROUGH UNSUPERVISED CLASSIFICATION
    Stephani, Henrike
    Herrmann, Michael
    Wiesauer, Karin
    Katletz, Stefan
    Heise, Bettina
    XIX IMEKO WORLD CONGRESS: FUNDAMENTAL AND APPLIED METROLOGY, PROCEEDINGS, 2009, : 2329 - 2334
  • [4] Enhancing Channelized Feature Interpretability Using Deep Learning Predictive Modeling
    Sahad, Salbiah Mad
    Nian Wei Tan
    Sajid, Muhammad
    Jones, Ernest Austin, Jr.
    Latiff, Abdul Halim Abdul
    APPLIED SCIENCES-BASEL, 2022, 12 (18):
  • [5] Enhancing VMAF through New Feature Integration and Model Combination
    Zhang, Fan
    Katsenou, Angeliki
    Bampis, Christos
    Krasula, Lukas
    Li, Zhi
    Bull, David
    2021 PICTURE CODING SYMPOSIUM (PCS), 2021, : 66 - 70
  • [6] A fuzzy clustering algorithm enhancing local model interpretability
    Diez, J. L.
    Navarro, J. L.
    Sala, A.
    SOFT COMPUTING, 2007, 11 (10) : 973 - 983
  • [7] VEER: enhancing the interpretability of model-based optimizations
    Peng, Kewen
    Kaltenecker, Christian
    Siegmund, Norbert
    Apel, Sven
    Menzies, Tim
    EMPIRICAL SOFTWARE ENGINEERING, 2023, 28 (03)
  • [8] A fuzzy clustering algorithm enhancing local model interpretability
    J. L. Díez
    J. L. Navarro
    A. Sala
    Soft Computing, 2007, 11 : 973 - 983
  • [9] VEER: enhancing the interpretability of model-based optimizations
    Kewen Peng
    Christian Kaltenecker
    Norbert Siegmund
    Sven Apel
    Tim Menzies
    Empirical Software Engineering, 2023, 28
  • [10] Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency
    Kadir, Md Abdul
    Addluri, GowthamKrishna
    Sonntag, Daniel
    ADVANCES IN ARTIFICIAL INTELLIGENCE, KI 2023, 2023, 14236 : 90 - 97