Identifying Critical Tokens for Accurate Predictions in Transformer-Based Medical Imaging Models

Cited by: 0
Authors
Kang, Solha [1 ]
Vankerschaver, Joris [1 ,2 ]
Ozbulak, Utku [1 ,3 ]
Affiliations
[1] Ghent Univ Global Campus, Ctr Biosyst & Biotech Data Sci, Incheon, South Korea
[2] Univ Ghent, Dept Appl Math Comp Sci & Stat, Ghent, Belgium
[3] Univ Ghent, Dept Elect & Informat Syst, Ghent, Belgium
DOI
10.1007/978-3-031-73290-4_17
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract
With the advancements in self-supervised learning (SSL), transformer-based computer vision models have recently demonstrated superior results compared to convolutional neural networks (CNNs) and are poised to dominate the field of artificial intelligence (AI)-based medical imaging in the upcoming years. Nevertheless, similar to CNNs, unveiling the decision-making process of transformer-based models remains a challenge. In this work, we take a step towards demystifying the decision-making process of transformer-based medical imaging models and propose "Token Insight", a novel method that identifies the critical tokens that contribute to the prediction made by the model. Our method relies on the principled approach of token discarding native to transformer-based models, requires no additional module, and can be applied to any transformer model. Using the proposed approach, we quantify the importance of each token based on its contribution to the prediction, enabling a more nuanced understanding of the model's decisions. Our experimental results, showcased on the problem of colonic polyp identification using both supervised and self-supervised pretrained vision transformers, indicate that Token Insight contributes to a more transparent and interpretable transformer-based medical imaging model, fostering trust and facilitating broader adoption in clinical settings.
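The abstract's core idea, scoring each token by how much the model's prediction changes when that token is discarded, can be sketched as follows. This is a minimal illustration only: the `toy_predict` classifier and the three-dimensional token embeddings are hypothetical stand-ins for a vision transformer and its patch tokens, not the authors' actual model or scoring rule.

```python
import numpy as np

def toy_predict(tokens):
    """Stand-in classifier: mean-pool token embeddings, project, sigmoid."""
    if len(tokens) == 0:
        return 0.5  # no evidence, return the decision threshold
    w = np.array([1.0, -0.5, 0.25])  # fixed illustrative weights
    pooled = np.mean(tokens, axis=0)
    return 1.0 / (1.0 + np.exp(-pooled @ w))

def token_importance(tokens, predict):
    """Score each token by the drop in the predicted probability
    when that token is discarded from the input sequence."""
    baseline = predict(tokens)
    scores = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]  # discard token i
        scores.append(baseline - predict(reduced))
    return scores

tokens = [np.array([2.0, 0.0, 0.0]),   # token pushing toward the positive class
          np.array([0.0, 0.0, 0.0]),   # neutral token
          np.array([-2.0, 0.0, 0.0])]  # token pushing toward the negative class
scores = token_importance(tokens, toy_predict)
# A positive score means removing the token lowers the predicted
# probability, i.e. the token supported the prediction.
```

In practice the discarding would happen at the patch-token level inside a vision transformer (dropping tokens before the encoder blocks), but the bookkeeping, one forward pass per discarded token against a baseline, is the same as in this sketch.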
Pages: 169–179 (11 pages)
Related Papers (50 entries)
  • [21] On Robustness of Finetuned Transformer-based NLP Models
    Neerudu, Pavan Kalyan Reddy
    Oota, Subba Reddy
    Marreddy, Mounika
    Kagita, Venkateswara Rao
    Gupta, Manish
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 7180 - 7195
  • [22] Adaptation of Transformer-Based Models for Depression Detection
    Adebanji, Olaronke O.
    Ojo, Olumide E.
    Calvo, Hiram
    Gelbukh, Irina
    Sidorov, Grigori
    COMPUTACION Y SISTEMAS, 2024, 28 (01): : 151 - 165
  • [23] Transformer-Based Network for Accurate Classification of Lung Auscultation Sounds
    Sonali C.S.
    Kiran J.
    Suma K.V.
    Chinmayi B.S.
    Easa M.
    Critical Reviews in Biomedical Engineering, 2023, 51 (06) : 1 - 16
  • [24] PoulTrans: a transformer-based model for accurate poultry condition assessment
    Li, Jun
    Yang, Bing
    Chen, Junyang
    Liu, Jiaxin
    Amevor, Felix Kwame
    Chen, Guanyu
    Zhang, Buyuan
    Zhao, Xiaoling
    Scientific Reports, 15 (1)
  • [25] Recent progress in transformer-based medical image analysis
    Liu, Zhaoshan
    Lv, Qiujie
    Yang, Ziduo
    Li, Yifan
    Lee, Chau Hung
    Shen, Lei
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 164
  • [26] Transformer-Based Classification of User Queries for Medical Consultancy
    Lyutkin, D. A.
    Pozdnyakov, D. V.
    Soloviev, A. A.
    Zhukov, D. V.
    Malik, M. S. I.
    Ignatov, D. I.
    AUTOMATION AND REMOTE CONTROL, 2024, 85 (03) : 297 - 308
  • [27] A Transformer-based Medical Visual Question Answering Model
    Liu, Lei
    Su, Xiangdong
    Guo, Hui
    Zhu, Daobin
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 1712 - 1718
  • [28] A Transformer-Based Network for Deformable Medical Image Registration
    Wang, Yibo
    Qian, Wen
    Li, Mengqi
    Zhang, Xuming
    ARTIFICIAL INTELLIGENCE, CICAI 2022, PT I, 2022, 13604 : 502 - 513
  • [29] Accurate, interpretable predictions of materials properties within transformer language models
    Korolev, Vadim
    Protsenko, Pavel
    PATTERNS, 2023, 4 (10):
  • [30] Ouroboros: On Accelerating Training of Transformer-Based Language Models
    Yang, Qian
    Huo, Zhouyuan
    Wang, Wenlin
    Huang, Heng
    Carin, Lawrence
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32