Dynamic weighted knowledge distillation for brain tumor segmentation

Cited: 0
Authors
An, Dianlong [1,2]
Liu, Panpan [1,2]
Feng, Yan [1]
Ding, Pengju [1,2]
Zhou, Weifeng [3]
Yu, Bin [2,4]
Affiliations
[1] Qingdao Univ Sci & Technol, Coll Informat Sci & Technol, Qingdao 266061, Peoples R China
[2] Qingdao Univ Sci & Technol, Sch Data Sci, Qingdao 266061, Peoples R China
[3] Qingdao Univ Sci & Technol, Coll Math & Phys, Qingdao 266061, Peoples R China
[4] Univ Sci & Technol China, Sch Artificial Intelligence & Data Sci, Hefei 230026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation; MRI; Static knowledge distillation; Dynamic weighted knowledge distillation; Interpretability;
DOI
10.1016/j.patcog.2024.110731
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Automatic 3D MRI brain tumor segmentation holds a crucial position in medical image analysis, contributing significantly to the clinical diagnosis and treatment of brain tumors. However, traditional 3D brain tumor segmentation methods often entail extensive parameters and computational demands, posing substantial challenges for model training and deployment. To overcome these challenges, this paper introduces a brain tumor segmentation framework based on knowledge distillation, in which a lightweight student network is trained by extracting knowledge from a well-established brain tumor segmentation network. First, the framework replaces conventional static knowledge distillation (SKD) with the proposed dynamic weighted knowledge distillation (DWKD), which dynamically adjusts the distillation loss weight for each pixel according to the learning state of the student network. Second, to enhance the student network's generalization capability, this paper customizes a loss function for DWKD, termed regularized cross-entropy (RCE). RCE injects controlled noise into the model, strengthening its robustness and reducing the risk of overfitting. Finally, the proposed method is validated empirically with two distinct backbone networks, Attention U-Net and Residual U-Net, through rigorous experiments on the BraTS 2019, BraTS 2020, and BraTS 2021 datasets. Experimental results demonstrate that DWKD offers significant advantages over SKD in improving the segmentation performance of the student network, and that RCE further improves segmentation performance when training data are limited. Additionally, this paper quantitatively analyzes the number of concept detectors identified by network dissection to assess the impact of DWKD on model interpretability, finding that DWKD enhances interpretability more effectively than SKD. The source code is available at https://github.com/YuBinLab-QUST/DWKD/.
Pages: 16
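
The abstract only summarizes the two losses, so the sketch below is illustrative rather than a reproduction of the authors' method. It shows one plausible way to weight a per-pixel distillation loss by the student's learning state (approximated here by the student's confidence on the teacher's predicted class) and one plausible reading of regularized cross-entropy as cross-entropy over noise-perturbed logits. The function names, the temperature, the noise level, and the weighting scheme are all assumptions; the exact formulations are in the paper and the linked repository.

```python
import torch
import torch.nn.functional as F


def dwkd_loss(student_logits, teacher_logits, temperature=4.0):
    """Per-pixel distillation loss with dynamic, student-state-dependent weights.

    Logits are (B, C, D, H, W) for 3D MRI volumes. The weighting scheme
    below is a hypothetical stand-in for the paper's DWKD weights.
    """
    t = temperature
    p_t = F.softmax(teacher_logits / t, dim=1)
    log_p_s = F.log_softmax(student_logits / t, dim=1)
    # Per-pixel KL(teacher || student), summed over the class dimension.
    kl = (p_t * (p_t.clamp_min(1e-8).log() - log_p_s)).sum(dim=1)  # (B, D, H, W)
    with torch.no_grad():
        # "Learning state" proxy (assumption): the student's confidence on the
        # teacher's predicted class; poorly learned pixels get larger weights.
        teacher_cls = teacher_logits.argmax(dim=1, keepdim=True)        # (B, 1, D, H, W)
        conf = F.softmax(student_logits, dim=1).gather(1, teacher_cls)  # (B, 1, D, H, W)
        w = 1.0 - conf.squeeze(1)
        w = w / w.mean().clamp_min(1e-8)  # keep loss scale stable as weights evolve
    return (w * kl).mean() * (t * t)


def rce_loss(student_logits, target, noise_std=0.1):
    """Regularized cross-entropy, read here as cross-entropy computed on
    logits perturbed by controlled Gaussian noise (an assumption)."""
    noisy = student_logits + noise_std * torch.randn_like(student_logits)
    return F.cross_entropy(noisy, target)  # target: (B, D, H, W) integer labels
```

In a training loop the two terms would presumably be combined as a weighted sum, e.g. `loss = rce_loss(s_logits, labels) + lam * dwkd_loss(s_logits, t_logits)`, with `lam` a hypothetical distillation weight; normalizing the per-pixel weights by their mean keeps the distillation term on a comparable scale as the student's confidence map changes during training.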