Dynamic weighted knowledge distillation for brain tumor segmentation

Cited: 0
Authors
An, Dianlong [1 ,2 ]
Liu, Panpan [1 ,2 ]
Feng, Yan [1 ]
Ding, Pengju [1 ,2 ]
Zhou, Weifeng [3 ]
Yu, Bin [2 ,4 ]
Affiliations
[1] Qingdao Univ Sci & Technol, Coll Informat Sci & Technol, Qingdao 266061, Peoples R China
[2] Qingdao Univ Sci & Technol, Sch Data Sci, Qingdao 266061, Peoples R China
[3] Qingdao Univ Sci & Technol, Coll Math & Phys, Qingdao 266061, Peoples R China
[4] Univ Sci & Technol China, Sch Artificial Intelligence & Data Sci, Hefei 230026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation; MRI; Static knowledge distillation; Dynamic weighted knowledge distillation; Interpretability;
DOI
10.1016/j.patcog.2024.110731
CLC classification
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Automatic 3D MRI brain tumor segmentation plays a crucial role in medical image analysis, contributing significantly to the clinical diagnosis and treatment of brain tumors. However, traditional 3D brain tumor segmentation methods often involve large numbers of parameters and heavy computational demands, posing substantial challenges for model training and deployment. To overcome these challenges, this paper introduces a brain tumor segmentation framework based on knowledge distillation, which trains a lightweight student network by extracting knowledge from a well-established brain tumor segmentation network. Firstly, the framework replaces conventional static knowledge distillation (SKD) with the proposed dynamic weighted knowledge distillation (DWKD), which dynamically adjusts the distillation loss weight of each pixel according to the learning state of the student network. Secondly, to enhance the student network's generalization capability, a loss function tailored to DWKD, regularized cross-entropy (RCE), is introduced. RCE injects controlled noise into the model, strengthening its robustness and reducing the risk of overfitting. Lastly, the proposed methodology is validated empirically with two distinct backbone networks, Attention U-Net and Residual U-Net, through rigorous experiments on the BraTS 2019, BraTS 2020, and BraTS 2021 datasets. Experimental results demonstrate that DWKD offers significant advantages over SKD in improving the segmentation performance of the student network, and that RCE further improves the student network's segmentation performance when training data are limited. Additionally, this paper quantitatively analyzes the number of concept detectors identified by network dissection to assess the impact of DWKD on model interpretability, finding that DWKD enhances interpretability more effectively than SKD. The source code is available at https://github.com/YuBinLab-QUST/DWKD/.
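
Since the abstract describes DWKD and RCE only in prose, the following minimal PyTorch sketch illustrates how such losses might look. It is an assumption-laden illustration, not the authors' released implementation (see the linked repository for that): using the per-voxel KL divergence as the student's "learning state" signal in dwkd_loss, the normalization of the dynamic weights, and modeling RCE's "controlled noise" as a random perturbation of the one-hot targets in rce_loss are all hypothetical choices made for this sketch.

import torch
import torch.nn.functional as F

def dwkd_loss(student_logits, teacher_logits, T=2.0):
    # Sketch of a dynamically weighted distillation loss for voxel-wise
    # segmentation. Inputs: (B, C, D, H, W) logits from student and teacher.
    s_log_prob = F.log_softmax(student_logits / T, dim=1)
    t_prob = F.softmax(teacher_logits / T, dim=1)
    # Per-voxel KL divergence between teacher and student distributions.
    kl_map = (t_prob * (t_prob.clamp_min(1e-8).log() - s_log_prob)).sum(dim=1)
    # Dynamic weights derived from the student's current fit (a proxy for
    # its "learning state"): voxels the student handles poorly get larger
    # weights. Detached so the weights themselves are not differentiated.
    weights = (kl_map / kl_map.mean().clamp_min(1e-8)).detach()
    return (weights * kl_map).mean() * (T * T)

def rce_loss(logits, target, num_classes, eps=0.1):
    # Sketch of a regularized cross-entropy with controlled noise.
    # logits: (B, C, D, H, W); target: (B, D, H, W) integer class labels.
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    # "Controlled noise": randomly perturb the target distribution, a
    # stochastic variant of label smoothing governed by eps.
    noise = torch.rand_like(one_hot)
    noisy_target = (1 - eps) * one_hot + eps * noise / noise.sum(dim=1, keepdim=True)
    log_prob = F.log_softmax(logits, dim=1)
    return -(noisy_target * log_prob).sum(dim=1).mean()

In training, the student would minimize a combination such as rce_loss(student_logits, labels, C) + lam * dwkd_loss(student_logits, teacher_logits); the balance factor lam and the temperature T are hyperparameters not specified in this record.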
Pages: 16
Related papers (50 in total)
  • [1] Knowledge Distillation for Brain Tumor Segmentation
    Lachinov, Dmitrii
    Shipunova, Elena
    Turlapov, Vadim
    BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES (BRAINLES 2019), PT II, 2020, 11993 : 324 - 332
  • [2] Efficient Knowledge Distillation for Brain Tumor Segmentation
    Qi, Yuan
    Zhang, Wenyin
    Wang, Xing
    You, Xinya
    Hu, Shunbo
    Chen, Ji
    APPLIED SCIENCES-BASEL, 2022, 12 (23):
  • [3] Brain Tumor Segmentation based on Knowledge Distillation and Adversarial Training
    Hou, Yaqing
    Li, Tianbo
    Zhang, Qiang
    Yu, Hua
    Ge, Hongwei
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [4] Attention-Fused CNN Model Compression with Knowledge Distillation for Brain Tumor Segmentation
    Xu, Pengcheng
    Kim, Kyungsang
    Liu, Huafeng
    Li, Quanzheng
    MEDICAL IMAGE UNDERSTANDING AND ANALYSIS, MIUA 2022, 2022, 13413 : 328 - 338
  • [5] Class Similarity Weighted Knowledge Distillation for Continual Semantic Segmentation
    Phan, Minh Hieu
    Ta, The-Anh
    Phung, Son Lam
    Tran-Thanh, Long
    Bouzerdoum, Abdesselam
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 16845 - 16854
  • [6] A single stage knowledge distillation network for brain tumor segmentation on limited MR image modalities
    Choi, Yoonseok
    Al-masni, Mohammed A.
    Jung, Kyu-Jin
    Yoo, Roh-Eul
    Lee, Seong-Yeong
    Kim, Dong-Hyun
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 240
  • [7] ABUS tumor segmentation via decouple contrastive knowledge distillation
    Pan, Pan
    Li, Yanfeng
    Chen, Houjin
    Sun, Jia
    Li, Xiaoling
    Cheng, Lin
    PHYSICS IN MEDICINE AND BIOLOGY, 2024, 69 (01):
  • [8] Weighted Knowledge Based Knowledge Distillation
    Kang, S.
    Seo, K.
    Transactions of the Korean Institute of Electrical Engineers, 2022, 71 (02): 431 - 435
  • [9] Holistic Weighted Distillation for Semantic Segmentation
    Sun, Wujie
    Chen, Defang
    Wang, Can
    Ye, Deshi
    Feng, Yan
    Chen, Chun
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 396 - 401
  • [10] Weighted self Distillation for Chinese word segmentation
    He, Rian
    Cai, Shubin
    Ming, Zhong
    Zhang, Jialei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 1757 - 1770