Dynamic weighted knowledge distillation for brain tumor segmentation

Cited by: 0
Authors
An, Dianlong [1 ,2 ]
Liu, Panpan [1 ,2 ]
Feng, Yan [1 ]
Ding, Pengju [1 ,2 ]
Zhou, Weifeng [3 ]
Yu, Bin [2 ,4 ]
Affiliations
[1] Qingdao Univ Sci & Technol, Coll Informat Sci & Technol, Qingdao 266061, Peoples R China
[2] Qingdao Univ Sci & Technol, Sch Data Sci, Qingdao 266061, Peoples R China
[3] Qingdao Univ Sci & Technol, Coll Math & Phys, Qingdao 266061, Peoples R China
[4] Univ Sci & Technol China, Sch Artificial Intelligence & Data Sci, Hefei 230026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation; MRI; Static knowledge distillation; Dynamic weighted knowledge distillation; Interpretability;
DOI
10.1016/j.patcog.2024.110731
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automatic 3D MRI brain tumor segmentation plays a crucial role in medical image analysis, contributing significantly to the clinical diagnosis and treatment of brain tumors. However, traditional 3D brain tumor segmentation methods often involve extensive parameters and heavy computational demands, posing substantial challenges for model training and deployment. To overcome these challenges, this paper introduces a brain tumor segmentation framework based on knowledge distillation, in which a lightweight student network is trained by extracting knowledge from a well-established brain tumor segmentation network. First, the framework replaces conventional static knowledge distillation (SKD) with the proposed dynamic weighted knowledge distillation (DWKD), which dynamically adjusts the distillation loss weight for each pixel according to the learning state of the student network. Second, to enhance the student network's generalization capability, this paper customizes a loss function for DWKD, termed regularized cross-entropy (RCE). RCE injects controlled noise into the model, strengthening its robustness and reducing the risk of overfitting. Finally, the proposed method is validated empirically using two distinct backbone networks, Attention U-Net and Residual U-Net, with rigorous experiments on the BraTS 2019, BraTS 2020, and BraTS 2021 datasets. Experimental results demonstrate that DWKD offers significant advantages over SKD in improving the segmentation performance of the student network, and that RCE can further improve segmentation performance when training data are limited. Additionally, this paper quantitatively analyzes the number of concept detectors identified through network dissection to assess the impact of DWKD on model interpretability, finding that DWKD enhances interpretability more effectively than SKD. The source code is available at https://github.com/YuBinLab-QUST/DWKD/.
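The abstract does not give the exact formulas, but the two ideas it describes — per-pixel distillation weights driven by the student's current fit, and a cross-entropy regularized by controlled noise in the targets — can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the specific weighting rule (student–teacher agreement) and the specific noise scheme (a Dirichlet perturbation of one-hot labels) are assumptions chosen for clarity.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dwkd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled per-pixel KL distillation loss, reweighted by
    how poorly each pixel is currently fit (one plausible stand-in for
    the 'learning state' in the abstract). Inputs: (H, W, C) logit maps."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(-1)  # (H, W)
    # Dynamic weight: pixels where student and teacher disagree most
    # get more weight; normalize so the weights average to 1.
    w = 1.0 - (p_s * p_t).sum(-1)
    w = w / (w.mean() + 1e-12)
    return (w * kl).mean() * T * T

def rce_loss(student_logits, labels, n_classes, noise=0.1, rng=None):
    """Cross-entropy against one-hot targets perturbed by a small random
    Dirichlet component -- one way to 'inject controlled noise' as a
    regularizer. The perturbed target still sums to 1 per pixel."""
    rng = np.random.default_rng(rng)
    one_hot = np.eye(n_classes)[labels]                   # (H, W, C)
    eps = rng.dirichlet(np.ones(n_classes), size=labels.shape)
    target = (1.0 - noise) * one_hot + noise * eps
    log_p = np.log(softmax(student_logits) + 1e-12)
    return -(target * log_p).sum(-1).mean()
```

As a sanity check, `dwkd_loss(x, x)` is zero (identical distributions carry no distillation signal), and setting `noise=0.0` in `rce_loss` recovers plain pixel-wise cross-entropy.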
Pages: 16
Related Papers
50 records
  • [31] A semisupervised knowledge distillation model for lung nodule segmentation
    Liu, Wenjuan
    Zhang, Limin
    Li, Xiangrui
    Liu, Haoran
    Feng, Min
    Li, Yanxia
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [32] Latent domain knowledge distillation for nighttime semantic segmentation
    Liu, Yunan
    Wang, Simiao
    Wang, Chunpeng
    Lu, Mingyu
    Sang, Yu
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 132
  • [33] Efficient Water Segmentation with Transformer and Knowledge Distillation for USVs
    Zhang, Jingting
    Gao, Jiantao
    Liang, Jinshuo
    Wu, Yiqiang
    Li, Bin
    Zhai, Yang
    Li, Xiaomao
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (05)
  • [34] TransKD: Transformer Knowledge Distillation for Efficient Semantic Segmentation
    Liu, Ruiping
    Yang, Kailun
    Roitberg, Alina
    Zhang, Jiaming
    Peng, Kunyu
    Liu, Huayao
    Wang, Yaonan
    Stiefelhagen, Rainer
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (12) : 20933 - 20949
  • [35] Clinical Knowledge-Based Hybrid Swin Transformer for Brain Tumor Segmentation
    Lei, Xiaoliang
    Yu, Xiaosheng
    Wu, Hao
    Wu, Chengdong
    Zhang, Jingsi
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76 (03): : 3797 - 3811
  • [36] Lightweight video object segmentation: Integrating online knowledge distillation for fast segmentation
    Hou, Zhiqiang
    Wang, Chenxu
    Ma, Sugang
    Dong, Jiale
    Wang, Yunchen
    Yu, Wangsheng
    Yang, Xiaobao
    KNOWLEDGE-BASED SYSTEMS, 2025, 308
  • [37] Dynamic Knowledge Distillation with Cross-Modality Knowledge Transfer
    Wang, Guangzhi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2974 - 2978
  • [38] Knowledge distillation in transformers with tripartite attention: Multiclass brain tumor detection in highly augmented MRIs
    Alzahrani, Salha M.
    Qahtani, Abdulrahman M.
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2024, 36 (01)
  • [39] Automatic brain and tumor segmentation
    Moon, N
    Bullitt, E
    van Leemput, K
    Gerig, G
    MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION-MICCAI 2002, PT 1, 2002, 2488 : 372 - 379
  • [40] DeepMedic for Brain Tumor Segmentation
    Kamnitsas, Konstantinos
    Ferrante, Enzo
    Parisot, Sarah
    Ledig, Christian
    Nori, Aditya V.
    Criminisi, Antonio
    Rueckert, Daniel
    Glocker, Ben
    BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES, 2016, 2016, 10154 : 138 - 149