A semisupervised knowledge distillation model for lung nodule segmentation

Times Cited: 0
Authors
Liu, Wenjuan [1 ]
Zhang, Limin [1 ]
Li, Xiangrui [1 ]
Liu, Haoran [2 ]
Feng, Min [1 ]
Li, Yanxia [1 ]
Affiliations
[1] Dalian Med Univ, Affiliated Hosp 1, Dept Resp & Crit Care Med, Dalian 116021, Peoples R China
[2] Dalian Med Univ, Clin Med, Dalian 116000, Peoples R China
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, No. 1
Keywords
Lung nodule segmentation; Semi-supervised learning; Knowledge distillation; CNN-transformer architecture; Medical image analysis; CANCER; SCANS;
DOI
10.1038/s41598-025-94132-9
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Early screening for lung nodules is mainly performed by manually reading the patient's chest CT scans. This approach is time-consuming, labor-intensive, and prone to missed detections and misdiagnosis. Current methods for lung nodule detection face limitations such as the high cost of obtaining large-scale, high-quality annotated datasets and poor robustness when dealing with data of varying quality. Further challenges include accurately detecting small and irregular nodules and ensuring model generalization across different data sources. Therefore, this paper proposes a lung nodule detection model based on semi-supervised learning and knowledge distillation (SSLKD-UNet). A feature encoder with a hybrid CNN-Transformer architecture is designed to fully extract features from lung nodule images, and a distillation training strategy is introduced in which a teacher model guides the student model to learn features that are more relevant to the nodule regions in CT images. Finally, following the semi-supervised learning paradigm, rough lung nodule annotations on the LUNA16 and LC183 datasets are combined with a small number of accurate annotations to complete the training process. Experiments show that, under the guidance of the semi-supervised learning and knowledge distillation training strategies, the proposed model can be trained with a small amount of inexpensive, easy-to-obtain coarse-grained annotations of pulmonary nodules, i.e., inaccurate or incomplete annotations such as nodule coordinates instead of pixel-level segmentation masks, and can thereby achieve early recognition of lung nodules. The segmentation results further corroborate the model's efficacy, with SSLKD-UNet demonstrating superior delineation of lung nodules, even in cases with complex anatomical structures and varying nodule sizes.
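To make the training strategy described in the abstract more concrete, the following is a minimal, illustrative PyTorch sketch of a combined objective: a supervised segmentation loss applied when fine masks or coarse pseudo-masks are available, plus a soft distillation term in which the student matches the teacher's softened probability map. The function names, the disc-shaped pseudo-mask built from nodule coordinates, the temperature, and the weight alpha are assumptions made for illustration only and are not taken from the paper.

```python
# Hypothetical sketch of a semi-supervised distillation loss for nodule
# segmentation; all names and hyperparameters here are illustrative.
import torch
import torch.nn.functional as F


def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps of shape (B, 1, H, W)."""
    inter = (pred * target).sum(dim=(2, 3))
    union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()


def coarse_mask_from_coords(coords, radius, shape):
    """Build a disc-shaped pseudo-mask from rough nodule centre coordinates,
    one possible way to use 'coordinates instead of pixel-level masks'."""
    b, _, h, w = shape
    ys = torch.arange(h).view(1, h, 1)
    xs = torch.arange(w).view(1, 1, w)
    cy = coords[:, 0].view(b, 1, 1)
    cx = coords[:, 1].view(b, 1, 1)
    mask = ((ys - cy) ** 2 + (xs - cx) ** 2) <= radius ** 2
    return mask.float().unsqueeze(1)          # (B, 1, H, W)


def kd_segmentation_loss(student_logits, teacher_logits, fine_mask=None,
                         coarse_mask=None, temperature=2.0, alpha=0.5):
    """Supervised term (fine or coarse labels) + distillation term in which
    the student mimics the teacher's temperature-softened output."""
    s_prob = torch.sigmoid(student_logits)
    loss = 0.0
    if fine_mask is not None:                  # precisely annotated sample
        loss = loss + F.binary_cross_entropy(s_prob, fine_mask) \
                    + dice_loss(s_prob, fine_mask)
    elif coarse_mask is not None:              # coarse / pseudo label only
        loss = loss + dice_loss(s_prob, coarse_mask)
    t_soft = torch.sigmoid(teacher_logits.detach() / temperature)
    s_soft = torch.sigmoid(student_logits / temperature)
    loss = loss + alpha * F.mse_loss(s_soft, t_soft)
    return loss
```

In such a setup, precisely annotated slices would drive the supervised term, coarsely annotated slices would contribute only through the pseudo-mask and the distillation term, and the teacher's predictions would regularize the student on both.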
Pages: 13