A semisupervised knowledge distillation model for lung nodule segmentation

Cited by: 0
Authors
Liu, Wenjuan [1 ]
Zhang, Limin [1 ]
Li, Xiangrui [1 ]
Liu, Haoran [2 ]
Feng, Min [1 ]
Li, Yanxia [1 ]
Affiliations
[1] Dalian Med Univ, Affiliated Hosp 1, Dept Resp & Crit Care Med, Dalian 116021, Peoples R China
[2] Dalian Med Univ, Clin Med, Dalian 116000, Peoples R China
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, Issue 1
Keywords
Lung nodule segmentation; Semi-supervised learning; Knowledge distillation; CNN-transformer architecture; Medical image analysis; CANCER; SCANS;
DOI
10.1038/s41598-025-94132-9
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Early screening of lung nodules is mainly performed by manually reading the patient's lung CT scans. This approach is time-consuming, labor-intensive, and prone to missed diagnoses and misdiagnoses. Current methods for lung nodule detection face limitations such as the high cost of obtaining large-scale, high-quality annotated datasets and poor robustness when dealing with data of varying quality. The challenges include accurately detecting small and irregular nodules, as well as ensuring model generalization across different data sources. Therefore, this paper proposes a lung nodule detection model based on semi-supervised learning and knowledge distillation (SSLKD-UNet). A feature encoder with a hybrid CNN-Transformer architecture is designed to fully extract the features of lung nodule images. At the same time, a distillation training strategy is designed in which a teacher model instructs a student model to learn features that are more relevant to the nodule regions in the CT images. Finally, with the help of the semi-supervised learning idea, coarse annotations of lung nodules from the LUNA16 and LC183 datasets are combined with accurate lung nodule annotations to complete the model training process. Further experiments show that, under the guidance of the semi-supervised learning and knowledge distillation training strategies, the proposed model can be trained with a small amount of inexpensive and easy-to-obtain coarse-grained annotations of pulmonary nodules, i.e., inaccurate or incomplete annotations such as nodule coordinates instead of pixel-level segmentation masks, and can realize early recognition of lung nodules.
The segmentation results further corroborate the model's efficacy, with SSLKD-UNet demonstrating superior delineation of lung nodules, even in cases with complex anatomical structures and varying nodule sizes.
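The abstract's training strategy combines a soft teacher-to-student distillation signal with supervised segmentation on the accurately annotated subset. The record does not specify the paper's exact loss functions, so the following is only an illustrative sketch under assumed choices: a per-pixel binary KL term for distillation and a soft Dice term for the labeled images; the function names and the weighting scheme (`T`, `alpha`) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dice_loss(pred, mask, eps=1e-6):
    # Soft Dice loss between predicted probabilities and a binary mask.
    inter = np.sum(pred * mask)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(mask) + eps)

def distillation_loss(student_logits, teacher_logits, mask=None, T=2.0, alpha=0.5):
    """Hypothetical combined loss: a soft-target distillation term in which the
    teacher's temperature-softened probability map guides the student, plus an
    optional supervised Dice term when a pixel-accurate mask is available."""
    s = sigmoid(student_logits / T)
    t = sigmoid(teacher_logits / T)
    eps = 1e-6
    # Per-pixel binary KL divergence from student to teacher soft maps.
    kd = np.mean(t * np.log((t + eps) / (s + eps)) +
                 (1 - t) * np.log((1 - t + eps) / (1 - s + eps)))
    if mask is None:  # image with only coarse annotation: distillation only
        return alpha * kd
    sup = dice_loss(sigmoid(student_logits), mask)
    return alpha * kd + (1 - alpha) * sup
```

In this sketch, images that carry only coarse annotations (e.g. nodule coordinates) contribute only the distillation term, while precisely masked images add the supervised Dice term, mirroring the semi-supervised split described in the abstract.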
Pages: 13
Related Papers
50 records in total
  • [31] Lung Nodule Segmentation through Unsupervised Clustering Models
    Sivakumar, S.
    Chandrasekar, C.
    INTERNATIONAL CONFERENCE ON MODELLING OPTIMIZATION AND COMPUTING, 2012, 38 : 3064 - 3073
  • [32] DUAL ENCODING FUSION FOR ATYPICAL LUNG NODULE SEGMENTATION
    Xu, Weixin
    Xing, Yun
    Lu, Yuting
    Lin, Jingkai
    Zhang, Xiaohong
    2022 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (IEEE ISBI 2022), 2022,
  • [33] ATROUS CONVOLUTION FOR BINARY SEMANTIC SEGMENTATION OF LUNG NODULE
    Hesamian, Mohammad Hesam
    Jia, Wenjing
    He, Xiangjian
    Kennedy, Paul J.
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1015 - 1019
  • [34] Lung Nodule Segmentation Using Pleural Wall Shape
    Li, Yunfei
    Xie, Xiang
    Li, Guolin
    Wang, Zhihua
    2018 IEEE BIOMEDICAL CIRCUITS AND SYSTEMS CONFERENCE (BIOCAS): ADVANCED SYSTEMS FOR ENHANCING HUMAN HEALTH, 2018, : 101 - 104
  • [35] Feature-based supervised lung nodule segmentation
    Campos, D.M.
    Simões, A.
    Ramos, I.
    Campilho, A.
    IFMBE Proceedings, 2014, 42 : 23 - 26
  • [36] A lightweight crack segmentation network based on knowledge distillation
    Wang, Wenjun
    Su, Chao
    Han, Guohui
    Zhang, Heng
    JOURNAL OF BUILDING ENGINEERING, 2023, 76
  • [37] Knowledge Distillation for Sequence Model
    Huang, Mingkun
    You, Yongbin
    Chen, Zhehuai
    Qian, Yanmin
    Yu, Kai
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 3703 - 3707
  • [38] Structural and Statistical Texture Knowledge Distillation for Semantic Segmentation
    Ji, Deyi
    Wang, Haoran
    Tao, Mingyuan
    Huang, Jianqiang
    Hua, Xian-Sheng
    Lu, Hongtao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 16855 - 16864
  • [39] Efficient Medical Image Segmentation Based on Knowledge Distillation
    Qin, Dian
    Bu, Jia-Jun
    Liu, Zhe
    Shen, Xin
    Zhou, Sheng
    Gu, Jing-Jun
    Wang, Zhi-Hua
    Wu, Lei
    Dai, Hui-Fen
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2021, 40 (12) : 3820 - 3831
  • [40] Towards Comparable Knowledge Distillation in Semantic Image Segmentation
    Niemann, Onno
    Vox, Christopher
    Werner, Thorben
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT IV, 2025, 2136 : 185 - 200