Combating Medical Label Noise via Robust Semi-supervised Contrastive Learning

Cited by: 1
Authors
Chen, Bingzhi [1]
Ye, Zhanhao [1]
Liu, Yishu [2]
Zhang, Zheng [2]
Pan, Jiahui [1]
Zeng, Biqing [1]
Lu, Guangming [2,3]
Affiliations
[1] South China Normal Univ, Sch Software, Guangzhou, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen, Peoples R China
[3] Guangdong Prov Key Lab Novel Secur Intelligence T, Shenzhen, Peoples R China
Keywords
Medical Label Noise; Mixup; Semi-supervised Learning; Contrastive Learning
DOI
10.1007/978-3-031-43907-0_54
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning-based AI diagnostic models rely heavily on high-quality, exhaustively annotated data for training but suffer from noisy label information. To enhance the model's robustness and prevent memorization of noisy labels, this paper proposes a robust Semi-supervised Contrastive Learning paradigm called SSCL, which efficiently merges semi-supervised learning and contrastive learning to combat medical label noise. Specifically, the proposed SSCL framework consists of three well-designed components: the Mixup Feature Embedding (MFE) module, the Semi-supervised Learning (SSL) module, and the Similarity Contrastive Learning (SCL) module. Taking hybrid augmented images as inputs, the MFE module with a momentum update mechanism is designed to mine abstract distributed feature representations. Meanwhile, a flexible pseudo-labeling promotion strategy is introduced into the SSL module, which refines the supervised information of the noisy data with pseudo-labels based on the initial categorical predictions. Benefiting from a measure of similarity between classification distributions, the SCL module can effectively capture more reliable confident pairs, further reducing the effect of label noise on contrastive learning. Furthermore, a noise-robust loss function is leveraged to ensure that samples with correct labels dominate the learning process. Extensive experiments on multiple benchmark datasets demonstrate the superiority of SSCL over state-of-the-art baselines. The code and pretrained models are publicly available at https://github.com/Binz-Chen/MICCAI2023_SSCL.
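The abstract describes three interacting mechanisms: mixup-augmented inputs, pseudo-label refinement of noisy supervision, and a contrastive term restricted to confident pairs whose classification distributions agree. The PyTorch sketch below illustrates these ideas under stated assumptions; it is not the authors' released implementation. The function names (mixup, refine_labels, confident_pair_contrastive), the confidence and similarity thresholds, and the temperature are illustrative assumptions, and the paper's momentum-updated encoder and specific noise-robust loss are omitted.

```python
# Minimal sketch of the mechanisms named in the abstract (not the authors' code).
# All function names, thresholds, and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def mixup(x, y_onehot, alpha=0.75):
    """Standard mixup: convex-combine a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)                      # keep the original sample dominant
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y_onehot + (1 - lam) * y_onehot[perm]

@torch.no_grad()
def refine_labels(logits, y_soft, conf_thresh=0.9):
    """Replace a possibly noisy label with the model's pseudo-label when the
    prediction is confident enough (a simple stand-in for pseudo-label promotion)."""
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    pseudo_onehot = F.one_hot(pseudo, num_classes=y_soft.size(1)).float()
    keep_pseudo = (conf > conf_thresh).float().unsqueeze(1)
    return keep_pseudo * pseudo_onehot + (1.0 - keep_pseudo) * y_soft

def confident_pair_contrastive(features, probs, sim_thresh=0.8, temperature=0.1):
    """Contrastive term that treats pairs whose classification distributions
    agree strongly as positives, limiting the influence of noisy-label pairs."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                  # feature-space similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)         # exclude self-pairs
    pos_mask = (probs @ probs.t() > sim_thresh).float()   # prediction agreement
    pos_mask.fill_diagonal_(0.0)
    log_prob = F.log_softmax(sim, dim=1)
    denom = pos_mask.sum(dim=1).clamp(min=1.0)
    return -(pos_mask * log_prob).sum(dim=1).div(denom).mean()

# Example training step under these assumptions (model() is hypothetical and is
# assumed to return class logits and an embedding for each image):
#   x_mix, y_mix = mixup(images, F.one_hot(labels, num_classes).float())
#   logits, feats = model(x_mix)
#   targets = refine_labels(logits, y_mix)
#   loss = F.cross_entropy(logits, targets) \
#        + confident_pair_contrastive(feats, F.softmax(logits, dim=1))
```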
Pages: 562-572
Page count: 11
Related Papers
50 records in total
  • [1] Semi-supervised Contrastive Learning for Label-Efficient Medical Image Segmentation. Hu, Xinrong; Zeng, Dewen; Xu, Xiaowei; Shi, Yiyu. Medical Image Computing and Computer Assisted Intervention - MICCAI 2021, Pt II, 2021, 12902: 481-490.
  • [2] A robust semi-supervised learning approach via mixture of label information. Yang, Yun; Liu, Xingchen. Pattern Recognition Letters, 2015, 68: 15-21.
  • [3] Pseudo-label Guided Contrastive Learning for Semi-supervised Medical Image Segmentation. Basak, Hritam; Yin, Zhaozheng. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 19786-19797.
  • [4] Label-Noise Robust Deep Generative Model for Semi-Supervised Learning. Yoon, Heegeon; Kim, Heeyoung. Technometrics, 2023, 65(1): 83-95.
  • [5] Combating Noise: Semi-supervised Learning by Region Uncertainty Quantification. Wang, Zhenyu; Li, Yali; Guo, Ye; Wang, Shengjin. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [6] HyperMatch: Noise-Tolerant Semi-Supervised Learning via Relaxed Contrastive Constraint. Zhou, Beitong; Lu, Jing; Liu, Kerui; Xu, Yunlu; Cheng, Zhanzhan; Niu, Yi. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 24017-24026.
  • [7] Robust Semi-Supervised Learning through Label Aggregation. Yan, Yan; Xu, Zhongwen; Tsang, Ivor W.; Long, Guodong; Yang, Yi. Thirtieth AAAI Conference on Artificial Intelligence, 2016: 2244-2250.
  • [8] Semi-Supervised SAR ATR Based on Contrastive Learning and Complementary Label Learning. Li, Chen; Du, Lan; Du, Yuang. IEEE Geoscience and Remote Sensing Letters, 2024, 21.
  • [9] Semi-supervised heterogeneous graph contrastive learning with label-guided. Li, Chao; Sun, Guoyi; Li, Xin; Shan, Juan. Applied Intelligence, 2024, 54(20): 10055-10071.
  • [10] Semi-supervised medical image segmentation via hard positives oriented contrastive learning. Tang, Cheng; Zeng, Xinyi; Zhou, Luping; Zhou, Qizheng; Wang, Peng; Wu, Xi; Ren, Hongping; Zhou, Jiliu; Wang, Yan. Pattern Recognition, 2024, 146.