Improving Medical Image Classification in Noisy Labels Using only Self-supervised Pretraining

Cited: 2
Authors
Khanal, Bidur [1 ]
Bhattarai, Binod [4 ]
Khanal, Bishesh [3 ]
Linte, Cristian A. [1 ,2 ]
Affiliations
[1] RIT, Ctr Imaging Sci, Rochester, NY 14623 USA
[2] RIT, Biomed Engn, Rochester, NY USA
[3] NepAl Appl Math & Informat Inst Res NAAMII, Patan, Nepal
[4] Univ Aberdeen, Aberdeen, Scotland
Funding
US National Science Foundation; US National Institutes of Health
Keywords
medical image classification; label noise; learning with noisy labels; self-supervised pretraining; warm-up obstacle; feature extraction;
DOI
10.1007/978-3-031-44992-5_8
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Noisy labels hurt deep learning-based supervised image classification performance, as models may overfit the noise and learn corrupted feature extractors. For natural image classification training with noisy labeled data, model initialization with contrastive self-supervised pretrained weights has been shown to reduce feature corruption and improve classification performance. However, no prior work has explored i) how other self-supervised approaches, such as pretext task-based pretraining, affect learning with noisy labels, or ii) whether any self-supervised pretraining method alone helps in noisy-label settings for medical images. Medical image datasets are often smaller and exhibit subtle inter-class variations, requiring human expertise to ensure correct classification. It is therefore unclear whether methods that improve learning with noisy labels on natural image datasets such as CIFAR would also help with medical images. In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels: NCT-CRC-HE-100K tissue histological images and COVID-QU-Ex chest X-ray images. Our results show that models initialized with pretrained weights obtained from self-supervised learning can effectively learn better features and improve robustness against noisy labels.
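For concreteness, below is a minimal PyTorch sketch of the training setup the abstract describes: symmetric label noise is injected into the training labels, the backbone is initialized from a self-supervised pretrained checkpoint, and the model is fine-tuned with plain cross-entropy. The ResNet-18 backbone, checkpoint path, noise rate, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Sketch (assumptions noted): inject synthetic ("self-induced") label noise,
# initialize a classifier from self-supervised pretrained weights, and
# fine-tune with plain cross-entropy. Backbone and checkpoint are illustrative.
import random

import torch
import torch.nn as nn
from torchvision.models import resnet18


def inject_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    """Flip a fraction `noise_rate` of labels to a uniformly chosen other class."""
    rng = random.Random(seed)
    noisy = list(labels)
    flip_idx = rng.sample(range(len(noisy)), k=int(noise_rate * len(noisy)))
    for i in flip_idx:
        noisy[i] = rng.choice([c for c in range(num_classes) if c != noisy[i]])
    return noisy


def build_model(num_classes, ssl_ckpt=None):
    """ResNet-18 whose backbone is optionally initialized from a
    self-supervised (e.g., contrastive or pretext task-based) checkpoint;
    the classification head is always trained from scratch."""
    model = resnet18(weights=None)
    if ssl_ckpt is not None:
        state = torch.load(ssl_ckpt, map_location="cpu")
        model.load_state_dict(state, strict=False)  # backbone weights only
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


def finetune(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """Standard supervised fine-tuning on the noisy-labeled loader; no
    noise-robust loss is used, so any gain comes from the initialization."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            opt.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            opt.step()
    return model
```

Running the same loop twice, once with `ssl_ckpt=None` and once with a self-supervised checkpoint, isolates the effect of the pretrained initialization on robustness to the injected noise.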
Pages: 78-90
Page count: 13