Improving Medical Image Classification in Noisy Labels Using only Self-supervised Pretraining

Cited by: 2
Authors
Khanal, Bidur [1 ]
Bhattarai, Binod [4 ]
Khanal, Bishesh [3 ]
Linte, Cristian A. [1 ,2 ]
Affiliations
[1] RIT, Ctr Imaging Sci, Rochester, NY 14623 USA
[2] RIT, Biomed Engn, Rochester, NY USA
[3] NepAl Appl Math & Informat Inst Res NAAMII, Patan, Nepal
[4] Univ Aberdeen, Aberdeen, Scotland
Funding
US National Science Foundation; US National Institutes of Health;
Keywords
medical image classification; label noise; learning with noisy labels; self-supervised pretraining; warm-up obstacle; feature extraction;
DOI
10.1007/978-3-031-44992-5_8
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Noisy labels hurt the performance of deep learning-based supervised image classification because models may overfit the noise and learn corrupted feature extractors. For natural image classification with noisy labeled data, initializing the model with contrastive self-supervised pretrained weights has been shown to reduce feature corruption and improve classification performance. However, no prior work has explored: i) how other self-supervised approaches, such as pretext task-based pretraining, affect learning with noisy labels, and ii) whether any self-supervised pretraining method alone helps medical image classification in noisy-label settings. Medical images often involve smaller datasets and subtle inter-class variations that require human expertise for correct classification. It is therefore unclear whether methods that improve learning with noisy labels on natural image datasets such as CIFAR also help with medical images. In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels: NCT-CRC-HE-100K tissue histology images and COVID-QU-Ex chest X-ray images. Our results show that models initialized with pretrained weights obtained from self-supervised learning can effectively learn better features and improve robustness against noisy labels.
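The approach summarized above amounts to loading self-supervised pretrained encoder weights into a classifier before standard supervised fine-tuning on the noisy-labeled data. The following is a minimal PyTorch sketch of that initialization step, not the authors' code: the ResNet-18 backbone, the checkpoint filename ssl_pretrained_encoder.pth, the class count, and the optimizer settings are illustrative assumptions.

# Minimal sketch (illustrative, not the paper's implementation):
# initialize a classifier with self-supervised pretrained encoder weights,
# then fine-tune on (possibly noisy) labels.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 9  # e.g., tissue classes in NCT-CRC-HE-100K (assumption)

# Build a backbone and replace the classification head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Load encoder weights from self-supervised pretraining
# (contrastive, e.g., SimCLR, or a pretext task, e.g., rotation prediction).
# strict=False keeps the randomly initialized head while loading the encoder.
ssl_state = torch.load("ssl_pretrained_encoder.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(ssl_state, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")

# Standard supervised fine-tuning on noisy labels.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader, device="cuda"):
    model.to(device).train()
    for images, noisy_labels in loader:
        images, noisy_labels = images.to(device), noisy_labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), noisy_labels)
        loss.backward()
        optimizer.step()

The key design point the abstract argues for is that the pretrained initialization, rather than any noise-specific loss or sample-selection scheme, is what reduces feature corruption during the subsequent fine-tuning.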
Pages: 78-90
Page count: 13