Improving Medical Image Classification in Noisy Labels Using only Self-supervised Pretraining

Cited by: 2
Authors
Khanal, Bidur [1 ]
Bhattarai, Binod [4 ]
Khanal, Bishesh [3 ]
Linte, Cristian A. [1,2]
Affiliations
[1] RIT, Ctr Imaging Sci, Rochester, NY 14623 USA
[2] RIT, Biomed Engn, Rochester, NY USA
[3] NepAl Appl Math & Informat Inst Res NAAMII, Patan, Nepal
[4] Univ Aberdeen, Aberdeen, Scotland
Funding
US National Science Foundation; US National Institutes of Health
Keywords
medical image classification; label noise; learning with noisy labels; self-supervised pretraining; warm-up obstacle; feature extraction;
DOI
10.1007/978-3-031-44992-5_8
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Noisy labels hurt deep learning-based supervised image classification performance, as models may overfit the noise and learn corrupted feature extractors. For natural image classification with noisy labeled data, initializing the model with contrastive self-supervised pretrained weights has been shown to reduce feature corruption and improve classification performance. However, no prior work has explored: i) how other self-supervised approaches, such as pretext task-based pretraining, affect learning with noisy labels, and ii) whether any self-supervised pretraining method alone helps medical image classification in noisy label settings. Medical images often feature smaller datasets and subtle inter-class variations that require human expertise for correct classification. Thus, it is not clear whether methods that improve learning with noisy labels on natural image datasets such as CIFAR would also help with medical images. In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels: NCT-CRC-HE-100K tissue histological images and COVID-QU-Ex chest X-ray images. Our results show that models initialized with pretrained weights obtained from self-supervised learning learn better features and are more robust to noisy labels.
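The recipe described in the abstract is a two-stage pipeline: pretrain a backbone with self-supervised learning (contrastive or pretext task-based), then use those weights to initialize a standard supervised classifier that is fine-tuned on the noisy-labeled data. Below is a minimal PyTorch sketch of that initialization step, not the authors' released code; the checkpoint filename, the ResNet-18 backbone, and the 9-class NCT-CRC-HE-100K setting are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 9  # e.g., the nine NCT-CRC-HE-100K tissue classes (assumption)

    # 1) Build the backbone and load self-supervised (e.g., contrastive) pretrained
    #    weights. "ssl_backbone.pth" is a hypothetical checkpoint from a prior SSL run.
    backbone = models.resnet18(weights=None)
    ssl_state = torch.load("ssl_backbone.pth", map_location="cpu")
    backbone.load_state_dict(ssl_state, strict=False)  # strict=False skips projection-head keys

    # 2) Replace the classification head and fine-tune end-to-end on the noisy-labeled set.
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)
    optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, noisy_labels):
        # Standard supervised step; the robustness to label noise is expected to come
        # from the SSL-initialized features, not from any special loss function.
        optimizer.zero_grad()
        loss = criterion(backbone(images), noisy_labels)
        loss.backward()
        optimizer.step()
        return loss.item()

In this setup only the weight initialization changes relative to training from scratch, which is what isolates the contribution of the self-supervised pretraining.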
Pages: 78-90
Number of pages: 13