Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models

Citations: 0
Authors
Minhaj Nur Alam
Rikiya Yamashita
Vignav Ramesh
Tejas Prabhune
Jennifer I. Lim
R. V. P. Chan
Joelle Hallak
Theodore Leng
Daniel Rubin
Affiliations
[1] Stanford University School of Medicine, Department of Biomedical Data Science
[2] University of North Carolina at Charlotte, Department of Electrical and Computer Engineering
[3] University of Illinois at Chicago, Department of Ophthalmology and Visual Sciences
[4] Stanford University School of Medicine, Department of Ophthalmology
[5] Stanford University School of Medicine, Department of Radiology
DOI: not available
Abstract
Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Due to its prevalence, early clinical diagnosis is essential to improve the treatment and management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained on smaller datasets and still achieve high diagnostic accuracy on independent clinical datasets (i.e., high model generalizability). Toward this need, we have developed a self-supervised contrastive learning (CL)-based pipeline for classification of referable vs. non-referable DR. Self-supervised CL-based pretraining yields enhanced data representations and therefore enables the development of robust and generalizable deep learning (DL) models, even with small labeled datasets. We integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for the detection of DR in color fundus images. We compare the performance of our CL-pretrained model against two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10 percent) to test the robustness of the model when trained with small labeled datasets. The model is trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois at Chicago (UIC). Compared to the baseline models, our CL-pretrained FundusNet model achieved a higher area under the receiver operating characteristic (ROC) curve (AUC) (CI) (0.91 (0.898 to 0.930) vs. 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853) on UIC data). With 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs. 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset.
CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.
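The self-supervised contrastive pretraining described in the abstract typically optimizes an objective that pulls embeddings of two augmented views of the same image together while pushing apart views of different images. As a minimal NumPy sketch of such an objective (the NT-Xent loss used in SimCLR-style pipelines), assuming the paper follows this family of methods — the actual FundusNet architecture, NST augmentation, and hyperparameters such as the temperature are not reproduced here:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N images. Row i of z1 and row i of z2 form a positive pair;
    all other rows act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = (z @ z.T) / temperature                       # cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    # The positive partner of row i is row (i + N) mod 2N.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

In a full pipeline, the two views would come from the augmentation stack (including the NST augmentation the paper adds), the loss would drive encoder pretraining on unlabeled fundus images, and the pretrained encoder would then be fine-tuned on the small labeled set for referable vs. non-referable DR classification.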
Related papers (50 entries)
  • [1] Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models
    Alam, Minhaj Nur
    Yamashita, Rikiya
    Ramesh, Vignav
    Prabhune, Tejas
    Lim, Jennifer I.
    Chan, R. V. P.
    Hallak, Joelle
    Leng, Theodore
    Rubin, Daniel
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [2] Contrastive learning improves representation and transferability of diabetic retinopathy classification models
    Alam, Minhaj Nur
    Leng, Theodore
    Hallak, Joelle
    Rubin, Daniel
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2022, 63 (07)
  • [3] Deep Learning-Based Classification of Diabetic Retinopathy
    Huang, Zhenjia
    PROCEEDINGS OF 2023 4TH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE FOR MEDICINE SCIENCE, ISAIMS 2023, 2023, : 371 - 375
  • [4] Multistructure Contrastive Learning for Pretraining Event Representation
    Zheng, Jianming
    Cai, Fei
    Liu, Jun
    Ling, Yanxiang
    Chen, Honghui
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (01) : 842 - 854
  • [5] ContrastCAD: Contrastive Learning-Based Representation Learning for Computer-Aided Design Models
    Jung, Minseop
    Kim, Minseong
    Kim, Jibum
    IEEE ACCESS, 2024, 12 : 84830 - 84842
  • [6] Novel Framework for Enhanced Learning-based Classification of Lesion in Diabetic Retinopathy
    Prakruthi, M. K.
    Komarasamy, G.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (06) : 37 - 45
  • [7] MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models
    Sowrirajan, Hari
    Yang, Jingbo
    Ng, Andrew Y.
    Rajpurkar, Pranav
    MEDICAL IMAGING WITH DEEP LEARNING, VOL 143, 2021, 143 : 728 - 744
  • [8] Deep Learning-Based Diabetic Retinopathy Severity Classification and Progression Time Estimation
    Shivappriya, S. N.
    Alagumeenaakshi, M.
    Sasikala, S.
    IFAC PAPERSONLINE, 2024, 58 (03): : 78 - 83
  • [9] Robust Classification Model for Diabetic Retinopathy Based on the Contrastive Learning Method with a Convolutional Neural Network
    Feng, Xinxing
    Zhang, Shuai
    Xu, Long
    Huang, Xin
    Chen, Yanyan
    APPLIED SCIENCES-BASEL, 2022, 12 (23):
  • [10] Feature attention improves the performance of a transfer learning-based model in detecting diabetic retinopathy
    Abdolahi, Farzan
    Leahy, Sophie
    Rahimi, Mansour
    Dasmohapatra, Soumyaprakash
    Rostami, Mohammad
    Shahidi, Mahnaz
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)