Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models

Cited by: 0
Authors
Minhaj Nur Alam
Rikiya Yamashita
Vignav Ramesh
Tejas Prabhune
Jennifer I. Lim
R. V. P. Chan
Joelle Hallak
Theodore Leng
Daniel Rubin
Affiliations
[1] Stanford University School of Medicine, Department of Biomedical Data Science
[2] University of North Carolina at Charlotte, Department of Electrical and Computer Engineering
[3] University of Illinois at Chicago, Department of Ophthalmology and Visual Sciences
[4] Stanford University School of Medicine, Department of Ophthalmology
[5] Stanford University School of Medicine, Department of Radiology
Abstract
Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Because of its prevalence, early clinical diagnosis is essential to improve treatment and management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained on smaller labeled datasets and still achieve high diagnostic accuracy on independent clinical datasets (i.e., high model generalizability). Toward this need, we developed a self-supervised contrastive learning (CL) based pipeline for classification of referable vs. non-referable DR. Self-supervised CL pretraining yields richer data representations and therefore enables robust, generalizable deep learning (DL) models, even with small labeled datasets. We integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for detecting DR in color fundus images. We compare the performance of our CL-pretrained model with two state-of-the-art baseline models pretrained with ImageNet weights. We further evaluate model performance with reduced labeled training data (down to 10 percent) to test robustness when training with small labeled datasets. The model is trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois at Chicago (UIC). Compared with the baseline models, our CL-pretrained FundusNet model achieved a higher area under the receiver operating characteristic (ROC) curve (AUC) on the UIC data: 0.91 (CI 0.898 to 0.930) vs. 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853). With 10 percent of the labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs. 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset. CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.
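To make the pretrain-then-fine-tune idea concrete, below is a minimal sketch of SimCLR-style contrastive pretraining in PyTorch. It assumes an NT-Xent objective and a ResNet-50 encoder with a projection head, and it uses ColorJitter as a crude stand-in for the neural style transfer augmentation described in the abstract; the names FundusEncoder, nt_xent_loss, and pretrain_step are illustrative, not taken from the paper.

```python
# Minimal sketch of SimCLR-style contrastive pretraining for fundus images.
# Assumptions: NT-Xent loss, ResNet-50 backbone, ColorJitter standing in for
# the paper's neural style transfer (NST) augmentation. Names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i]);
    every other embedding in the batch serves as a negative."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2N, d)
    sim = z @ z.t() / temperature                                  # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                          # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

class FundusEncoder(nn.Module):
    """ResNet-50 backbone plus a small projection head, as in SimCLR."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                                # keep 2048-d features
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True), nn.Linear(512, proj_dim)
        )

    def forward(self, x):
        return self.projector(self.backbone(x))

# Two stochastic views per fundus image; ColorJitter approximates the style
# perturbation role of the NST augmentation in this sketch.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

def pretrain_step(model, optimizer, pil_images, device="cpu"):
    """One contrastive pretraining step on a list of PIL fundus images."""
    v1 = torch.stack([augment(img) for img in pil_images]).to(device)
    v2 = torch.stack([augment(img) for img in pil_images]).to(device)
    loss = nt_xent_loss(model(v1), model(v2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pretraining along these lines, the projection head would typically be discarded and the backbone fine-tuned with a binary classification head on the labeled referable vs. non-referable DR data, which is where the small-label-budget comparison in the abstract applies.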