Comparing Deep Feature Extraction Strategies for Diabetic Retinopathy Stage Classification from Fundus Images

Cited by: 3
Authors
Mukherjee, Nilarun [1 ]
Sengupta, Souvik [2 ]
Affiliations
[1] Bengal Inst Technol, Kolkata, India
[2] Aliah Univ, Kolkata, India
Keywords
Diabetic retinopathy; DR severity grading; DR screening; Referable DR; Machine learning; Deep learning; CNN; Convolutional neural networks; Prevalence; Diagnosis
DOI
10.1007/s13369-022-07547-1
CLC Classification
O (Mathematical Sciences and Chemistry); P (Astronomy and Earth Sciences); Q (Biological Sciences); N (General Natural Sciences)
Subject Classification Codes
07; 0710; 09
Abstract
Diabetic retinopathy (DR) is damage to the retinal microvascular system caused by prolonged diabetes mellitus. Diagnosis and treatment of DR entail screening of retinal fundus images of diabetic patients. Manual inspection of pathological changes in retinal images is a skill-intensive task that demands considerable effort and time, so computer-aided detection and diagnosis of DR have been explored extensively over the past few decades. In recent years, with the development of benchmark deep convolutional neural networks (CNNs), deep learning and machine learning have been adapted efficiently and effectively to various DR classification tasks. The success of CNNs largely depends on how well they extract discriminative features from fundus images. However, to the best of our knowledge, no study to date has evaluated the feature extraction capabilities of all the benchmark CNNs for DR classification, or sought the best training hyper-parameters for each of them on fundus-image-based DR classification tasks. In this work, we try to identify the best benchmark CNN to serve as the backbone feature extractor for DR classification using fundus retinal images. We also aim to find the optimal hyper-parameters for training each benchmark CNN family, particularly when applied to DR grading on retinal image datasets with severe class imbalance and limited samples of the higher-severity classes. To this end, we conduct a detailed, comprehensive comparative study of the performance of almost all benchmark CNNs and their variants proposed between 2014 and 2019 on DR grading tasks over common standard retinal datasets, together with a comprehensive search for the optimal training hyper-parameters of each benchmark CNN family for fundus-image-based DR classification.
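The abstract mentions curating a class-balanced dataset from the heavily imbalanced EyePACS training set (DR grades 0–4, with grade 0 dominating). The paper does not publish its curation code; the following is a minimal sketch of one common approach, undersampling every class to the size of the rarest class. The grade frequencies in the toy example are illustrative, not the real EyePACS counts.

```python
import random
from collections import Counter, defaultdict

def balance_by_undersampling(labels, seed=0):
    """Return indices of a class-balanced subset of `labels`.

    Every class is undersampled to the size of the rarest class,
    one simple way to curate a balanced training set from an
    imbalanced one such as EyePACS (DR grades 0-4).
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    n_min = min(len(idxs) for idxs in by_class.values())
    subset = []
    for idxs in by_class.values():
        subset.extend(rng.sample(idxs, n_min))
    return sorted(subset)

# Toy imbalance: grade 0 (no DR) dominates, grade 4 (proliferative) is rare.
labels = [0] * 70 + [1] * 10 + [2] * 12 + [3] * 5 + [4] * 3
subset = balance_by_undersampling(labels)
print(Counter(labels[i] for i in subset))  # every grade appears 3 times
```

Undersampling discards data from the majority classes; oversampling or augmentation of the minority classes is a common alternative when higher-severity samples are too scarce to waste.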
The benchmark CNNs are transfer-learned and end-to-end trained in an incremental fashion on a class-balanced dataset curated from the training set of the EyePACS dataset. The models are then evaluated on the APTOS, MESSIDOR-1, and MESSIDOR-2 datasets to test their cross-dataset generalization. Experimental results show that features extracted by EfficientNet-B1 outperform those of all other CNN models in DR classification on all three test datasets. MobileNet-V3-Large also shows promising performance on the MESSIDOR-1 dataset. The success of EfficientNet-B1 and MobileNet-V3-Large indicates that comparatively shallow, lightweight CNNs tend to extract more discriminative and expressive features from fundus images for DR stage detection. In future work, researchers can explore different pre- and post-processing techniques and incorporate novel architectural components into these networks to further improve classification accuracy and robustness.
Pages: 10335-10354 (20 pages)