Multi-class multi-label ophthalmological disease detection using transfer learning based convolutional neural network

Cited by: 65
Authors
Gour, Neha [1 ]
Khanna, Pritee [1 ]
Affiliations
[1] PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, India
Keywords
Ophthalmological Disease Detection; Multi-class Classification; Multi-label Classification; Fundus Imaging; Convolutional Neural Networks; RETINAL IMAGES; RISK;
DOI
10.1016/j.bspc.2020.102329
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Fundus imaging is a retinal imaging modality for capturing anatomical structures and abnormalities in the human eye. Fundus images are the primary tool for the observation and detection of a wide range of ophthalmological diseases. Changes in and around anatomical structures like the blood vessels, optic disc, fovea, and macula indicate the presence of diseases such as diabetic retinopathy, glaucoma, age-related macular degeneration (AMD), myopia, hypertension, and cataract. A patient may suffer from more than one ophthalmological disease, observed in either or both eyes. Two models are proposed for multi-class multi-label classification of ophthalmological diseases in fundus images using transfer learning based convolutional neural network (CNN) approaches. The Ocular Disease Intelligent Recognition (ODIR) database, containing left- and right-eye fundus images of patients across eight categories, is used for experimentation. Four different pre-trained CNN architectures with two different optimizers are evaluated, and the VGG16 pre-trained architecture with the SGD optimizer is observed to perform best for multi-class multi-label fundus image classification on the ODIR database.
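The key modeling distinction in the multi-label setting described above is that each disease category gets an independent sigmoid output (rather than one softmax across categories), so any subset of labels can be active for a single patient. A minimal sketch of that prediction step, assuming the eight ODIR category names; the logits and the 0.5 threshold are illustrative, not taken from the paper:

```python
import numpy as np

def multilabel_predict(logits, threshold=0.5):
    """Multi-label decision: an independent sigmoid per class,
    thresholded separately, so any subset of labels may be active."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return (probs >= threshold).astype(int)

# The eight ODIR categories; logits are hypothetical network outputs
labels = ["Normal", "Diabetes", "Glaucoma", "Cataract",
          "AMD", "Hypertension", "Myopia", "Other"]
logits = np.array([-2.0, 1.5, 0.2, -3.0, 2.2, -1.0, -0.5, -2.5])
active = multilabel_predict(logits)
print([l for l, a in zip(labels, active) if a])
# → ['Diabetes', 'Glaucoma', 'AMD']
```

A softmax head would force the eight probabilities to sum to one and pick a single winner; the per-class sigmoid is what lets the model report co-occurring diseases.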
Pages: 8
Related Papers
50 records
  • [1] Multi-class multi-label ophthalmological disease detection using transfer learning based convolutional neural network
    Gour, Neha
    Khanna, Pritee
    Biomedical Signal Processing and Control, 2021, 66
  • [2] Latent Semantic Indexing and Convolutional Neural Network for Multi-Label and Multi-Class Text Classification
    Quispe, Oscar
    Ocsa, Alexander
    Coronado, Ricardo
    2017 IEEE Latin American Conference on Computational Intelligence (LA-CCI), 2017
  • [3] EEG based multi-class seizure type classification using convolutional neural network and transfer learning
    Raghu, S.
    Sriraam, Natarajan
    Temel, Yasin
    Rao, Shyam Vasudeva
    Kubben, Pieter L.
    Neural Networks, 2020, 124: 202-212
  • [4] Image classification of root-trimmed garlic using multi-label and multi-class classification with deep convolutional neural network
    Anh, Pham Thi Quynh
    Thuyet, Dang Quoc
    Kobayashi, Yuichi
    Postharvest Biology and Technology, 2022, 190
  • [5] Multi-Class Plant Leaf Disease Detection Using a Deep Convolutional Neural Network
    Jadhav, Shriya
    Lal, Anisha M.
    International Journal of Information System Modeling and Design, 2022, 13 (01)
  • [6] Multi-Class and Multi-Label Classification Using Associative Pulsing Neural Networks
    Horzyk, Adrian
    Starzyk, Janusz A.
    2018 International Joint Conference on Neural Networks (IJCNN), 2018: 427-434
  • [7] Multi-Label Classification using Deep Convolutional Neural Network
    Lydia, A. Agnes
    Francis, E. Sagayaraj
    2020 International Conference on Innovative Trends in Information Technology (ICITIIT), 2020
  • [8] Multi-Label Playlist Classification Using Convolutional Neural Network
    Wang, Guan-Hua
    Chung, Chia-Hao
    Chen, Yian
    Chen, Homer
    2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2018: 1957-1962
  • [9] Multi-label images classification based on convolutional neural network
    Chen, M.-S.
    Yu, L.-L.
    Su, Y.
    Sang, A.-J.
    Zhao, Y.
    Journal of Jilin University (Engineering and Technology Edition), 2020, 50 (03): 1077-1084
  • [10] Multi-Class Breast Cancer Classification using Deep Learning Convolutional Neural Network
    Nawaz, Majid
    Sewissy, Adel A.
    Soliman, Taysir Hassan A.
    International Journal of Advanced Computer Science and Applications, 2018, 9 (06): 316-322