Diabetic Retinopathy (DR) is a leading cause of vision loss, affecting millions of people worldwide. Although there are recognized screening procedures for detecting the condition, such as fluorescein angiography and optical coherence tomography, most patients are unaware of the disease and do not undergo these tests at the appropriate time. Early identification of the condition is critical to preventing the vision loss that occurs when Diabetes Mellitus (DM) is left untreated for an extended period. Several Machine Learning (ML) and Deep Learning (DL) algorithms have been applied to DR datasets for disease prediction and classification; however, most of them neglect data pre-processing and dimensionality reduction, a major gap that can result in biased findings. In the first stage of this research, data preprocessing was performed on the Color Fundus Photographs (CFPs). Subsequently, feature extraction was performed with Principal Component Analysis (PCA), and a Deep Learning Multi-Label Feature Extraction and Classification (ML-FEC) model based on a pre-trained Convolutional Neural Network (CNN) architecture was proposed. Transfer learning was then applied to train a subset of the images using three state-of-the-art CNN architectures, namely ResNet50, ResNet152, and SqueezeNet1, with parameter tuning to identify and classify the lesions. The experimental findings revealed an accuracy of 93.67% with a Hamming loss of 0.0603 for ResNet50, an accuracy of 91.94% with a Hamming loss of 0.0805 for SqueezeNet1, and an accuracy of 94.40% with a Hamming loss of 0.0560 for ResNet152, which demonstrates the suitability of the model for implementation in daily clinical practice and for supporting large-scale DR screening programs. © 2023 The Authors
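For readers implementing a comparable pipeline, the following is a minimal sketch of the transfer-learning, multi-label stage summarized above, assuming a PyTorch/torchvision implementation. The number of lesion labels, the hyperparameters, the dummy batch, and the Hamming-loss helper are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: ResNet50 backbone fine-tuned for multi-label lesion
# classification, evaluated with Hamming loss. PyTorch/torchvision assumed;
# all settings below are illustrative.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 5  # assumed number of DR lesion labels

# Load an ImageNet-pre-trained ResNet50 and replace its classification
# head with a multi-label output layer (one logit per lesion type).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_LABELS)

# Multi-label training treats each label independently with a sigmoid,
# so BCEWithLogitsLoss replaces the usual softmax cross-entropy.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def hamming_loss(logits: torch.Tensor, targets: torch.Tensor) -> float:
    """Fraction of label positions predicted incorrectly (lower is better)."""
    preds = (torch.sigmoid(logits) > 0.5).float()
    return (preds != targets).float().mean().item()

# One illustrative training step on a dummy batch of pre-processed fundus images.
images = torch.randn(8, 3, 224, 224)                   # batch of CFPs
labels = torch.randint(0, 2, (8, NUM_LABELS)).float()  # multi-hot lesion labels

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"BCE loss: {loss.item():.4f}  Hamming loss: {hamming_loss(logits, labels):.4f}")
```

The same skeleton applies to ResNet152 and SqueezeNet1 by swapping the backbone constructor and the corresponding final layer.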