DenseNet-201 and Xception Pre-Trained Deep Learning Models for Fruit Recognition

Cited by: 32
Authors
Salim, Farsana [1]
Saeed, Faisal [1]
Basurra, Shadi [1]
Qasem, Sultan Noman [2]
Al-Hadhrami, Tawfik [3]
Affiliations
[1] Birmingham City Univ, Sch Comp & Digital Technol, Dept Comp & Data Sci, DAAI Res Grp, Birmingham B4 7XG, W Midlands, England
[2] Imam Mohammad Ibn Saud Islamic Univ IMSIU, Coll Comp & Informat Sci, Comp Sci Dept, Riyadh 11432, Saudi Arabia
[3] Nottingham Trent Univ, Sch Sci & Technol, Nottingham NG11 8NS, England
Keywords
DenseNet; fruit recognition; food security; MobileNetV3; pre-trained models; ResNet; Xception; vegetable classification
DOI
10.3390/electronics12143132
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
With the dramatic growth of the global population and rising food insecurity, meeting the need for foods such as vegetables and fruits has become a major concern for both individuals and governments. Moreover, the growing desire to consume healthy food, including fruit, has increased the need for agricultural applications that support better methods of fruit sorting and of fruit disease prediction and classification. Automated fruit recognition is a potential solution to reduce the time and labor required to identify different fruits in situations such as retail stores during checkout, fruit processing centers during sorting, and orchards during harvest. Automating these processes reduces the need for human intervention, making them cheaper, faster, and immune to human error and bias. Past research in the field has focused mainly on the size, shape, and color features of fruits or has employed convolutional neural networks (CNNs) for their classification. This study investigates the effectiveness of pre-trained deep learning models for fruit classification using two distinct datasets: Fruits-360 and the Fruit Recognition dataset. Four pre-trained models, DenseNet-201, Xception, MobileNetV3-Small, and ResNet-50, were chosen for the experiments based on their architecture and features. The results show that all models achieved almost 99% accuracy or higher on Fruits-360. On the Fruit Recognition dataset, DenseNet-201 and Xception achieved accuracies of around 98%. The strong performance of DenseNet-201 and Xception on both datasets is remarkable, with DenseNet-201 attaining accuracies of 99.87% and 98.94%, and Xception 99.13% and 97.73%, on Fruits-360 and the Fruit Recognition dataset, respectively.
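Since the abstract describes fine-tuning ImageNet-pre-trained backbones such as DenseNet-201 and Xception for fruit classification, a minimal transfer-learning sketch is given below. It assumes a standard Keras recipe (frozen backbone, global average pooling, dropout, a softmax head, 224x224 inputs, Adam optimizer, and a Fruits-360-style class count of 131); none of these specifics are stated in this record, and the paper's actual preprocessing, head design, and hyperparameters may differ.

# Minimal transfer-learning sketch (assumed setup, not the paper's exact configuration)
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201  # Xception can be swapped in analogously

NUM_CLASSES = 131          # assumed Fruits-360-style class count; adjust to the dataset used
INPUT_SHAPE = (224, 224, 3)  # assumed input size; the datasets' native resolutions differ

# Load the pre-trained backbone without its ImageNet classification layer.
backbone = DenseNet201(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
backbone.trainable = False  # freeze pre-trained weights for feature extraction

# Attach a small classification head for the fruit classes.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# model.fit(train_ds, validation_data=val_ds, epochs=10)  # training/validation datasets prepared elsewhere

The same pattern applies to the other backbones mentioned (Xception, MobileNetV3-Small, ResNet-50) by swapping the imported application model; unfreezing some or all backbone layers afterwards for fine-tuning is a common follow-up step.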
Pages: 23