Benchmark analysis of various pre-trained deep learning models on ASSIRA cats and dogs dataset

Cited by: 0
Authors
Galib Muhammad Shahriar Himel [1 ]
Md. Masudul Islam [2 ]
Affiliations
[1] School of Computer Sciences, Universiti Sains Malaysia, 11800 USM
[2] Department of Computer Science and Engineering, Jahangirnagar University
Keywords
Convolutional neural network; Machine learning; Artificial intelligence; Image classification; Data augmentation; Cats vs dogs
DOI
10.1007/s43995-024-00094-w
Abstract
Image classification using deep learning has gained significant attention, with many datasets available for benchmarking algorithms and pre-trained models. This study focuses on the Microsoft ASIRRA dataset, widely used as a quality benchmark, to compare different pre-trained models. By experimenting with optimizers, loss functions, and hyperparameters, the research aimed to enhance model performance, achieving significant accuracy improvements with minimal modifications to the training process. Experiments were conducted across three computer architectures and yielded higher accuracy than previous studies on this dataset, with the NASNet Large model achieving the highest accuracy, 99.65%. The findings demonstrate the effectiveness of hyperparameter tuning for well-known pre-trained models, suggesting optimal settings for improved classification accuracy, and underscore the potential of deep learning approaches to achieve superior performance on image classification tasks through hyperparameter tuning.
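The workflow the abstract describes, trying different optimizer and learning-rate settings for a pre-trained backbone and keeping the best-scoring configuration, can be sketched as a plain grid search. This is an illustrative sketch only, not the authors' code: the candidate lists and the `evaluate` stub are hypothetical, and in a real run `evaluate` would fine-tune a pre-trained CNN (e.g. NASNet Large) on the ASIRRA images and return validation accuracy.

```python
import itertools

# Hypothetical candidate settings; the study's actual search space may differ.
OPTIMIZERS = ["adam", "sgd", "rmsprop"]
LEARNING_RATES = [1e-3, 1e-4, 1e-5]

def evaluate(optimizer: str, lr: float) -> float:
    """Placeholder for 'fine-tune the pre-trained model and report accuracy'.

    A real implementation would train the backbone under this configuration
    and return held-out validation accuracy. Here a dummy deterministic score
    stands in so the sweep logic itself is runnable.
    """
    return 0.90 + 0.005 * len(optimizer) - 10.0 * abs(lr - 1e-4)

def grid_search():
    """Exhaustively score every (optimizer, learning rate) pair, keep the best."""
    best_cfg, best_acc = None, float("-inf")
    for opt, lr in itertools.product(OPTIMIZERS, LEARNING_RATES):
        acc = evaluate(opt, lr)
        if acc > best_acc:
            best_cfg, best_acc = (opt, lr), acc
    return best_cfg, best_acc
```

With the dummy scorer above, `grid_search()` returns the pair that maximizes the placeholder score; swapping in a real training loop turns the same harness into the kind of hyperparameter sweep the paper reports.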
Pages: 134-149 (15 pages)