Benchmark analysis of various pre-trained deep learning models on ASSIRA cats and dogs dataset

Cited by: 0
Authors
Galib Muhammad Shahriar Himel [1 ]
Md. Masudul Islam [2 ]
Affiliations
[1] School of Computer Sciences, Universiti Sains Malaysia, 11800 USM
[2] Department of Computer Science and Engineering, Jahangirnagar University
Keywords
Convolutional neural network; Machine learning; Artificial intelligence; Image classification; Data augmentation; Cats vs dogs
DOI
10.1007/s43995-024-00094-w
Abstract
Image classification using deep learning has attracted significant attention, and many datasets are available for benchmarking algorithms and pre-trained models. This study focuses on the Microsoft ASIRRA dataset, renowned for its quality and benchmark standards, and uses it to compare pre-trained models. By experimenting with optimizers, loss functions, and hyperparameters, the study aimed to enhance model performance and achieved notable accuracy improvements with minimal modification of the training process. Experiments conducted across three computer architectures yielded higher accuracy than previous studies on this dataset, with the NASNet Large model achieving the best result at 99.65%. The findings demonstrate the effectiveness of hyperparameter tuning for well-known pre-trained models and suggest optimal settings for improved classification accuracy, underscoring the potential of deep learning approaches to deliver superior performance on image classification tasks.
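The abstract describes fine-tuning ImageNet pre-trained backbones for binary cat/dog classification with tuned optimizers, loss functions, and hyperparameters. The sketch below outlines what such a setup looks like in Keras; the NASNetLarge backbone matches the best-performing model named in the abstract, but the optimizer, learning rate, dropout rate, epoch count, and the "asirra/train" directory layout are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of the transfer-learning setup the abstract describes:
# a pre-trained ImageNet backbone fine-tuned on binary cat/dog data.
# Hyperparameter values and paths below are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import NASNetLarge

IMG_SIZE = (331, 331)  # NASNetLarge's native ImageNet input resolution

# Load ImageNet weights, dropping the original 1000-class head.
backbone = NASNetLarge(include_top=False, weights="imagenet",
                       input_shape=IMG_SIZE + (3,), pooling="avg")
backbone.trainable = False  # freeze the backbone for the initial phase

model = models.Sequential([
    backbone,
    layers.Dropout(0.2),                    # assumed regularization setting
    layers.Dense(1, activation="sigmoid"),  # binary cats-vs-dogs output
])

# Optimizer, learning rate, and loss are placeholders for the kind of
# tuning the study performs, not its reported best configuration.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset path: ASIRRA images sorted into cat/ and dog/ folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asirra/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

model.fit(train_ds, epochs=5)  # epoch count is likewise an assumption
```

A common refinement after this first phase is to unfreeze some or all backbone layers and continue training at a lower learning rate, which is one way the hyperparameter tuning the abstract mentions can be carried out.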
Pages: 134-149
Page count: 15
Related papers
50 records in total
  • [31] PeaTMOSS: A Dataset and Initial Analysis of Pre-Trained Models in Open-Source Software
    Jiang, Wenxin
    Yasmin, Jerin
    Jones, Jason
    Synovic, Nicholas
    Kuo, Jiashen
    Bielanski, Nathaniel
    Tian, Yuan
    Thiruvathukal, George K.
    Davis, James C.
    2024 IEEE/ACM 21ST INTERNATIONAL CONFERENCE ON MINING SOFTWARE REPOSITORIES, MSR, 2024, : 431 - 443
  • [32] Cross-Dataset Continual Learning: Assessing Pre-Trained Models to Enhance Generalization in HAR
    Kann, Bonpagna
    Castellanos-Paez, Sandra
    Lalanda, Philippe
    Sam, Sethserey
    2024 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS AND OTHER AFFILIATED EVENTS, PERCOM WORKSHOPS, 2024
  • [33] AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models
    Madabushi, Harish Tayyar
    Gow-Smith, Edward
    Scarton, Carolina
    Villavicencio, Aline
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3464 - 3477
  • [34] An Open Medical Platform to Share Source Code and Various Pre-Trained Weights for Models to Use in Deep Learning Research
    Kim, Sungchul
    Cho, Sungman
    Cho, Kyungjin
    Seo, Jiyeon
    Nam, Yujin
    Park, Jooyoung
    Kim, Kyuri
    Kim, Daeun
    Hwang, Jeongeun
    Yun, Jihye
    Jang, Miso
    Lee, Hyunna
    Kim, Namkug
    KOREAN JOURNAL OF RADIOLOGY, 2021, 22 (12) : 2073 - 2081
  • [35] Towards Inadequately Pre-trained Models in Transfer Learning
    Deng, Andong
    Li, Xingjian
    Hu, Di
    Wang, Tianyang
    Xiong, Haoyi
    Xu, Cheng-Zhong
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 19340 - 19351
  • [36] Transfer learning with pre-trained conditional generative models
    Yamaguchi, Shin'ya
    Kanai, Sekitoshi
    Kumagai, Atsutoshi
    Chijiwa, Daiki
    Kashima, Hisashi
    MACHINE LEARNING, 2025, 114 (04)
  • [37] The severity level classification of Fusarium wilt of chickpea by pre-trained deep learning models
    Tolga Hayit
    Ali Endes
    Fatma Hayit
    Journal of Plant Pathology, 2024, 106 (1) : 93 - 105
  • [38] DenseNet-201 and Xception Pre-Trained Deep Learning Models for Fruit Recognition
    Salim, Farsana
    Saeed, Faisal
    Basurra, Shadi
    Qasem, Sultan Noman
    Al-Hadhrami, Tawfik
    ELECTRONICS, 2023, 12 (14)
  • [39] Active Learning for Sequence Tagging with Deep Pre-trained Models and Bayesian Uncertainty Estimates
    Shelmanov, Artem
    Puzyrev, Dmitri
    Kupriyanova, Lyubov
    Belyakov, Denis
    Larionov, Daniil
    Khromov, Nikita
    Kozlova, Olga
    Artemova, Ekaterina
    Dylov, Dmitry V.
    Panchenko, Alexander
    16TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EACL 2021), 2021, : 1698 - 1712
  • [40] Active Learning with Deep Pre-trained Models for Sequence Tagging of Clinical and Biomedical Texts
    Shelmanov, Artem
    Liventsev, Vadim
    Kireev, Danil
    Khromov, Nikita
    Panchenko, Alexander
    Fedulova, Irina
    Dylov, Dmitry V.
    2019 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2019, : 482 - 489