Comparison of fine-tuning strategies for transfer learning in medical image classification

Cited by: 10
Authors
Davila, Ana [1 ]
Colan, Jacinto [2 ]
Hasegawa, Yasuhisa [1 ]
Affiliations
[1] Nagoya Univ, Inst Innovat Future Soc, Furo Cho,Chikusa Ku, Nagoya, Aichi 4648601, Japan
[2] Nagoya Univ, Dept Micronano Mech Sci & Engn, Furo Cho,Chikusa Ku, Nagoya, Aichi 4648603, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
Medical image analysis; Fine-tuning; Transfer learning; Convolutional neural network; Image classification;
DOI
10.1016/j.imavis.2024.105012
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the context of medical imaging and machine learning, one of the most pressing challenges is the effective adaptation of pre-trained models to specialized medical contexts. Despite the availability of advanced pre-trained models, their direct application to the highly specialized and diverse field of medical imaging often falls short due to the unique characteristics of medical data. This study provides a comprehensive analysis of the performance of various fine-tuning methods applied to pre-trained models across a spectrum of medical imaging domains, including X-ray, MRI, histology, dermoscopy, and endoscopic surgery. We evaluated eight fine-tuning strategies, including standard techniques such as fine-tuning all layers or fine-tuning only the classifier layers, alongside methods such as gradually unfreezing layers, regularization-based fine-tuning, and adaptive learning rates. We selected three well-established CNN architectures (ResNet-50, DenseNet-121, and VGG-19) to cover a range of learning and feature extraction scenarios. Although our results indicate that the efficacy of these fine-tuning methods varies significantly depending on both the architecture and the medical imaging type, strategies such as combining Linear Probing with Full Fine-tuning resulted in notable improvements in over 50% of the evaluated cases, demonstrating general effectiveness across medical domains. Moreover, Auto-RGN, which dynamically adjusts learning rates, led to performance enhancements of up to 11% for specific modalities. Additionally, the DenseNet architecture showed more pronounced benefits from alternative fine-tuning approaches compared to traditional full fine-tuning. This work not only provides valuable insights for optimizing pre-trained models in medical image analysis but also suggests the potential for future research into more advanced architectures and fine-tuning methods.
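The combined Linear Probing + Full Fine-tuning strategy highlighted in the abstract trains in two phases: first only the classifier head is updated while the pre-trained backbone stays frozen, then every layer is unfrozen. The sketch below (not the authors' code; layer names and the phase boundary are illustrative assumptions) shows the scheduling logic in plain Python:

```python
# Illustrative sketch of a Linear Probing -> Full Fine-tuning schedule.
# Layer names and the `probe_epochs` boundary are assumptions for illustration,
# not values taken from the paper.

def trainable_layers(all_layers, epoch, probe_epochs=5):
    """Return the names of layers to update at a given epoch.

    Phase 1 (linear probing, epoch < probe_epochs): only the classifier
    head is trained, so the pre-trained features are preserved.
    Phase 2 (full fine-tuning): all layers are unfrozen and updated.
    """
    if epoch < probe_epochs:
        return [name for name in all_layers if name == "classifier"]
    return list(all_layers)

layers = ["conv1", "block1", "block2", "block3", "classifier"]
print(trainable_layers(layers, epoch=2))  # probing phase: head only
print(trainable_layers(layers, epoch=7))  # fine-tuning phase: all layers
```

In a framework such as PyTorch the same idea would typically be realized by toggling `requires_grad` on the backbone's parameters at the phase boundary; the warm-started head then gives full fine-tuning a better initialization than training everything from the first epoch.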
Pages: 17
Related papers
50 records
  • [21] Transfer Learning for Medicinal Plant Leaves Recognition: A Comparison with and without a Fine-Tuning Strategy
    Ayumi, Vina
    Ermatita, Ermatita
    Abdiansah, Abdiansah
    Noprisson, Handrie
    Jumaryadi, Yuwan
    Purba, Mariana
    Utami, Marissa
    Putra, Erwin Dwika
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (09) : 138 - 144
  • [22] Active Learning for Effectively Fine-Tuning Transfer Learning to Downstream Task
    Abul Bashar, Md
    Nayak, Richi
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2021, 12 (02)
  • [23] Transfer Learning Gaussian Anomaly Detection by Fine-tuning Representations
    Rippel, Oliver
    Chavan, Arnav
    Lei, Chucai
    Merhof, Dorit
    IMPROVE: PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON IMAGE PROCESSING AND VISION ENGINEERING, 2022, : 45 - 56
  • [24] Fine-tuning pre-trained neural networks for medical image classification in small clinical datasets
    Newton Spolaôr
    Huei Diana Lee
    Ana Isabel Mendes
    Conceição Veloso Nogueira
    Antonio Rafael Sabino Parmezan
    Weber Shoity Resende Takaki
    Claudio Saddy Rodrigues Coy
    Feng Chung Wu
    Rui Fonseca-Pinto
    Multimedia Tools and Applications, 2024, 83 (9) : 27305 - 27329
  • [25] Fine-tuning pre-trained neural networks for medical image classification in small clinical datasets
    Spolaor, Newton
    Lee, Huei Diana
    Mendes, Ana Isabel
    Nogueira, Conceicao Veloso
    Sabino Parmezan, Antonio Rafael
    Resende Takaki, Weber Shoity
    Rodrigues Coy, Claudio Saddy
    Wu, Feng Chung
    Fonseca-Pinto, Rui
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (09) : 27305 - 27329
  • [26] AdaFilter: Adaptive Filter Fine-Tuning for Deep Transfer Learning
    Guo, Yunhui
    Li, Yandong
    Wang, Liqiang
    Rosing, Tajana
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 4060 - 4066
  • [27] RAFNet: Interdomain Representation Alignment and Fine-Tuning for Image Series Classification
    Gong, Maoguo
    Qiao, Wenyuan
    Li, Hao
    Qin, A. K.
    Gao, Tianqi
    Luo, Tianshi
    Xing, Lining
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [28] Medical supervised masked autoencoder: Crafting a better masking strategy and efficient fine-tuning schedule for medical image classification
    Mao, Jiawei
    Guo, Shujian
    Yin, Xuesong
    Chang, Yuanqi
    Nie, Binling
    Wang, Yigang
    APPLIED SOFT COMPUTING, 2025, 169
  • [29] Fine-Tuning Our Treatment Strategies
    Oldham, John
    JOURNAL OF PSYCHIATRIC PRACTICE, 2011, 17 (03) : 157 - 157
  • [30] Retraction Note: Improved transfer learning of CNN through fine-tuning and classifier ensemble for scene classification
    S. Thirumaladevi
    K. Veera Swamy
    M. Sailaja
    Soft Computing, 2024, 28 (Suppl 2) : 981 - 981