AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks

Cited by: 0
Authors
Ro, Youngmin [1,2]
Choi, Jin Young [1]
Affiliations
[1] Seoul Natl Univ, Dept ECE, ASRI, Seoul, South Korea
[2] Samsung SDS, Seoul, South Korea
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing fine-tuning methods use a single learning rate for all layers. In this paper, we first show that the layer-wise weight variations produced by fine-tuning with a single learning rate do not match the well-known notion that lower layers extract general features while higher layers extract task-specific features. Based on this observation, we propose an algorithm that improves fine-tuning performance and reduces network complexity through layer-wise pruning and auto-tuning of layer-wise learning rates. The effectiveness of the proposed algorithm is verified by state-of-the-art performance on the image retrieval benchmark datasets (CUB-200, Cars-196, Stanford Online Products, and In-Shop). Code is available at https://github.com/youngminPIL/AutoLR.
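The abstract only sketches the idea of layer-wise learning rates. The snippet below is a minimal, hypothetical PyTorch illustration of assigning a distinct learning rate to each block of a pretrained backbone, with lower (general) blocks updated more gently than higher (task-specific) blocks. The ResNet-50 backbone, the learning-rate range, and the linear interpolation are assumptions made for illustration; this is not the authors' AutoLR procedure, which tunes the layer-wise rates automatically (see the repository linked above).

# Minimal sketch (assumptions: PyTorch + torchvision; NOT the authors' AutoLR
# implementation). It only illustrates per-layer learning rates: lower blocks
# get smaller rates (general features), higher blocks get larger rates
# (task-specific features).
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1")

# Treat each top-level child (conv1, bn1, layer1..layer4, fc, ...) as one "block".
blocks = [module for _, module in model.named_children()]

base_lr, max_lr = 1e-5, 1e-3  # assumed range; AutoLR tunes the rates automatically
param_groups = []
for i, block in enumerate(blocks):
    # Linearly interpolate the rate from base_lr (lowest block) to max_lr (highest block).
    lr = base_lr + (max_lr - base_lr) * i / max(len(blocks) - 1, 1)
    params = [p for p in block.parameters() if p.requires_grad]
    if params:  # skip parameter-free modules such as ReLU and pooling
        param_groups.append({"params": params, "lr": lr})

optimizer = torch.optim.SGD(param_groups, lr=base_lr, momentum=0.9)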
Pages: 2486 - 2494
Page count: 9