Trainable Projected Gradient Method for Robust Fine-tuning

Cited by: 3
Authors
Tian, Junjiao [1 ]
Dai, Xiaoliang [2 ]
Ma, Chih-Yao [2 ]
He, Zecheng [2 ]
Liu, Yen-Cheng [1 ]
Kira, Zsolt [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Meta, Menlo Pk, CA USA
DOI
10.1109/CVPR52729.2023.00757
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent studies on transfer learning have shown that selectively fine-tuning a subset of layers or customizing different learning rates for each layer can greatly improve robustness to out-of-distribution (OOD) data and retain generalization capability in the pre-trained models. However, most of these methods employ manually crafted heuristics or expensive hyper-parameter searches, which prevent them from scaling up to large datasets and neural networks. To solve this problem, we propose Trainable Projected Gradient Method (TPGM) to automatically learn the constraint imposed for each layer for a fine-grained fine-tuning regularization. This is motivated by formulating fine-tuning as a bi-level constrained optimization problem. Specifically, TPGM maintains a set of projection radii, i.e., distance constraints between the fine-tuned model and the pre-trained model, for each layer, and enforces them through weight projections. To learn the constraints, we propose a bi-level optimization to automatically learn the best set of projection radii in an end-to-end manner. Theoretically, we show that the bi-level optimization formulation is the key to learning different constraints for each layer. Empirically, with little hyper-parameter search cost, TPGM outperforms existing fine-tuning methods in OOD performance while matching the best in-distribution (ID) performance. For example, when fine-tuned on DomainNet-Real and ImageNet, compared to vanilla fine-tuning, TPGM shows 22% and 10% relative OOD improvement respectively on their sketch counterparts. Code is available at https://github.com/PotatoTian/TPGM.
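The core mechanism in the abstract is a per-layer weight projection: after each gradient step, a layer's fine-tuned weights are pulled back inside a distance constraint (a "projection radius") around the pre-trained weights, and the radii themselves are learned on a separate split via bi-level optimization. The sketch below is an illustrative assumption, not the authors' implementation (see the linked repository for that): it uses a closed-form L2-ball projection and the hypothetical name `project_layer`.

```python
import numpy as np

def project_layer(w_finetuned, w_pretrained, radius):
    """Project fine-tuned weights onto an L2 ball of the given radius
    centered at the pre-trained weights (illustrative sketch of a
    per-layer distance constraint, as described in the abstract).

    If the fine-tuned weights already lie within the ball they are
    returned unchanged; otherwise they are rescaled along the update
    direction so the distance to the pre-trained weights equals radius.
    """
    delta = w_finetuned - w_pretrained
    norm = np.linalg.norm(delta)
    if norm <= radius:
        return w_finetuned
    # Scale the deviation back onto the boundary of the constraint ball.
    return w_pretrained + delta * (radius / norm)
```

In TPGM the radii are not fixed hyper-parameters: the inner loop fine-tunes the weights on training data, while the outer loop updates each layer's radius by backpropagating a validation loss through this projection, which is what lets different layers end up with different constraint strengths.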
Pages: 7836-7845
Page count: 10
Related Papers
50 records
  • [1] Fast Trainable Projection for Robust Fine-Tuning
    Tian, Junjiao
    Liu, Yen-Cheng
    Smith, James Seale
    Kira, Zsolt
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Gradient Sparsification For Masked Fine-Tuning of Transformers
    O'Neill, James
    Dutta, Sourav
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [3] Context-Aware Robust Fine-Tuning
    Mao, Xiaofeng
    Chen, Yufeng
    Jia, Xiaojun
    Zhang, Rong
    Xue, Hui
    Li, Zhao
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (05) : 1685 - 1700
  • [5] Fine-tuning
[Anonymous]
    AVIATION WEEK & SPACE TECHNOLOGY, 2001, 155 (02): : 21 - 21
  • [6] Robust fine-tuning of zero-shot models
    Wortsman, Mitchell
    Ilharco, Gabriel
    Kim, Jong Wook
    Li, Mike
    Kornblith, Simon
    Roelofs, Rebecca
    Lopes, Raphael Gontijo
    Hajishirzi, Hannaneh
    Farhadi, Ali
    Namkoong, Hongseok
    Schmidt, Ludwig
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7949 - 7961
  • [7] Fine-Tuning
    Manson, Neil A.
    TPM-THE PHILOSOPHERS MAGAZINE, 2019, (86): : 99 - 105
  • [8] Fine-tuning
    Rachel Smallridge
    Nature Reviews Molecular Cell Biology, 2004, 5 (2) : 79 - 79
  • [9] Fine-tuning
[Anonymous]
    MECHANICAL ENGINEERING, 2007, 129 (03) : 23 - 23
  • [10] Masked Images Are Counterfactual Samples for Robust Fine-tuning
    Xiao, Yao
    Tang, Ziyi
    Wei, Pengxu
    Liu, Cong
    Lin, Liang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 20301 - 20310