Trainable Projected Gradient Method for Robust Fine-tuning

Cited by: 3
Authors
Tian, Junjiao [1 ]
Dai, Xiaoliang [2 ]
Ma, Chih-Yao [2 ]
He, Zecheng [2 ]
Liu, Yen-Cheng [1 ]
Kira, Zsolt [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Meta, Menlo Pk, CA USA
DOI: 10.1109/CVPR52729.2023.00757
CLC Classification: TP18 [Theory of Artificial Intelligence];
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Recent studies on transfer learning have shown that selectively fine-tuning a subset of layers or customizing different learning rates for each layer can greatly improve robustness to out-of-distribution (OOD) data and retain generalization capability in the pre-trained models. However, most of these methods employ manually crafted heuristics or expensive hyper-parameter searches, which prevent them from scaling up to large datasets and neural networks. To solve this problem, we propose Trainable Projected Gradient Method (TPGM) to automatically learn the constraint imposed for each layer for a fine-grained fine-tuning regularization. This is motivated by formulating fine-tuning as a bi-level constrained optimization problem. Specifically, TPGM maintains a set of projection radii, i.e., distance constraints between the fine-tuned model and the pre-trained model, for each layer, and enforces them through weight projections. To learn the constraints, we propose a bi-level optimization to automatically learn the best set of projection radii in an end-to-end manner. Theoretically, we show that the bi-level optimization formulation is the key to learning different constraints for each layer. Empirically, with little hyper-parameter search cost, TPGM outperforms existing fine-tuning methods in OOD performance while matching the best in-distribution (ID) performance. For example, when fine-tuned on DomainNet-Real and ImageNet, compared to vanilla fine-tuning, TPGM shows 22% and 10% relative OOD improvement respectively on their sketch counterparts. Code is available at https://github.com/PotatoTian/TPGM.
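The projection step described in the abstract — keeping each fine-tuned layer within a learned distance of its pre-trained counterpart — can be sketched as follows. This is a minimal illustration assuming an L2 distance constraint; the function name, the epsilon guard, and the rescaling formula are illustrative and not taken from the paper's released code, where the radii themselves are learned via bi-level optimization.

```python
import numpy as np

def project_weights(w_finetuned, w_pretrained, gamma):
    """Project fine-tuned weights back into an L2 ball of radius
    gamma centered at the pre-trained weights (one radius per layer)."""
    diff = w_finetuned - w_pretrained
    norm = np.linalg.norm(diff)
    # If the update exceeds the radius, rescale it onto the ball's surface;
    # otherwise leave the weights unchanged.
    scale = min(1.0, gamma / (norm + 1e-12))
    return w_pretrained + scale * diff
```

In TPGM, a projection like this is applied per layer after each gradient step, with the per-layer radii `gamma` updated end-to-end on a separate data split rather than set by hand.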
Pages: 7836-7845 (10 pages)