FLOAT: Fast Learnable Once-for-All Adversarial Training for Tunable Trade-off between Accuracy and Robustness

Cited by: 2
Authors
Kundu, Souvik [1 ,2 ]
Sundaresan, Sairam [1 ]
Pedram, Massoud [2 ]
Beerel, Peter A. [2 ]
Affiliations
[1] Intel Labs, Hillsboro, OR 97124 USA
[2] Univ Southern Calif, Los Angeles, CA 90007 USA
DOI
10.1109/WACV56688.2023.00238
CLC Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing models that achieve state-of-the-art (SOTA) performance on both clean and adversarially-perturbed images rely on convolution operations conditioned with feature-wise linear modulation (FiLM) layers. These layers require additional parameters and are hyperparameter sensitive. They significantly increase training time, memory cost, and potential latency, which can be costly for resource-limited or real-time applications. In this paper, we present a fast learnable once-for-all adversarial training (FLOAT) algorithm, which, instead of the existing FiLM-based conditioning, presents a unique weight-conditioned learning that requires no additional layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training. In particular, we add configurable scaled noise to the weight tensors that enables a trade-off between clean and adversarial performance. Extensive experiments show that FLOAT can yield SOTA performance, improving both clean and perturbed image classification by up to ~6% and ~10%, respectively. Moreover, real hardware measurements show that FLOAT can reduce training time by up to 1.43x with up to 1.47x fewer model parameters in iso-hyperparameter settings compared to the FiLM-based alternatives. Additionally, to further improve memory efficiency, we introduce FLOAT sparse (FLOATS), a form of non-iterative model pruning, and provide a detailed empirical analysis yielding a three-way accuracy-robustness-complexity trade-off for this new class of pruned conditionally trained models.
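The abstract's core mechanism is adding configurable scaled noise to the weight tensors, with a conditioning knob that trades clean accuracy against adversarial robustness. A minimal NumPy sketch of that idea is shown below; the function name `float_conditioned_weights`, the noise model (Gaussian, scaled by the tensor's own standard deviation), and the parameters `lam` and `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def float_conditioned_weights(w, lam, alpha=0.1, rng=None):
    """Blend a weight tensor with scaled noise, gated by lam in [0, 1].

    lam = 0.0 reproduces the clean weights (accuracy mode);
    lam = 1.0 applies the full noise perturbation (robustness mode).
    Hypothetical sketch of FLOAT-style weight conditioning.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(w.shape).astype(w.dtype)
    # Scale the noise to the weight tensor's own spread, then gate by lam.
    return w + lam * alpha * w.std() * noise

# Example: the same stored weights serve both operating points at inference.
rng = np.random.default_rng(42)
w = rng.standard_normal((4, 4)).astype(np.float32)
w_clean = float_conditioned_weights(w, lam=0.0)   # identical to w
w_robust = float_conditioned_weights(w, lam=1.0)  # noise-perturbed variant
```

Because `lam` is just a scalar applied at inference time, no extra layers or per-mode weight copies are needed, which is consistent with the abstract's claim of negligible parameter and latency overhead versus FiLM-based conditioning.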
Pages: 2348 - 2357
Page count: 10
Related Papers
17 items
  • [1] Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free
    Wang, Haotao
    Chen, Tianlong
    Gui, Shupeng
    Hu, Ting-Kuei
    Liu, Ji
    Wang, Zhangyang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [2] GAAT: Group Adaptive Adversarial Training to Improve the Trade-Off Between Robustness and Accuracy
    Qian, Yaguan
    Liang, Xiaoyu
    Kang, Ming
    Wang, Bin
    Gu, Zhaoquan
    Wang, Xing
    Wu, Chunming
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (13)
  • [3] On the Trade-off between Adversarial and Backdoor Robustness
    Weng, Cheng-Hsin
    Lee, Yan-Ting
    Wu, Shan-Hung
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [4] Trade-off between Robustness and Accuracy of Vision Transformers
    Li, Yanxi
    Xu, Chang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 7558 - 7568
  • [5] Theoretically Principled Trade-off between Robustness and Accuracy
    Zhang, Hongyang
    Yu, Yaodong
    Jiao, Jiantao
    Xing, Eric P.
    El Ghaoui, Laurent
    Jordan, Michael I.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [6] Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks
    Kamath, Sandesh
    Deshpande, Amit
    Subrahmanyam, K. V.
    Balasubramanian, Vineeth N.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Trade-Off Between Robustness and Rewards Adversarial Training for Deep Reinforcement Learning Under Large Perturbations
    Huang, Jeffrey
    Choi, Ho Jin
    Figueroa, Nadia
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (12) : 8018 - 8025
  • [8] Mitigating Accuracy-Robustness Trade-Off Via Balanced Multi-Teacher Adversarial Distillation
    Zhao, S.
    Wang, X.
    Wei, X.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 1 - 14
  • [9] Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off
    Liu, Yu-An
    Zhang, Ruqing
    Zhang, Mingkun
    Chen, Wei
    de Rijke, Maarten
    Guo, Jiafeng
    Cheng, Xueqi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 8832 - 8840
  • [10] Dimensionality Reduction for Data Visualization and Linear Classification, and the Trade-off between Robustness and Classification Accuracy
    Becker, Martin
    Lippel, Jens
    Zielke, Thomas
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 6478 - 6485