Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization

Cited by: 0
Authors
Fazlyab, Mahyar [1 ]
Entesari, Taha [1 ]
Roy, Aniket [1 ]
Chellappa, Rama [1 ]
Affiliations
[1] Johns Hopkins Univ, Dept Elect & Comp Engn, Baltimore, MD 21218 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
To improve the robustness of deep classifiers against adversarial perturbations, many approaches have been proposed, such as designing new architectures with better robustness properties (e.g., Lipschitz-capped networks), or modifying the training process itself (e.g., min-max optimization, constrained learning, or regularization). These approaches, however, might not be effective at increasing the margin in the input (feature) space. In this paper, we propose a differentiable regularizer that is a lower bound on the distance of the data points to the classification boundary. The proposed regularizer requires knowledge of the model's Lipschitz constant along certain directions. To this end, we develop a scalable method for calculating guaranteed differentiable upper bounds on the Lipschitz constant of neural networks accurately and efficiently. The relative accuracy of the bounds prevents excessive regularization and allows for more direct manipulation of the decision boundary. Furthermore, our Lipschitz bounding algorithm exploits the monotonicity and Lipschitz continuity of the activation layers, and the resulting bounds can be used to design new layers with controllable bounds on their Lipschitz constant. Experiments on the MNIST, CIFAR-10, and Tiny-ImageNet data sets verify that our proposed algorithm obtains results that are competitive with, or improve upon, the state of the art.
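The core idea the abstract describes, lower-bounding the distance to the decision boundary using directional Lipschitz constants, can be illustrated with the standard Lipschitz margin certificate (as in Lipschitz-margin training, not the paper's specific regularizer). If for each class pair the difference of logits f_label - f_j has Lipschitz constant at most L_j, then the logit gap divided by L_j lower-bounds the perturbation needed to flip that pair. A minimal sketch, with hypothetical inputs:

```python
import numpy as np

def certified_radius(logits, lip, label):
    """Lower-bound the distance from an input to the decision boundary.

    logits: (K,) network outputs f(x)
    lip:    (K,) upper bounds on the Lipschitz constant of f_label - f_j
            (the entry at index `label` is unused)
    label:  predicted class index

    The certificate: no perturbation of norm less than
    min_j (f_label(x) - f_j(x)) / L_j can change the prediction.
    """
    margins = logits[label] - logits   # logit gaps f_label(x) - f_j(x)
    margins[label] = np.inf            # exclude the predicted class itself
    radii = margins / lip              # pairwise certified radii
    return float(np.min(radii))

# Illustrative 3-class example with assumed Lipschitz bounds.
f = np.array([3.0, 1.0, 0.5])   # logits at some input x
L = np.array([1.0, 2.0, 2.0])   # assumed per-pair Lipschitz upper bounds
r = certified_radius(f, L, label=0)   # min(2.0/2.0, 2.5/2.0) = 1.0
```

Tighter Lipschitz bounds L_j directly enlarge this certified radius, which is why the paper emphasizes accurate, differentiable Lipschitz estimation: a loose bound over-regularizes, while an accurate one lets training target the margin itself.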
Pages: 14
Related Papers
50 results
  • [1] Boosting certified robustness via an expectation-based similarity regularization
    Li, Jiawen
    Fang, Kun
    Huang, Xiaolin
    Yang, Jie
    [J]. Image and Vision Computing, 2024, 151
  • [2] Robustness margin maximization for inaccurate controller implementation
    Kobayashi, Y
    Asai, T
    [J]. ACC: PROCEEDINGS OF THE 2005 AMERICAN CONTROL CONFERENCE, VOLS 1-7, 2005, : 4000 - 4005
  • [3] Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective
    Zhang, Bohang
    Jiang, Du
    He, Di
    Wang, Liwei
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [4] DRF: Improving Certified Robustness via Distributional Robustness Framework
    Wang, Zekai
    Zhou, Zhengyu
    Liu, Weiwei
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, : 15752 - 15760
  • [5] Mitigating Transformer Overconfidence via Lipschitz Regularization
    Ye, Wenqian
    Ma, Yunsheng
    Cao, Xu
    Tang, Kun
    [J]. UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 2422 - 2432
  • [6] Certified Adversarial Robustness via Randomized Smoothing
    Cohen, Jeremy
    Rosenfeld, Elan
    Kolter, J. Zico
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [7] Average neighborhood margin maximization projection with smooth regularization for face recognition
    Liu, Xiao-Ming
    Wang, Zhao-Hui
    Feng, Zhi-Lin
    [J]. PROCEEDINGS OF 2008 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2008, : 401+
  • [8] Fast Margin Maximization via Dual Acceleration
    Ji, Ziwei
    Srebro, Nathan
    Telgarsky, Matus
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [9] Multiple instance learning via margin maximization
    Kundakcioglu, O. Erhun
    Seref, Onur
    Pardalos, Panos M.
    [J]. APPLIED NUMERICAL MATHEMATICS, 2010, 60 (04) : 358 - 369
  • [10] Ensemble Pruning via Quadratic Margin Maximization
    Martinez, Waldyn G.
    [J]. IEEE ACCESS, 2021, 9 : 48931 - 48951