GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization

Cited by: 13
Authors
Lee, Sungyoon [1 ]
Kim, Hoki [2 ]
Lee, Jaewook [2 ]
Affiliations
[1] Korea Inst Adv Study KIAS, Ctr Artificial Intelligence & Nat Sci, Seoul 02455, South Korea
[2] Seoul Natl Univ, Dept Ind Engn, Seoul 08826, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Adversarial robustness; defense against adversarial attacks; randomized neural networks; directional analysis;
DOI
10.1109/TPAMI.2022.3169217
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning is vulnerable to adversarial examples. Many defenses based on randomized neural networks have been proposed to address this vulnerability, but they fail to achieve robustness against attacks that use proxy gradients, such as the Expectation over Transformation (EOT) attack. We investigate the effect of proxy-gradient adversarial attacks on randomized neural networks and demonstrate that their success depends strongly on the directional distribution of the network's loss gradients. In particular, we show that proxy gradients become less effective as the gradient directions become more scattered. Motivated by this observation, we propose Gradient Diversity (GradDiv) regularizations, which minimize the concentration of the gradient directions to build robust randomized neural networks. Our experiments on MNIST, CIFAR10, and STL10 show that the proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods. Moreover, our method effectively reduces the transferability among sample models of the randomized neural network.
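The abstract describes GradDiv only at a high level: penalize how concentrated the loss-gradient directions of the randomized network's sample models are. The PyTorch sketch below shows one plausible instantiation, assuming concentration is measured as the mean pairwise cosine similarity among the input gradients of n sampled models; the helper name graddiv_penalty and the way the model list is obtained are illustrative assumptions, not the authors' actual implementation, whose GradDiv variants may measure concentration differently.

```python
# Hedged sketch of a GradDiv-style regularizer (not the authors' code).
# Idea: sample several instances of the randomized network, compute each
# instance's loss gradient w.r.t. the input, and penalize how aligned
# (concentrated) those gradient directions are across instances.
import torch
import torch.nn.functional as F

def graddiv_penalty(models, x, y):
    """Mean pairwise cosine similarity among per-model input gradients.

    models: list of sampled instances of the randomized network
            (hypothetical; e.g., different dropout masks or noise draws).
    x, y:   input batch and labels.
    """
    grads = []
    for m in models:
        x_ = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(m(x_), y)
        # create_graph=True so the penalty stays differentiable w.r.t.
        # the model parameters during training.
        g, = torch.autograd.grad(loss, x_, create_graph=True)
        grads.append(g.flatten(1))                 # (B, D)
    G = F.normalize(torch.stack(grads), dim=-1)    # (n, B, D) unit directions
    n = G.shape[0]
    cos = torch.einsum('ibd,jbd->ijb', G, G)       # pairwise cosines, (n, n, B)
    off_diag = cos.sum(dim=(0, 1)) - n             # drop the n diagonal 1's
    return (off_diag / (n * (n - 1))).mean()       # average over pairs, batch
```

In training, one would minimize task_loss + lam * graddiv_penalty(models, x, y) for some weight lam (a hypothetical hyperparameter here). Driving the mean pairwise cosine similarity down scatters the sampled gradient directions, which is exactly the property the abstract links to weakening EOT-style proxy-gradient attacks.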
Pages: 2645-2651
Number of pages: 7