Trust Region Based Adversarial Attack on Neural Networks

Cited by: 27
Authors:
Yao, Zhewei [1]
Gholami, Amir [1]
Xu, Peng [2]
Keutzer, Kurt [1]
Mahoney, Michael W. [1]
Affiliations:
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Stanford Univ, Stanford, CA 94305 USA
DOI: 10.1109/CVPR.2019.01161
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Deep neural networks are quite vulnerable to adversarial perturbations. Current state-of-the-art adversarial attack methods typically require either very time-consuming hyper-parameter tuning or many iterations to solve an optimization-based attack. To address this, we present a new family of trust-region-based adversarial attacks, with the goal of computing adversarial perturbations efficiently. We propose several attacks based on variants of the trust-region optimization method. We test the proposed methods on the CIFAR-10 and ImageNet datasets using several models, including AlexNet, ResNet-50, VGG-16, and DenseNet-121. Our methods achieve results comparable to the Carlini-Wagner (CW) attack, but with a speed-up of up to 37x for the VGG-16 model on a Titan Xp GPU. For ResNet-50 on ImageNet, we bring classification accuracy down to less than 0.1% with at most 1.5% relative L-infinity (or L2) perturbation, requiring only 1.02 seconds, compared to 27.04 seconds for the CW attack. We have open-sourced our method, which can be accessed at [1].
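To make the trust-region idea concrete, the following is a minimal, hypothetical PyTorch sketch of one adaptive, first-order trust-region attack loop. The function name, hyper-parameters (eps0, eps_max, sigma, eta), and the simple accept/reject rule are illustrative assumptions, not the paper's actual algorithm; the authors' released implementation, including higher-order variants, is available at [1].

    # Hypothetical sketch of an adaptive first-order trust-region attack.
    # Assumes a PyTorch classifier `model` mapping images in [0, 1] to logits.
    # This is NOT the paper's exact algorithm; see the authors' code at [1].
    import torch
    import torch.nn.functional as F

    def trust_region_attack(model, x, y, eps0=1e-3, eps_max=0.05,
                            sigma=0.7, eta=0.9, steps=20):
        """Untargeted attack: raise the loss within an adaptive L-inf radius."""
        x_adv = x.clone().detach()
        eps = eps0
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            # First-order model of the loss: a sign step of radius eps
            # maximizes the linearized objective under the L-inf constraint.
            step = eps * grad.sign()
            predicted = (grad * step).sum()          # predicted loss increase
            with torch.no_grad():
                x_new = (x_adv + step).clamp(0.0, 1.0)
                actual = F.cross_entropy(model(x_new), y) - loss
                rho = actual / (predicted + 1e-12)   # model/reality agreement
                if rho > eta:                        # good agreement: accept,
                    x_adv = x_new                    # and grow the radius
                    eps = min(eps / sigma, eps_max)
                elif rho > 0:                        # partial: accept, keep radius
                    x_adv = x_new
                else:                                # poor: reject, shrink radius
                    eps = eps * sigma
                done = model(x_adv).argmax(1).ne(y).all()
            x_adv = x_adv.detach()
            if done:
                break                                # every input misclassified
        return x_adv

The adaptive radius is the point of the construction: by growing or shrinking eps based on how well the local model predicts the true loss change, the attack avoids the per-image step-size tuning that fixed-step methods need, which is consistent with the efficiency argument in the abstract.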
Pages: 11342-11351 (10 pages)
Related papers (50 in total):
  • [31] Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation
    Liu, Ganlin
    Huang, Xiaowei
    Yi, Xinping
    COMPUTER VISION - ECCV 2022, PT V, 2022, 13665: 227-243
  • [32] Query efficient black-box adversarial attack on deep neural networks
    Bai, Yang
    Wang, Yisen
    Zeng, Yuyuan
    Jiang, Yong
    Xia, Shu-Tao
    PATTERN RECOGNITION, 2023, 133
  • [33] Dual-Targeted adversarial example in evasion attack on graph neural networks
    Kwon, Hyun
    Kim, Dae-Jin
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [34] RTA3: A real-time adversarial attack on recurrent neural networks
    Serrano, Christopher R.
    Sylla, Pape
    Gao, Sicun
    Warren, Michael A.
    2020 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2020), 2020: 27-33
  • [35] Invisible Adversarial Attack against Deep Neural Networks: An Adaptive Penalization Approach
    Wang, Zhibo
    Song, Mengkai
    Zheng, Siyan
    Zhang, Zhifei
    Song, Yang
    Wang, Qian
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (03): 1474-1488
  • [36] SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic
    Lin, Xuanwei
    Dong, Chen
    Liu, Ximeng
    Zhang, Yuanyuan
    2022 22ND IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING (CCGRID 2022), 2022: 366-375
  • [38] AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack
    Kwon, Hyun
    Lee, Jun
    IEEE ACCESS, 2024, 12: 5345-5356
  • [39] Saliency Map-Based Local White-Box Adversarial Attack Against Deep Neural Networks
    Liu, Haohan
    Zuo, Xingquan
    Huang, Hai
    Wan, Xing
    ARTIFICIAL INTELLIGENCE, CICAI 2022, PT II, 2022, 13605: 3-14
  • [40] GradMDM: Adversarial Attack on Dynamic Networks
    Pan, Jianhong
    Foo, Lin Geng
    Zheng, Qichen
    Fan, Zhipeng
    Rahmani, Hossein
    Ke, Qiuhong
    Liu, Jun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (09): 11374-11381