Trust Region Based Adversarial Attack on Neural Networks

Cited by: 27
Authors
Yao, Zhewei [1 ]
Gholami, Amir [1 ]
Xu, Peng [2 ]
Keutzer, Kurt [1 ]
Mahoney, Michael W. [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Stanford Univ, Stanford, CA 94305 USA
Keywords
DOI
10.1109/CVPR.2019.01161
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Deep Neural Networks are quite vulnerable to adversarial perturbations. Current state-of-the-art adversarial attack methods typically require very time-consuming hyper-parameter tuning, or require many iterations to solve an optimization-based adversarial attack. To address this problem, we present a new family of trust region based adversarial attacks, with the goal of computing adversarial perturbations efficiently. We propose several attacks based on variants of the trust region optimization method. We test the proposed methods on CIFAR-10 and ImageNet datasets using several different models, including AlexNet, ResNet-50, VGG-16, and DenseNet-121. Our methods achieve results comparable to the Carlini-Wagner (CW) attack, but with a significant speedup of up to 37x for the VGG-16 model on a Titan Xp GPU. For the case of ResNet-50 on ImageNet, we can bring its classification accuracy down to less than 0.1% with at most 1.5% relative L-infinity (or L-2) perturbation, requiring only 1.02 seconds as compared to 27.04 seconds for the CW attack. We have open-sourced our method, which can be accessed at [1].
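The core idea sketched in the abstract can be illustrated with a toy example. The sketch below is not the authors' released implementation; it is a minimal NumPy illustration of the generic trust-region pattern on a linear classifier: at each step, a classification margin is linearized and minimized over an L2 ball of radius eps (the trust region), and eps is grown or shrunk depending on how well the linear model predicted the actual decrease. All names (`trust_region_attack`, `eta`, `sigma_inc`, `sigma_dec`) and hyper-parameter values are assumptions for illustration only.

```python
import numpy as np

def margin_loss(W, x, true_label):
    """True-class logit minus best other logit; the attack succeeds when <= 0."""
    z = W @ x
    other = np.delete(z, true_label)
    return z[true_label] - other.max()

def margin_grad(W, x, true_label):
    """Gradient of margin_loss w.r.t. x for a linear classifier z = W @ x."""
    z = W @ x
    other_idx = int(np.argmax(np.delete(z, true_label)))
    if other_idx >= true_label:      # re-align index after np.delete
        other_idx += 1
    return W[true_label] - W[other_idx]

def trust_region_attack(W, x, true_label, eps0=0.1, max_iter=50,
                        eta=0.9, sigma_inc=1.2, sigma_dec=0.5):
    """Illustrative trust-region attack sketch (hypothetical, not the paper's code)."""
    x_adv = x.copy()
    eps = eps0
    for _ in range(max_iter):
        m = margin_loss(W, x_adv, true_label)
        if m <= 0:                   # misclassified: attack succeeded
            break
        g = margin_grad(W, x_adv, true_label)
        gn = np.linalg.norm(g)
        if gn < 1e-12:
            break
        step = -eps * g / gn         # minimizer of the linearized margin on the eps-ball
        pred_dec = eps * gn          # decrease predicted by the linear model
        actual_dec = m - margin_loss(W, x_adv + step, true_label)
        rho = actual_dec / pred_dec  # agreement between model and reality
        if rho >= eta:               # good agreement: accept step, grow region
            x_adv = x_adv + step
            eps *= sigma_inc
        else:                        # poor agreement: reject step, shrink region
            eps *= sigma_dec
    return x_adv
```

The adaptive radius is what distinguishes this family from fixed-step methods such as iterative FGSM: when the local linearization is accurate, the region grows and the attack takes larger steps, which is one intuition for the speedups the abstract reports.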
Pages: 11342 - 11351
Number of pages: 10
Related papers
50 records in total
  • [21] Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks
    Alarab, Ismail
    Prakoonwit, Simant
    NEURAL PROCESSING LETTERS, 2022, 54 (03) : 1805 - 1821
  • [22] Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Choi, Daeseon
    2019 1ST INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION (ICAIIC 2019), 2019, : 399 - 404
  • [23] Black-box Adversarial Attack and Defense on Graph Neural Networks
    Li, Haoyang
    Di, Shimin
    Li, Zijian
    Chen, Lei
    Cao, Jiannong
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 1017 - 1030
  • [24] Adversarial Label-Flipping Attack and Defense for Graph Neural Networks
    Zhang, Mengmei
    Hu, Linmei
    Shi, Chuan
    Wang, Xiao
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 791 - 800
  • [25] Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks
    Liu, Qi
    Liu, Tao
    Wen, Wujie
    CYBER SENSING 2018, 2018, 10630
  • [26] An effective targeted label adversarial attack on graph neural networks by strategically allocating the attack budget
    Cao, Feilong
    Chen, Qiyang
    Ye, Hailiang
    KNOWLEDGE-BASED SYSTEMS, 2024, 293
  • [27] Adversarial attack defense algorithm based on convolutional neural network
    Zhang, Chengyuan
    Wang, Ping
NEURAL COMPUTING & APPLICATIONS, 2023, 36 (17) : 9723 - 9735
  • [28] A DoS attack detection method based on adversarial neural network
    Li, Yang
    Wu, Haiyan
    PEERJ COMPUTER SCIENCE, 2024, 10
  • [29] Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient
    Liang, Ling
    Hu, Xing
    Deng, Lei
    Wu, Yujie
    Li, Guoqi
    Ding, Yufei
    Li, Peng
    Xie, Yuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (05) : 2569 - 2583
  • [30] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    PATTERN RECOGNITION, 2022, 131