Trust Region Based Adversarial Attack on Neural Networks

Cited by: 27
Authors
Yao, Zhewei [1 ]
Gholami, Amir [1 ]
Xu, Peng [2 ]
Keutzer, Kurt [1 ]
Mahoney, Michael W. [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Stanford Univ, Stanford, CA 94305 USA
DOI
10.1109/CVPR.2019.01161
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Networks are quite vulnerable to adversarial perturbations. Current state-of-the-art adversarial attack methods typically require very time-consuming hyper-parameter tuning, or many iterations to solve an optimization-based adversarial attack. To address this problem, we present a new family of trust region based adversarial attacks, with the goal of computing adversarial perturbations efficiently. We propose several attacks based on variants of the trust region optimization method. We test the proposed methods on the Cifar-10 and ImageNet datasets using several different models, including AlexNet, ResNet-50, VGG-16, and DenseNet-121. Our methods achieve results comparable to the Carlini-Wagner (CW) attack, but with a significant speedup of up to 37x for the VGG-16 model on a Titan Xp GPU. For the case of ResNet-50 on ImageNet, we can bring its classification accuracy down to less than 0.1% with at most 1.5% relative L-infinity (or L-2) perturbation, requiring only 1.02 seconds as compared to 27.04 seconds for the CW attack. We have open sourced our method, which can be accessed at [1].
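The core trust-region idea sketched in the abstract is an adaptively sized step: the perturbation radius grows when a local (first-order) model of the loss predicts the actual loss change well, and shrinks when it does not. The sketch below is only an illustration of that generic scheme under assumed names and parameters (`grad_fn`, `loss_fn`, `eps0`, `sigma`); it is not the paper's exact algorithm.

```python
import numpy as np

def trust_region_attack(x, grad_fn, loss_fn, eps0=0.01, eps_max=0.1,
                        sigma=0.9, steps=20):
    """Illustrative trust-region-style attack (hypothetical sketch).

    Each iteration solves a linearized subproblem on the current
    L-infinity ball (gradient-sign step of radius eps), then compares
    the actual loss increase with the first-order prediction to decide
    whether to accept the step and how to resize the trust region.
    """
    x_adv = x.copy()
    eps = eps0
    for _ in range(steps):
        g = grad_fn(x_adv)
        step = eps * np.sign(g)           # maximizer of <g, d> over ||d||_inf <= eps
        predicted = float(np.sum(g * step))  # first-order predicted loss increase
        actual = loss_fn(x_adv + step) - loss_fn(x_adv)
        if predicted > 0 and actual / predicted > sigma:
            x_adv = x_adv + step
            eps = min(2.0 * eps, eps_max)  # local model accurate: enlarge region
        else:
            eps = eps / 2.0                # local model poor: shrink region, retry
    return x_adv
```

Because the radius adapts automatically, such a scheme avoids the per-image step-size tuning that the abstract identifies as the main cost of earlier attacks.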
Pages: 11342-11351
Page count: 10
Related Papers
(50 records total)
  • [1] ADVERSARIAL WATERMARKING TO ATTACK DEEP NEURAL NETWORKS
    Wang, Gengxing
    Chen, Xinyuan
    Xu, Chang
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1962 - 1966
  • [2] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    SYMMETRY-BASEL, 2021, 13 (03):
  • [3] Conformalized Adversarial Attack Detection for Graph Neural Networks
    Ennadir, Sofiane
    Alkhatib, Amr
    Bostrom, Henrik
    Vazirgiannis, Michalis
    CONFORMAL AND PROBABILISTIC PREDICTION WITH APPLICATIONS, VOL 204, 2023, 204 : 311 - 323
  • [4] Cocktail Universal Adversarial Attack on Deep Neural Networks
    Li, Shaoxin
    Li, Xiaofeng
    Che, Xin
    Li, Xintong
    Zhang, Yong
    Chu, Lingyang
    COMPUTER VISION - ECCV 2024, PT LXV, 2025, 15123 : 396 - 412
  • [5] Imperceptible Adversarial Attack via Invertible Neural Networks
    Chen, Zihan
    Wang, Ziyue
    Huang, Jun-Jie
    Zhao, Wentao
    Liu, Xiao
    Guan, Dejian
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1, 2023, : 414 - 424
  • [6] A Reliable Approach for Generating Realistic Adversarial Attack via Trust Region-Based Optimization
    Dhamija, Lovi
    Bansal, Urvashi
    ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2024, 49 (09) : 13203 - 13220
  • [7] ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions
    Zhao, Pu
    Xu, Kaidi
    Liu, Sijia
    Wang, Yanzhi
    Lin, Xue
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 499 - 505
  • [8] Dynamic Programming-Based White Box Adversarial Attack for Deep Neural Networks
    Aggarwal, Swati
    Mittal, Anshul
    Aggarwal, Sanchit
    Singh, Anshul Kumar
    AI, 2024, 5 (03) : 1216 - 1234
  • [9] Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem
    Ma, Jiaqi
    Deng, Junwei
    Mei, Qiaozhu
    WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2022, : 675 - 685
  • [10] Task and Model Agnostic Adversarial Attack on Graph Neural Networks
    Sharma, Kartik
    Verma, Samidha
    Medya, Sourav
    Bhattacharya, Arnab
    Ranu, Sayan
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 15091 - 15099