ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions

Cited: 9
Authors
Zhao, Pu [1]
Xu, Kaidi [1]
Liu, Sijia [2]
Wang, Yanzhi [1]
Lin, Xue [1]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] IBM Res AI, Albany, NY USA
Funding
US National Science Foundation;
Keywords
DOI
10.1145/3287624.3288750
CLC Classification Number
TP301 [Theory, Methods];
Discipline Classification Code
081202;
Abstract
Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to legitimate inputs. Adversarial examples can cause a DNN to misclassify them as any target label. Various methods have been proposed in the literature to minimize different ℓp norms of the distortion, but a versatile framework covering all types of adversarial attacks has been lacking. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization problem, enabling effective minimization of various ℓp norms of the distortion, including the ℓ0, ℓ1, ℓ2, and ℓ∞ norms. The proposed framework thus unifies the methods for crafting ℓ0, ℓ1, ℓ2, and ℓ∞ attacks. Experimental results demonstrate that the proposed ADMM attacks achieve both a high attack success rate and minimal distortion compared with state-of-the-art attack methods.
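To make the ADMM splitting described in the abstract concrete, the sketch below implements an ℓ2-norm targeted attack with the classic ADMM alternation (primal δ-step, primal z-step, dual update). It is a minimal illustration, not the paper's exact algorithm: the toy linear classifier, the Carlini-Wagner-style margin loss, and the constants c, rho, and lr are all illustrative assumptions standing in for a real DNN and tuned hyperparameters.

```python
# Illustrative sketch of an ADMM-based L2 adversarial attack (assumptions:
# toy linear classifier, CW-style margin loss, hand-picked constants).
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "classifier": 3 classes over 5-dim inputs. A real DNN would
# replace this; the attack only needs logits and their gradients.
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)

def logits(x):
    return W @ x + b

x0 = rng.normal(size=5)               # original (legitimate) input
y_true = int(np.argmax(logits(x0)))
y_target = (y_true + 1) % 3           # hypothetical target label

def margin_loss_grad(x, target, kappa=0.0):
    """Carlini-Wagner-style margin loss and its gradient w.r.t. x."""
    z = logits(x)
    others = np.delete(np.arange(3), target)
    j = others[np.argmax(z[others])]  # strongest competing class
    if z[j] - z[target] <= -kappa:    # attack already succeeds with margin kappa
        return 0.0, np.zeros_like(x)
    return z[j] - z[target], W[j] - W[target]

# ADMM splitting:  minimize ||delta||_2 + c * L(x0 + z)   s.t.  delta = z.
c, rho, lr = 5.0, 1.0, 0.05           # illustrative constants
delta, zv, u = np.zeros(5), np.zeros(5), np.zeros(5)
for _ in range(300):
    # delta-step: proximal operator of (1/rho)*||.||_2 (block soft-thresholding).
    v = zv - u / rho
    nv = np.linalg.norm(v)
    delta = max(0.0, 1.0 - 1.0 / (rho * nv)) * v if nv > 0 else np.zeros(5)
    # z-step: gradient descent on c*L(x0+z) + (rho/2)*||delta - z + u/rho||^2.
    for _ in range(5):
        _, g = margin_loss_grad(x0 + zv, y_target)
        zv -= lr * (c * g - rho * (delta - zv + u / rho))
    # dual update enforcing the consensus constraint delta = z.
    u += rho * (delta - zv)

x_adv = x0 + zv
print("label:", y_true, "->", int(np.argmax(logits(x_adv))),
      "| L2 distortion:", round(float(np.linalg.norm(zv)), 3))
```

Swapping the δ-step's proximal operator is what would specialize the same loop to other norms, e.g. hard-thresholding for ℓ0, elementwise soft-thresholding for ℓ1, or a box projection for ℓ∞.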
Pages: 499-505
Page count: 7
Related Papers
50 records in total
  • [1] Adversarial Watermarking to Attack Deep Neural Networks
    Wang, Gengxing
    Chen, Xinyuan
    Xu, Chang
    2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019: 1962-1966
  • [2] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    Symmetry-Basel, 2021, 13(3)
  • [3] AdvAttackVis: An Adversarial Attack Visualization System for Deep Neural Networks
    Ding, Wei-jie
    Shen, Xuchen
    Yuan, Ying
    Mao, Ting-yun
    Sun, Guo-dao
    Chen, Li-li
    Chen, Bing-ting
    International Journal of Advanced Computer Science and Applications, 2024, 15(5): 383-391
  • [4] Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Choi, Daeseon
    2019 1st International Conference on Artificial Intelligence in Information and Communication (ICAIIC 2019), 2019: 399-404
  • [5] Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks
    Liu, Qi
    Liu, Tao
    Wen, Wujie
    Cyber Sensing 2018, 2018, 10630
  • [6] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    Pattern Recognition, 2022, 131
  • [7] Query efficient black-box adversarial attack on deep neural networks
    Bai, Yang
    Wang, Yisen
    Zeng, Yuyuan
    Jiang, Yong
    Xia, Shu-Tao
    Pattern Recognition, 2023, 133
  • [8] Invisible Adversarial Attack against Deep Neural Networks: An Adaptive Penalization Approach
    Wang, Zhibo
    Song, Mengkai
    Zheng, Siyan
    Zhang, Zhifei
    Song, Yang
    Wang, Qian
    IEEE Transactions on Dependable and Secure Computing, 2021, 18(3): 1474-1488
  • [9] AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack
    Kwon, Hyun
    Lee, Jun
    IEEE Access, 2024, 12: 5345-5356