ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions

Cited by: 9
Authors
Zhao, Pu [1 ]
Xu, Kaidi [1 ]
Liu, Sijia [2 ]
Wang, Yanzhi [1 ]
Lin, Xue [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] IBM Res AI, Albany, NY USA
Funding
US National Science Foundation;
Keywords
DOI
10.1145/3287624.3288750
CLC number
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to legitimate inputs. Adversarial examples can cause a DNN to misclassify them as any target label. Various methods have been proposed in the literature to minimize different l_p norms of the distortion, but a versatile framework covering all types of adversarial attacks has been lacking. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization problem, enabling effective minimization of various l_p norms of the distortion, including the l_0, l_1, l_2, and l_infinity norms. The proposed framework thus unifies the methods for crafting l_0, l_1, l_2, and l_infinity attacks. Experimental results demonstrate that the proposed ADMM attacks achieve both a high attack success rate and minimal distortion compared with state-of-the-art attack methods.
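The abstract only sketches the approach. As an illustration of the ADMM variable-splitting idea it describes, here is a minimal l_2-norm attack on a toy two-class linear classifier standing in for a DNN: the distortion is split into two copies (delta and z) coupled by a consensus constraint, so the norm term gets a closed-form update while the attack loss is handled by gradient steps. The model, the hinge loss, and every constant (c, rho, kappa, the step size) are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Toy stand-in for a DNN: a 2-class linear classifier (illustrative only;
# the paper's framework attacks deep networks and covers l0/l1/l2/linf).
rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 4)), np.zeros(2)
x0 = rng.normal(size=4)
target = 1 - int(np.argmax(W @ x0 + b))  # attack toward the losing class
kappa = 0.5                              # confidence margin (C&W-style hinge)

def hinge_grad(x):
    """Subgradient of max(logit_other - logit_target + kappa, 0)."""
    logits = W @ x + b
    other = 1 - target
    if logits[other] - logits[target] + kappa <= 0:
        return np.zeros_like(x)
    return W[other] - W[target]

# ADMM splitting of:  minimize ||z||_2^2 + c * hinge(x0 + delta)
#                     subject to delta = z
c, rho, lr = 5.0, 1.0, 0.005
delta, z, u = np.zeros(4), np.zeros(4), np.zeros(4)
best = None
for _ in range(300):
    # z-update (closed form): argmin_z ||z||^2 + rho/2 ||delta - z + u||^2
    z = rho * (delta + u) / (2.0 + rho)
    # delta-update: a few gradient steps on its subproblem
    for _ in range(10):
        delta -= lr * (c * hinge_grad(x0 + delta) + rho * (delta - z + u))
    # dual update
    u += delta - z
    # keep the smallest successful distortion seen so far
    if np.argmax(W @ (x0 + delta) + b) == target:
        if best is None or np.linalg.norm(delta) < np.linalg.norm(best):
            best = delta.copy()

print("success:", best is not None, "l2 distortion:", np.linalg.norm(best))
```

The closed-form z-update is what the splitting buys: for the l_2 norm it is a simple shrinkage, and the paper's point is that swapping in the proximal update for other norms (hard thresholding for l_0, soft thresholding for l_1, clipping for l_infinity) yields one unified attack framework.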
Pages: 499-505
Page count: 7
Related papers
50 records in total
  • [31] Huang, Xiaowei; Kroening, Daniel; Ruan, Wenjie; Sharp, James; Sun, Youcheng; Thamo, Emese; Wu, Min; Yi, Xinping. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. COMPUTER SCIENCE REVIEW, 2020, 37.
  • [32] Chen, Fang; Wang, Jian; Liu, Han; Kong, Wentao; Zhao, Zhe; Ma, Longfei; Liao, Hongen; Zhang, Daoqiang. Frequency constraint-based adversarial attack on deep neural networks for medical image classification. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 164.
  • [33] Yang, Dong; Chen, Wei; Wei, Songjie. DTFA: Adversarial attack with discrete cosine transform noise and target features on deep neural networks. IET IMAGE PROCESSING, 2023, 17(5): 1464-1477.
  • [34] Ahmadi, Morteza Ali; Dianat, Rouhollah; Amirkhani, Hossein. An adversarial attack detection method in deep neural networks based on re-attacking approach. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80(7): 10985-11014.
  • [35] Kuang, Xiaohui; Liu, Hongyi; Wang, Ye; Zhang, Qikun; Zhang, Quanxin; Zheng, Jun. A CMA-ES-Based Adversarial Attack on Black-Box Deep Neural Networks. IEEE ACCESS, 2019, 7: 172938-172947.
  • [37] Zhou, Mo; Wang, Le; Niu, Zhenxing; Zhang, Qilin; Zheng, Nanning; Hua, Gang. Adversarial Attack and Defense in Deep Ranking. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46(8): 5306-5324.
  • [38] Mo, Xiaoxing; Zhang, Leo Yu; Sun, Nan; Luo, Wei; Gao, Shang. Backdoor Attack on Deep Neural Networks in Perception Domain. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2023.
  • [39] Su, Jiawei; Vargas, Danilo Vasconcellos; Sakurai, Kouichi. One Pixel Attack for Fooling Deep Neural Networks. IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2019, 23(5): 828-841.
  • [40] He, Honglu; Zhu, Zhiying; Zhang, Xinpeng. Adaptive Backdoor Attack against Deep Neural Networks. CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136(3): 2617-2633.