An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

Cited by: 14
Authors
Zhao, Pu [1]
Liu, Sijia [2]
Wang, Yanzhi [1]
Lin, Xue [1]
Affiliations
[1] Northeastern Univ, Dept ECE, Boston, MA 02115 USA
[2] IBM Corp, Res AI, Armonk, NY 10504 USA
Funding
U.S. National Science Foundation;
Keywords
Deep Neural Networks; Adversarial Attacks; ADMM (Alternating Direction Method of Multipliers);
DOI
10.1145/3240508.3240639
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original legitimate inputs, can mislead a DNN into classifying them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with minimal added distortion. In the literature, the added distortions are usually measured by the L-0, L-1, L-2, and L-infinity norms, giving rise to L-0, L-1, L-2, and L-infinity attacks, respectively. However, the literature lacks a versatile framework that covers all of these attack types. This work for the first time unifies the generation of adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, so that L-0, L-1, L-2, and L-infinity attacks can all be implemented within one general framework with only minor modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are so far the strongest, achieving both a 100% attack success rate and the minimal distortion.
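To make the operator-splitting idea concrete, here is a minimal sketch (in LaTeX notation) of how such an attack can be posed for ADMM; the symbols below (x_0 for the legitimate input, \delta for the distortion, z for its auxiliary copy, g for a targeted attack loss, u for the scaled dual variable, \rho for the penalty parameter) are illustrative assumptions, not the paper's exact formulation:

\min_{\delta, z} \; \|\delta\|_p + g(x_0 + z) \quad \text{s.t.} \quad \delta = z

L_\rho(\delta, z, u) = \|\delta\|_p + g(x_0 + z) + \tfrac{\rho}{2}\|\delta - z + u\|_2^2 - \tfrac{\rho}{2}\|u\|_2^2

\delta^{k+1} = \mathrm{prox}_{\|\cdot\|_p / \rho}\big(z^k - u^k\big)
z^{k+1} = \arg\min_z \; g(x_0 + z) + \tfrac{\rho}{2}\|\delta^{k+1} - z + u^k\|_2^2
u^{k+1} = u^k + \delta^{k+1} - z^{k+1}

Under this splitting, only the \delta-update depends on the chosen norm: its proximal operator is hard-thresholding for L-0, soft-thresholding for L-1, vector shrinkage for L-2, and a projection-type update for L-infinity, while the z-update is a smooth subproblem that can be approximated by gradient steps through the DNN. Swapping the proximal operator is what lets a single ADMM loop realize all four attack types.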
Pages: 1065-1073
Page count: 9
Related Papers
(50 records in total)
  • [1] Reinforced Adversarial Attacks on Deep Neural Networks Using ADMM
    Zhao, Pu
    Xu, Kaidi
    Zhang, Tianyun
    Fardad, Makan
    Wang, Yanzhi
    Lin, Xue
    [J]. 2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018: 1169-1173
  • [2] Universal adversarial attacks on deep neural networks for medical image classification
    Hirano, Hokuto
    Minagi, Akinori
    Takemoto, Kazuhiro
    [J]. BMC MEDICAL IMAGING, 2021, 21 (01)
  • [3] Adversarial Attacks on Deep Neural Networks Based Modulation Recognition
    Liu, Mingqian
    Zhang, Zhenju
    Zhao, Nan
    Chen, Yunfei
    [J]. IEEE INFOCOM 2022 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2022
  • [4] A LOW-COMPLEXITY ADMM-BASED MASSIVE MIMO DETECTORS VIA DEEP NEURAL NETWORKS
    Tiba, Isayiyas Nigatu
    Zhang, Quan
    Jiang, Jing
    Wang, Yongchao
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021: 4930-4934
  • [5] pdlADMM: An ADMM-based framework for parallel deep learning training with efficiency
    Guan, Lei
    Yang, Zhihui
    Li, Dongsheng
    Lu, Xicheng
    [J]. NEUROCOMPUTING, 2021, 435: 264-272
  • [6] Detection of backdoor attacks using targeted universal adversarial perturbations for deep neural networks
    Qu, Yubin
    Huang, Song
    Chen, Xiang
    Wang, Xingya
    Yao, Yongming
    [J]. JOURNAL OF SYSTEMS AND SOFTWARE, 2024, 207
  • [7] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [8] Detecting adversarial example attacks to deep neural networks
    Carrara, Fabio
    Falchi, Fabrizio
    Caldelli, Roberto
    Amato, Giuseppe
    Fumarola, Roberta
    Becarelli, Rudy
    [J]. PROCEEDINGS OF THE 15TH INTERNATIONAL WORKSHOP ON CONTENT-BASED MULTIMEDIA INDEXING (CBMI), 2017
  • [9] Generalizing universal adversarial perturbations for deep neural networks
    Zhang, Yanghao
    Ruan, Wenjie
    Wang, Fu
    Huang, Xiaowei
    [J]. MACHINE LEARNING, 2023, 112: 1597-1626