Reinforced Adversarial Attacks on Deep Neural Networks Using ADMM

Cited by: 0
Authors
Zhao, Pu [1 ]
Xu, Kaidi [1 ]
Zhang, Tianyun [2 ]
Fardad, Makan [2 ]
Wang, Yanzhi [1 ]
Lin, Xue [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] Syracuse Univ, Coll Engn & Comp Sci, Syracuse, NY USA
Funding
US National Science Foundation;
Keywords
Deep Neural Networks; Adversarial Attacks; ADMM (Alternating Direction Method of Multipliers);
DOI
Not available
CLC Number
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
As deep learning penetrates wide application domains, it is essential to evaluate the robustness of deep neural networks (DNNs) under adversarial attacks, especially in security-critical applications. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples based on ADMM (Alternating Direction Method of Multipliers). The framework can be adapted to implement both L2 and L0 attacks with minor changes. Our ADMM attacks require less distortion than C&W attacks to cause misclassification. The ADMM attack also breaks defenses such as defensive distillation and adversarial training, and provides strong attack transferability.
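To illustrate the kind of variable splitting the abstract describes, below is a minimal sketch of an ADMM-based L2 attack. This is not the authors' exact formulation: it assumes a PyTorch classifier `model` mapping images in [0, 1] to logits, a targeted attack with a C&W-style margin loss, and hypothetical hyperparameters (`rho`, `c`, iteration counts, `lr`).

```python
import torch
import torch.nn.functional as F

def cw_margin_loss(logits, target, kappa=0.0):
    """C&W-style margin: drive the target-class logit above the
    largest non-target logit by at least kappa."""
    one_hot = F.one_hot(target, num_classes=logits.size(-1)).bool()
    target_logit = logits[one_hot]
    best_other = logits.masked_fill(one_hot, float("-inf")).max(dim=-1).values
    return torch.clamp(best_other - target_logit + kappa, min=0.0).sum()

def admm_l2_attack(model, x, target, rho=1.0, c=1.0,
                   outer_iters=50, inner_iters=20, lr=0.01):
    """Split  min_z ||z - x||^2 + c * g(z)  into
       min ||z - x||^2 + c * g(w)  s.t.  z = w  (scaled dual form)."""
    z = x.clone()
    w = x.clone()
    u = torch.zeros_like(x)  # scaled dual variable
    for _ in range(outer_iters):
        # z-update has a closed form:
        # argmin_z ||z - x||^2 + (rho/2) * ||z - w + u||^2
        z = (2.0 * x + rho * (w - u)) / (2.0 + rho)
        # w-update: approximate the argmin of attack loss plus the
        # augmented-Lagrangian penalty with a few Adam steps.
        w = w.detach().requires_grad_(True)
        opt = torch.optim.Adam([w], lr=lr)
        for _ in range(inner_iters):
            opt.zero_grad()
            loss = c * cw_margin_loss(model(w.clamp(0.0, 1.0)), target)
            loss = loss + 0.5 * rho * ((z - w + u) ** 2).sum()
            loss.backward()
            opt.step()
        w = w.detach()
        # dual ascent on the consensus constraint z = w
        u = u + z - w
    return z.clamp(0.0, 1.0)  # adversarial example in pixel range [0, 1]
```

Under this splitting, only the z-update depends on the distortion measure; adapting the sketch to an L0 attack would plausibly replace the closed-form quadratic z-update with the proximal step of an L0 penalty (hard-thresholding of z - x), which matches the "minor changes" the abstract claims for switching between L2 and L0 attacks.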
Pages: 1169-1173
Number of pages: 5
Related Papers
(50 records in total)
  • [21] Liu, Jia; Jin, Yaochu. Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks. 2019 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2019), 2019: 1778-1785.
  • [22] Krithivasan, Sarada; Sen, Sanchari; Raghunathan, Anand. Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020, 39(11): 4129-4141.
  • [23] Siddique, Ayesha; Hoque, Khaza Anuarul. Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE 2022), 2022: 364-369.
  • [24] Qiu, Pengfei; Wang, Qian; Wang, Dongsheng; Lyu, Yongqiang; Lu, Zhaojun; Qu, Gang. Mitigating Adversarial Attacks for Deep Neural Networks by Input Deformation and Augmentation. 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC 2020), 2020: 157-162.
  • [25] Wang, Siyue; Wang, Xiao; Zhao, Pu; Wen, Wujie; Kaeli, David; Chin, Peter; Lin, Xue. Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks. 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) Digest of Technical Papers, 2018.
  • [26] Zhou, Yan; Kantarcioglu, Murat; Xi, Bowei. Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 2020, 11413.
  • [27] Narodytska, Nina; Kasiviswanathan, Shiva. Simple Black-Box Adversarial Attacks on Deep Neural Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017: 1310-1318.
  • [28] Liu, Yi-Ling; Lomuscio, Alessio. MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
  • [29] Zoppi, Tommaso; Ceccarelli, Andrea. Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring. IEEE Access, 2021, 9: 150579-150591.
  • [30] Li, Xiaoting; Chen, Lingwei; Zhang, Jinquan; Larus, James; Wu, Dinghao. Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.