Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey

Cited by: 46
Author(s): Ozdag, Mesut [1]
Affiliation: [1] Univ Cent Florida, 4000 Cent Florida Blvd, Orlando, FL 32816 USA
Keywords: deep learning; deep neural network; adversarial examples; security
DOI: 10.1016/j.procs.2018.10.315
Chinese Library Classification: TP301 [Theory, Methods]
Discipline code: 081202
Abstract
Deep learning has achieved great success across a wide range of applications in recent years. However, deep neural networks (DNNs) can be easily fooled by adversarial input samples, a vulnerability that raises major concerns in security-sensitive environments. Consequently, research on attacking and defending DNNs with adversarial examples has drawn great attention. The goal of this paper is to review the types of adversarial attacks and defenses, describe the state-of-the-art methods in each group, and compare their results. In addition, we present some of the top-scoring submissions to the 2017 Neural Information Processing Systems (NIPS) adversarial competition, describe their solution models, and report their results. This competition was organized by Google Brain so that research scientists could develop novel methods for generating adversarial examples as well as defenses against them; its contribution to the current era of machine learning and DNNs is significant. (C) 2018 The Authors. Published by Elsevier B.V.
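To illustrate the kind of attack such surveys cover, the sketch below applies a fast-gradient-sign-style perturbation (in the spirit of FGSM, one of the canonical attacks typically reviewed) to a toy linear classifier. The linear model, the epsilon value, and the numbers involved are illustrative assumptions, not taken from the paper; for a linear score w·x, the gradient with respect to the input is simply w, so stepping against sign(w) is the worst-case bounded perturbation.

```python
import numpy as np

# Toy linear "classifier" (an assumption for illustration, not the
# paper's model): score = w . x, positive score -> class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = 0.1 * w / np.linalg.norm(w)  # clean input, weakly aligned with w -> class 1

def predict(v):
    return int(w @ v > 0)

# FGSM-style step: move each input coordinate by epsilon against the
# sign of the gradient of the class-1 score (which is just w here).
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the prediction
```

The flip works because the attack exploits the L1 norm of the gradient: the clean score is 0.1·||w||_2, while the perturbation subtracts epsilon·||w||_1, and in high dimensions ||w||_1 is much larger than ||w||_2. This is the same mechanism that makes imperceptible pixel-level changes effective against DNNs.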
Pages: 152-161 (10 pages)

Related Papers (50 total)
  • [31] Adversarial examples: A survey of attacks and defenses in deep learning-enabled cybersecurity systems
    Macas, Mayra
    Wu, Chunming
    Fuertes, Walter
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [32] Adversarial attacks and defenses in explainable artificial intelligence: A survey
    Baniecki, Hubert
    Biecek, Przemyslaw
    INFORMATION FUSION, 2024, 107
  • [33] Adversarial attacks and defenses in Speaker Recognition Systems: A survey
    Lan, Jiahe
    Zhang, Rui
    Yan, Zheng
    Wang, Jie
    Chen, Yu
    Hou, Ronghui
    JOURNAL OF SYSTEMS ARCHITECTURE, 2022, 127
  • [34] Survey of Attacks and Defenses against SGX
    Zhang, Yahui
    Zhao, Min
    Li, Tingquan
    Han, Huan
    PROCEEDINGS OF 2020 IEEE 5TH INFORMATION TECHNOLOGY AND MECHATRONICS ENGINEERING CONFERENCE (ITOEC 2020), 2020, : 1492 - 1496
  • [35] Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
    Akhtar, Naveed
    Mian, Ajmal
    Kardan, Navid
    Shah, Mubarak
    IEEE ACCESS, 2021, 9 : 155161 - 155196
  • [36] Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks
    Chen, Zitao
    Dash, Pritam
    Pattabiraman, Karthik
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 689 - 703
  • [37] Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks
    Guo, Haoqiang
    Peng, Lu
    Zhang, Jian
    Qi, Fang
    Duan, Lide
    2019 TENTH INTERNATIONAL GREEN AND SUSTAINABLE COMPUTING CONFERENCE (IGSC), 2019
  • [38] Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks
    Das, Nilaksh
    Park, Haekyu
    Wang, Zijie J.
    Hohman, Fred
    Firstman, Robert
    Rogers, Emily
    Chau, Duen Horng
    2020 IEEE VISUALIZATION CONFERENCE - SHORT PAPERS (VIS 2020), 2020, : 271 - 275
  • [39] Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters
    El-Allami, Rida
    Marchisio, Alberto
    Shafique, Muhammad
    Alouani, Ihsen
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 774 - 779
  • [40] Reinforced Adversarial Attacks on Deep Neural Networks Using ADMM
    Zhao, Pu
    Xu, Kaidi
    Zhang, Tianyun
    Fardad, Makan
    Wang, Yanzhi
    Lin, Xue
    2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018, : 1169 - 1173