Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey

Cited by: 46
Author
Ozdag, Mesut [1 ]
Affiliation
[1] Univ Cent Florida, 4000 Cent Florida Blvd, Orlando, FL 32816 USA
Keywords
deep learning; deep neural network; adversarial examples; security
DOI
10.1016/j.procs.2018.10.315
Chinese Library Classification: TP301 [Theory, Methods]
Discipline Code: 081202
Abstract
Deep learning has achieved great success in various types of applications in recent years. On the other hand, it has been found that deep neural networks (DNNs) can be easily fooled by adversarial input samples. This vulnerability raises major concerns in security-sensitive environments. Research in attacking and defending DNNs with adversarial examples has therefore drawn great attention. The goal of this paper is to review the types of adversarial attacks and defenses, describe the state-of-the-art methods for each group, and compare their results. In addition, we present some of the top-scoring submissions to the 2017 Neural Information Processing Systems (NIPS) adversarial competition, describe their solution models, and report their results. This competition was organized by Google Brain for research scientists to develop novel methods that both generate adversarial examples and defend against them, and its contribution to this era of machine learning and DNNs is significant. (C) 2018 The Authors. Published by Elsevier B.V.
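The adversarial examples the abstract refers to are typically crafted by perturbing an input in the direction of the loss gradient. A minimal sketch of one widely cited such method, the Fast Gradient Sign Method (FGSM), on a toy logistic-regression "network" (the weights, inputs, and function names here are illustrative assumptions, not taken from the survey):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM sketch: x_adv = x + eps * sign(dL/dx), where L is the
    cross-entropy loss of a logistic model with weights w, bias b."""
    p = sigmoid(w @ x + b)       # model's predicted probability of class 1
    grad_x = (p - y) * w         # gradient of cross-entropy wrt the input x
    return x + eps * np.sign(grad_x)

# Toy setup (illustrative values): the model classifies x correctly,
# then a small signed perturbation pushes it toward the wrong class.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])        # clean input, true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
p_clean = sigmoid(w @ x + b)     # confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)   # confidence drops on the perturbed input
```

The same sign-of-gradient step applies to deep networks, with the gradient computed by backpropagation instead of the closed form used here.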
Pages: 152-161
Page count: 10
Related Papers
50 records in total
  • [1] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    [J]. PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [2] A Survey of Attacks and Defenses for Deep Neural Networks
    Machooka, Daniel
    Yuan, Xiaohong
    Esterline, Albert
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE, CSR, 2023, : 254 - 261
  • [3] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [4] Defenses Against Byzantine Attacks in Distributed Deep Neural Networks
    Xia, Qi
    Tao, Zeyi
    Li, Qun
    [J]. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (03) : 2025 - 2035
  • [5] NetFense: Adversarial Defenses Against Privacy Attacks on Neural Networks for Graph Data
    Hsieh, I-Chung
    Li, Cheng-Te
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 796 - 809
  • [6] A Survey on Adversarial Attacks and Defenses for Deep Reinforcement Learning
    Liu, A.-S.
    Guo, J.
    Li, S.-M.
    Xiao, Y.-S.
    Liu, X.-L.
    Tao, D.-C.
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46 (08) : 1553 - 1576
  • [7] A Survey of Backdoor Attacks and Defenses on Neural Networks
    Wang, Xu-Tong
    Yin, Jie
    Liu, Chao-Ge
    Xu, Chen-Chen
    Huang, Hao
    Wang, Zhi
    Zhang, Fang-Jiao
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (08) : 1713 - 1743
  • [8] Trojan Attacks and Defenses on Deep Neural Networks
    Liu, Yingqi
    [J]. ProQuest Dissertations and Theses Global, 2022
  • [9] Adversarial attacks and defenses in deep learning for image recognition: A survey
    Wang, Jia
    Wang, Chengyu
    Lin, Qiuzhen
    Luo, Chengwen
    Wu, Chao
    Li, Jianqiang
    [J]. NEUROCOMPUTING, 2022, 514 : 162 - 181