Assessing Threat of Adversarial Examples on Deep Neural Networks

Cited by: 15
Authors:
Graese, Abigail [1 ]
Rozsa, Andras [1 ]
Boult, Terrance E. [1 ]
Affiliations:
[1] Univ Colorado, Vis & Secur Technol VAST Lab, Colorado Springs, CO 80907 USA
Funding:
U.S. National Science Foundation
Keywords:
DOI:
10.1109/ICMLA.2016.44
CLC Classification:
TP18 [Artificial intelligence theory]
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in handwritten digits on a scanned check being incorrectly classified while appearing normal to human readers. This research assesses the extent to which adversarial examples pose a security threat when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real-world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus, just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text-driven classification, adversarial examples are an academic curiosity, not a security threat.
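As a rough illustration of the defenses summarized in the abstract (acquisition-style transformations, averaging over multiple crops, and binarization), the sketch below shows how such steps could be applied before classification. It is a minimal NumPy sketch, not the authors' implementation; the classify callable, crop size, and binarization threshold are hypothetical placeholders.

import numpy as np

def binarize(gray, threshold=0.5):
    # Hard-threshold a grayscale image in [0, 1]; the small pixel-level
    # perturbations that make up an adversarial example are largely discarded.
    return (gray >= threshold).astype(np.float32)

def average_over_crops(image, classify, crop_size=28, n_crops=10, seed=0):
    # Average class probabilities over several randomly shifted crops of a
    # slightly larger input; the jitter stands in for the acquisition
    # transforms (shift, rescale, noise) discussed in the abstract.
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    probs = []
    for _ in range(n_crops):
        y = rng.integers(0, h - crop_size + 1)
        x = rng.integers(0, w - crop_size + 1)
        probs.append(classify(binarize(image[y:y + crop_size, x:x + crop_size])))
    return np.mean(probs, axis=0)

Here classify would be any function that returns a probability vector for a single crop, for example a wrapped digit classifier; the averaged prediction is then used in place of a single-pass classification.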
Pages: 69-74
Number of pages: 6
Related Papers (50 in total):
  • [1] Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses
    Xu, Yonghao
    Du, Bo
    Zhang, Liangpei
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (02): 1604-1617
  • [2] Robustness of Deep Neural Networks in Adversarial Examples
    Teng, Da
    Song, Xiao
    Gong, Guanghong
    Han, Liang
    INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02): 123-133
  • [3] Interpretability Analysis of Deep Neural Networks With Adversarial Examples
    Dong Y.-P.
    Su H.
    Zhu J.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (01): 75-86
  • [4] Compound adversarial examples in deep neural networks
    Li, Yanchun
    Li, Zhetao
    Zeng, Li
    Long, Saiqin
    Huang, Feiran
    Ren, Kui
    INFORMATION SCIENCES, 2022, 613: 50-68
  • [5] Summary of Adversarial Examples Techniques Based on Deep Neural Networks
    Bai, Zhixu
    Wang, Hengjun
    Guo, Kexiang
    Computer Engineering and Applications, 57 (23): 61-70
  • [6] Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
    Xu, Weilin
    Evans, David
    Qi, Yanjun
    25TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2018), 2018
  • [7] ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples
    Choi, Seok-Hwan
    Shin, Jin-Myeong
    Liu, Peng
    Choi, Yoon-Ho
    IEEE ACCESS, 2022, 10: 33602-33615
  • [8] Detecting Adversarial Examples on Deep Neural Networks With Mutual Information Neural Estimation
    Gao, Song
    Wang, Ruxin
    Wang, Xiaoxuan
    Yu, Shui
    Dong, Yunyun
    Yao, Shaowen
    Zhou, Wei
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06): 5168-5181