Assessing Threat of Adversarial Examples on Deep Neural Networks

Cited by: 15
Authors
Graese, Abigail [1 ]
Rozsa, Andras [1 ]
Boult, Terrance E. [1 ]
Affiliations
[1] Univ Colorado, Vis & Secur Technol VAST Lab, Colorado Springs, CO 80907 USA
Funding
U.S. National Science Foundation
DOI
10.1109/ICMLA.2016.44
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks face a potential security threat from adversarial examples: inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in handwritten digits on a scanned check being incorrectly classified while looking normal to human readers. This research assesses the extent to which adversarial examples pose a security threat when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur when acquiring an image in a real-world application, such as using a scanner to capture the digits of a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus, merely acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that, for text-driven classification, adversarial examples are an academic curiosity, not a security threat.
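The binarization effect described in the abstract can be illustrated with a minimal sketch (not the paper's code, and the toy digit, pixel values, and ±0.1 perturbation bound are assumptions for illustration): a small-magnitude adversarial perturbation is erased by thresholding whenever it fails to push any pixel across the threshold.

```python
import random

# Illustrative sketch: ordinary text binarization can neutralize small
# adversarial perturbations, because thresholding discards any noise that
# does not push a pixel across the threshold.

random.seed(0)
SIZE = 28

# A toy "handwritten digit": background pixels 0.0, stroke pixels 1.0
# (a crude vertical stroke, like a "1").
clean = [[1.0 if 13 <= x <= 14 and 4 <= y <= 23 else 0.0
          for x in range(SIZE)] for y in range(SIZE)]

# Adversarial perturbations are typically small in magnitude (here +/-0.1),
# clipped back to the valid [0, 1] pixel range.
adversarial = [[min(1.0, max(0.0, p + random.uniform(-0.1, 0.1)))
                for p in row] for row in clean]

def binarize(img, threshold=0.5):
    """Threshold each pixel, as a check-scanning pipeline might."""
    return [[1.0 if p > threshold else 0.0 for p in row] for row in img]

# The perturbation never flips a pixel across the 0.5 threshold, so the
# binarized adversarial image equals the binarized clean image.
assert binarize(adversarial) == binarize(clean)
print("binarization neutralized the perturbation")
```

A classifier applied after this preprocessing step therefore sees the same input whether or not the perturbation was added, which is the mechanism the paper exploits for text-driven classification.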
Pages: 69-74
Page count: 6