Assessing Threat of Adversarial Examples on Deep Neural Networks

Cited by: 15
Authors
Graese, Abigail [1]
Rozsa, Andras [1]
Boult, Terrance E. [1]
Affiliations
[1] University of Colorado, Vision and Security Technology (VAST) Lab, Colorado Springs, CO 80907 USA
Funding
U.S. National Science Foundation
Keywords
DOI
10.1109/ICMLA.2016.44
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in handwritten digits on a scanned check being incorrectly classified while still looking normal to human viewers. This research assesses the extent to which adversarial examples pose a security threat when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring an image in a real-world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus, merely acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that, for text-driven classification, adversarial examples are an academic curiosity, not a security threat.
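The defenses highlighted in the abstract, averaging predictions over multiple crops and binarizing text images before classification, are simple enough to sketch directly. The following is a minimal illustrative sketch, not the authors' code: classify is a hypothetical stand-in for any digit classifier that returns class probabilities, the 28x28 input and 26x26 crop sizes assume an MNIST-style setup, and the 0.5 binarization threshold is an arbitrary choice.

    # Minimal sketch (not the authors' code): acquisition-style preprocessing,
    # text binarization and multi-crop averaging, applied before classification.
    import numpy as np

    def binarize(image, threshold=0.5):
        # Text binarization: map grayscale pixels to {0, 1}, discarding the
        # small perturbations that adversarial examples rely on.
        return (image >= threshold).astype(np.float32)

    def multi_crop_average(image, classify, crop=26, num_classes=10):
        # Average the classifier's probabilities over every crop of size
        # crop x crop; each crop shifts the perturbation pattern slightly.
        h, w = image.shape
        probs = np.zeros(num_classes)
        count = 0
        for top in range(h - crop + 1):
            for left in range(w - crop + 1):
                probs += classify(image[top:top + crop, left:left + crop])
                count += 1
        return probs / count

    # Usage with a placeholder classifier on a 28x28 "digit" image.
    def classify(patch):
        return np.full(10, 0.1)  # a real model's softmax output would go here

    digit = np.random.rand(28, 28).astype(np.float32)  # stand-in for an adversarial input
    preprocessed = binarize(digit)
    prediction = int(np.argmax(multi_crop_average(preprocessed, classify)))

The sketch only shows where these steps would sit in an inference pipeline; the paper's claim is that such preprocessing restores correct classification for most adversarial digit images.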
Pages: 69 - 74
Number of pages: 6
Related Papers
50 records in total
  • [22] Deep neural rejection against adversarial examples. Sotgiu, Angelo; Demontis, Ambra; Melis, Marco; Biggio, Battista; Fumera, Giorgio; Feng, Xiaoyi; Roli, Fabio. EURASIP Journal on Information Security, 2020, 2020 (01)
  • [23] EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. Chen, Pin-Yu; Sharma, Yash; Zhang, Huan; Yi, Jinfeng; Hsieh, Cho-Jui. Thirty-Second AAAI Conference on Artificial Intelligence / Thirtieth Innovative Applications of Artificial Intelligence Conference / Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, 2018: 10 - 17
  • [24] When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks. Yang, Chao-Han Huck; Liu, Yi-Chieh; Chen, Pin-Yu; Ma, Xiaoli; Tsai, Yi-Chang James. 2019 IEEE International Conference on Image Processing (ICIP), 2019: 3811 - 3815
  • [25] Deep Networks with RBF Layers to Prevent Adversarial Examples. Vidnerova, Petra; Neruda, Roman. Artificial Intelligence and Soft Computing, ICAISC 2018, Pt I, 2018, 10841: 257 - 266
  • [26] Audio Adversarial Examples Generation with Recurrent Neural Networks. Chang, Kuei-Huan; Huang, Po-Hao; Yu, Honggang; Jin, Yier; Wang, Ting-Chi. 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC 2020), 2020: 488 - 493
  • [27] Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection. Wang, Si; Liu, Wenye; Chang, Chip-Hong. Proceedings of the 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST), 2019
  • [28] Toward deep neural networks robust to adversarial examples, using augmented data importance perception. Chen, Zhiming; Xue, Wei; Tian, Weiwei; Wu, Yunhua; Hua, Bing. Journal of Electronic Imaging, 2022, 31 (06)
  • [29] On Assessing Vulnerabilities of the 5G Networks to Adversarial Examples. Zolotukhin, Mikhail; Miraghaei, Parsa; Zhang, Di; Hamalainen, Timo. IEEE Access, 2022, 10: 126285 - 126303
  • [30] Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information. Zhang, J.; Qian, W.; Cao, J.; Xu, D. Neural Computing and Applications, 2024, 36 (23): 14379 - 14394