Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks

Cited by: 6
Authors
Kherchouche, Anouar [1 ,2 ,3 ]
Fezza, Sid Ahmed [2 ,3 ]
Hamidouche, Wassim [1 ]
Deforges, Olivier [1 ]
Affiliations
[1] Univ Rennes, INSA Rennes, CNRS, IETR UMR 6164, Rennes, France
[2] Natl Inst Telecommun, Oran, Algeria
[3] ICT, Oran, Algeria
Keywords
Adversarial examples; deep neural networks; detection; natural scene statistics;
DOI
10.1109/mmsp48831.2020.9287056
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Codes
081202 ; 0835 ;
Abstract
Deep neural networks (DNNs) have been adopted in a wide spectrum of applications. However, it has been demonstrated that they are vulnerable to adversarial examples (AEs): carefully-crafted perturbations added to a clean input image that fool DNNs into classifying it incorrectly. It is therefore imperative to develop methods for detecting AEs so that DNNs can be defended. In this paper, we propose to characterize adversarial perturbations through the use of natural scene statistics. We demonstrate that these statistical properties are altered by the presence of adversarial perturbations. Based on this finding, we design a classifier that exploits these scene statistics to determine whether an input is adversarial. The proposed method has been evaluated against four prominent adversarial attacks on three standard datasets. The experimental results show that it achieves high detection accuracy, even against strong attacks, while maintaining a low false positive rate.
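The pipeline the abstract outlines — extract natural scene statistics (NSS) from the input image, then classify those statistics as natural or adversarial — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes BRISQUE-style mean-subtracted contrast-normalized (MSCN) coefficients as the NSS features and an SVM as the detector, and the helper names (`mscn_coefficients`, `nss_features`, `fit_detector`) are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

def mscn_coefficients(image, sigma=7/6):
    """Mean-subtracted contrast-normalized (MSCN) coefficients: the
    local-normalization step common to NSS models such as BRISQUE."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                  # local mean
    var = gaussian_filter(image ** 2, sigma) - mu ** 2  # local variance
    sigma_map = np.sqrt(np.abs(var))                    # local std; abs guards round-off
    return (image - mu) / (sigma_map + 1.0)             # +1 avoids division by zero

def nss_features(image):
    """Summary statistics of the MSCN distribution; adversarial perturbations
    tend to distort these. (Illustrative 4-D feature vector, not the paper's
    exact feature set.)"""
    mscn = mscn_coefficients(image).ravel()
    return np.array([mscn.mean(), mscn.var(), skew(mscn), kurtosis(mscn)])

def fit_detector(clean_images, adversarial_images):
    """Train a binary SVM labeling an input as natural (0) or adversarial (1)."""
    X = np.array([nss_features(im)
                  for im in list(clean_images) + list(adversarial_images)])
    y = np.array([0] * len(clean_images) + [1] * len(adversarial_images))
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

# Usage: given grayscale image arrays `clean` and `adv`, flag a test image
# before it reaches the DNN classifier.
# detector = fit_detector(clean, adv)
# is_adv = detector.predict(nss_features(test_image).reshape(1, -1))[0] == 1
```

A key property of this design is that the detector operates purely on the input image's statistics, so it needs no access to the protected DNN's weights or gradients.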
Pages: 6
Related Papers
50 records in total
  • [31] Deep neural rejection against adversarial examples
    Sotgiu, Angelo
    Demontis, Ambra
    Melis, Marco
    Biggio, Battista
    Fumera, Giorgio
    Feng, Xiaoyi
    Roli, Fabio
    EURASIP JOURNAL ON INFORMATION SECURITY, 2020, 2020 (01)
  • [32] HYBRID DEFENSE FOR DEEP NEURAL NETWORKS: AN INTEGRATION OF DETECTING AND CLEANING ADVERSARIAL PERTURBATIONS
    Fan, Weiqi
    Sun, Guangling
    Su, Yuying
    Liu, Zhi
    Lu, Xiaofeng
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2019, : 210 - 215
  • [33] WHEN CAUSAL INTERVENTION MEETS ADVERSARIAL EXAMPLES AND IMAGE MASKING FOR DEEP NEURAL NETWORKS
    Yang, Chao-Han Huck
    Liu, Yi-Chieh
    Chen, Pin-Yu
    Ma, Xiaoli
    Tsai, Yi-Chang James
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3811 - 3815
  • [34] EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
    Chen, Pin-Yu
    Sharma, Yash
    Zhang, Huan
    Yi, Jinfeng
    Hsieh, Cho-Jui
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 10 - 17
  • [35] Reading Text in Natural Scene Images via Deep Neural Networks
    Zhao, Haifeng
    Hu, Yong
    Zhang, Jinxia
    PROCEEDINGS 2017 4TH IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR), 2017, : 43 - 48
  • [36] Deep Networks with RBF Layers to Prevent Adversarial Examples
    Vidnerova, Petra
    Neruda, Roman
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2018, PT I, 2018, 10841 : 257 - 266
  • [37] Audio Adversarial Examples Generation with Recurrent Neural Networks
    Chang, Kuei-Huan
    Huang, Po-Hao
    Yu, Honggang
    Jin, Yier
    Wang, Ting-Chi
    2020 25TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2020, 2020, : 488 - 493
  • [38] Detecting chaos in adversarial examples
    Deniz, Oscar
    Pedraza, Anibal
    Bueno, Gloria
    CHAOS SOLITONS & FRACTALS, 2022, 163
  • [39] Detecting spread spectrum watermarks using natural scene statistics
    Seshadrinathan, K
    Sheikh, HR
    Bovik, AC
    2005 International Conference on Image Processing (ICIP), Vols 1-5, 2005, : 1681 - 1684
  • [40] Toward deep neural networks robust to adversarial examples, using augmented data importance perception
    Chen, Zhiming
    Xue, Wei
    Tian, Weiwei
    Wu, Yunhua
    Hua, Bing
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)