Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks

Cited by: 6
Authors
Kherchouche, Anouar [1 ,2 ,3 ]
Fezza, Sid Ahmed [2 ,3 ]
Hamidouche, Wassim [1 ]
Deforges, Olivier [1 ]
Affiliations
[1] Univ Rennes, INSA Rennes, CNRS, IETR UMR 6164, Rennes, France
[2] Natl Inst Telecommun, Oran, Algeria
[3] ICT, Oran, Algeria
Keywords
Adversarial examples; deep neural networks; detection; natural scene statistics;
DOI
10.1109/mmsp48831.2020.9287056
CLC Number
TP31 [Computer Software];
Subject Classification Code
081202; 0835;
Abstract
Deep neural networks (DNNs) have been adopted in a wide spectrum of applications. However, it has been demonstrated that they are vulnerable to adversarial examples (AEs): carefully crafted perturbations added to a clean input image that cause the DNN to classify it incorrectly. It is therefore imperative to develop detection methods for AEs that allow DNNs to be defended. In this paper, we propose to characterize adversarial perturbations through the use of natural scene statistics. We demonstrate that these statistical properties are altered by the presence of adversarial perturbations. Based on this finding, we design a classifier that exploits these scene statistics to determine whether an input is adversarial or not. The proposed method has been evaluated against four prominent adversarial attacks and on three standard datasets. The experimental results show that the proposed detection method achieves high detection accuracy, even against strong attacks, while maintaining a low false positive rate.
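The abstract does not spell out the feature extraction pipeline, but natural scene statistics are commonly computed BRISQUE-style: derive mean-subtracted contrast-normalized (MSCN) coefficients, summarize them with a few statistics, and feed the resulting feature vector to a binary classifier. The Python sketch below illustrates that idea only; the helper names (mscn_coefficients, nss_features), the specific moments, and the SVM detector are illustrative assumptions, not the authors' exact method.

# Hypothetical sketch of an NSS-based adversarial-example detector.
# Assumption: BRISQUE-style MSCN coefficients stand in for the paper's
# natural scene statistics, and an SVM stands in for its classifier;
# the authors' exact features and detector may differ.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

def mscn_coefficients(gray, sigma=7.0 / 6.0, eps=1e-8):
    # Mean-subtracted contrast-normalized (MSCN) map of a grayscale image.
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.abs(var)) + eps)

def nss_features(gray):
    # Feature vector: simple moments of the MSCN map and of products of
    # horizontally and vertically adjacent coefficients.
    mscn = mscn_coefficients(gray.astype(np.float64))
    maps = [mscn, mscn[:, :-1] * mscn[:, 1:], mscn[:-1, :] * mscn[1:, :]]
    feats = []
    for m in maps:
        feats += [m.mean(), m.var(), np.abs(m).mean(), (m ** 3).mean()]
    return np.array(feats)

# Usage sketch (X_clean, X_adv: arrays of grayscale images of shape (H, W)):
# X = np.stack([nss_features(x) for x in np.concatenate([X_clean, X_adv])])
# y = np.concatenate([np.zeros(len(X_clean)), np.ones(len(X_adv))])
# detector = SVC(kernel="rbf").fit(X, y)  # 1 = adversarial, 0 = clean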
Pages: 6
Related Papers
50 records in total
  • [21] DLR: Adversarial examples detection and label recovery for deep neural networks
    Han, Keji
    Ge, Yao
    Wang, Ruchuan
    Li, Yun
    PATTERN RECOGNITION LETTERS, 2025, 188 : 133 - 139
  • [22] Neuron Selecting: Defending Against Adversarial Examples in Deep Neural Networks
    Zhang, Ming
    Li, Hu
    Kuang, Xiaohui
    Pang, Ling
    Wu, Zhendong
    INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2019), 2020, 11999 : 613 - 629
  • [23] Creating Simple Adversarial Examples for Speech Recognition Deep Neural Networks
    Redden, Nathaniel
    Bernard, Ben
    Straub, Jeremy
    2019 IEEE 16TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SENSOR SYSTEMS WORKSHOPS (MASSW 2019), 2019, : 58 - 62
  • [24] Digital Watermark Perturbation for Adversarial Examples to Fool Deep Neural Networks
    Feng, Shiyu
    Feng, Feng
    Xu, Xiao
    Wang, Zheng
    Hu, Yining
    Xie, Lizhe
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [25] Detecting backdoor in deep neural networks via intentional adversarial perturbations
    Xue, Mingfu
    Wu, Yinghao
    Wu, Zhiyu
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    INFORMATION SCIENCES, 2023, 634 : 564 - 577
  • [26] GradFuzz: Fuzzing deep neural networks with gradient vector coverage for adversarial examples
    Park, Leo Hyun
    Chung, Soochang
    Kim, Jaeuk
    Kwon, Taekyoung
    NEUROCOMPUTING, 2023, 522 : 165 - 180
  • [27] Detecting Operational Adversarial Examples for Reliable Deep Learning
    Zhao, Xingyu
    Huang, Wei
    Schewe, Sven
    Dong, Yi
    Huang, Xiaowei
    51ST ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS - SUPPLEMENTAL VOL (DSN 2021), 2021, : 5 - 6
  • [28] Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples
    Sun, Guangling
    Su, Yuying
    Qin, Chuan
    Xu, Wenbo
    Lu, Xiaofeng
    Ceglowski, Andrzej
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [29] Natural Scene Text Detection using Deep Neural Networks
    Mayank
    Bhowmick, Swapnamoy
    Kotecha, Disha
    Rege, Priti P.
    2021 6TH INTERNATIONAL CONFERENCE FOR CONVERGENCE IN TECHNOLOGY (I2CT), 2021,
  • [30] Deep neural rejection against adversarial examples
    Sotgiu, Angelo
    Demontis, Ambra
    Melis, Marco
    Biggio, Battista
    Fumera, Giorgio
    Feng, Xiaoyi
    Roli, Fabio
    EURASIP JOURNAL ON INFORMATION SECURITY, 2020