Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks

Cited by: 5
Authors
Kherchouche, Anouar [1 ,2 ,3 ]
Fezza, Sid Ahmed [2 ,3 ]
Hamidouche, Wassim [1 ]
Deforges, Olivier [1 ]
Affiliations
[1] Univ Rennes, INSA Rennes, CNRS, IETR UMR 6164, Rennes, France
[2] Natl Inst Telecommun, Oran, Algeria
[3] ICT, Oran, Algeria
Keywords
Adversarial examples; deep neural networks; detection; natural scene statistics
DOI
10.1109/mmsp48831.2020.9287056
CLC classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Deep neural networks (DNNs) have been adopted in a wide spectrum of applications. However, it has been demonstrated that they are vulnerable to adversarial examples (AEs): carefully crafted perturbations added to a clean input image. These AEs fool DNNs into classifying them incorrectly. It is therefore imperative to develop detection methods for AEs that allow DNNs to be defended. In this paper, we propose to characterize adversarial perturbations through the use of natural scene statistics. We demonstrate that these statistical properties are altered by the presence of adversarial perturbations. Based on this finding, we design a classifier that exploits these scene statistics to determine whether an input is adversarial or not. The proposed method has been evaluated against four prominent adversarial attacks and on three standard datasets. The experimental results show that the proposed detection method achieves high detection accuracy, even against strong attacks, while maintaining a low false positive rate.
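To illustrate the idea in the abstract, the sketch below computes a common natural-scene-statistics representation: mean-subtracted contrast-normalized (MSCN) coefficients, whose distribution tends to deviate from its natural shape when an image carries adversarial perturbations. This is a minimal, hypothetical sketch of one NSS feature pipeline, not the paper's exact feature set or classifier; the function names, the Gaussian window width, and the three summary statistics are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """MSCN map: subtract a local Gaussian mean and divide by local contrast.

    For natural images the MSCN coefficients follow a characteristic
    heavy-tailed distribution; adversarial noise disturbs that shape.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                       # local mean
    var = gaussian_filter(image**2, sigma) - mu**2           # local variance
    local_contrast = np.sqrt(np.abs(var))
    return (image - mu) / (local_contrast + c)               # c avoids division by zero

def nss_features(image):
    """Summary statistics of the MSCN map (illustrative choice of features).

    A classifier (e.g., an SVM) trained on such features over clean and
    adversarial images can flag inputs whose statistics look unnatural.
    """
    mscn = mscn_coefficients(image)
    return np.array([
        mscn.mean(),
        mscn.var(),
        np.mean(np.abs(mscn))**2 / (np.mean(mscn**2) + 1e-12),  # shape proxy
    ])
```

In practice, features like these would be extracted from both clean and attacked training images, and a binary classifier fitted on them would serve as the detector placed in front of the DNN.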
Pages: 6