Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters

Cited by: 8
Authors
Gu, Shuangchi [1 ]
Yi, Ping [1 ]
Zhu, Ting [2 ]
Yao, Yao [2 ]
Wang, Wei [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Cyber Secur, 800 Dongchuan Rd, Shanghai, Peoples R China
[2] Univ Maryland Baltimore Cty, Dept Comp Sci & Elect Engn, Baltimore, MD 21228 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Normalizing Filter; Adversarial Example; Detection Framework;
DOI
10.5220/0007370301640173
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep neural networks are vulnerable to adversarial examples: inputs modified with unnoticeable but malicious perturbations. Most defense methods focus on tuning the DNN itself; we instead propose a defense that modifies the input data in order to detect adversarial examples. We establish a detection framework based on normalizing filters that can partially erase those perturbations by smoothing the input image or reducing its color depth. The framework makes its decision by comparing the classification results of the original input and multiple normalized copies. Using several combinations of a Gaussian blur filter, a median blur filter, and a depth reduction filter, the evaluation reaches a high detection rate and partially restores adversarial examples on the MNIST dataset. The whole detection framework is a low-cost, highly extensible strategy for defending DNNs.
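The comparison scheme the abstract describes can be summarized in a short sketch. The Python fragment below is an illustrative reconstruction, not the authors' code: the predict callable, the filter parameters (sigma, kernel size, bit depth), and the disagreement threshold are all assumptions that would need to be calibrated on clean validation data.

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def depth_reduction(x, bits=4):
        """Quantize pixel values in [0, 1] down to 2**bits levels."""
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    # Hypothetical filter bank of "normalizing filters"; parameter
    # values here are placeholders, not the paper's tuned settings.
    FILTERS = [
        lambda x: gaussian_filter(x, sigma=1.0),  # Gaussian blur
        lambda x: median_filter(x, size=2),       # median blur
        lambda x: depth_reduction(x, bits=4),     # depth reduction
    ]

    def detect_adversarial(predict, image, threshold=1.0):
        """Flag `image` as adversarial if any normalized copy moves the
        model's output more than `threshold` in L1 distance.

        `predict` is assumed to map one image to a softmax probability
        vector; `threshold` would be chosen from clean-data statistics.
        """
        p_orig = predict(image)
        for f in FILTERS:
            p_norm = predict(f(image))
            if np.abs(p_orig - p_norm).sum() > threshold:
                return True  # predictions diverge -> likely adversarial
        return False

    # Hypothetical usage with a Keras-style model returning softmax scores:
    # is_adv = detect_adversarial(lambda x: model.predict(x[None])[0],
    #                             test_image, threshold=1.2)

Because clean images are largely invariant under mild smoothing or quantization while adversarial perturbations are not, a large prediction shift after filtering is the detection signal; the same structure extends to any additional filter appended to the bank.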
Pages: 164-173
Page count: 10