Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters

Cited by: 8
Authors
Gu, Shuangchi [1 ]
Yi, Ping [1 ]
Zhu, Ting [2 ]
Yao, Yao [2 ]
Wang, Wei [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Cyber Secur, 800 Dongchuan Rd, Shanghai, Peoples R China
[2] Univ Maryland Baltimore Cty, Dept Comp Sci & Elect Engn, Baltimore, MD 21228 USA
Funding
US National Science Foundation; National Natural Science Foundation of China
Keywords
Normalizing Filter; Adversarial Example; Detection Framework
DOI
10.5220/0007370301640173
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks are vulnerable to adversarial examples: inputs modified with imperceptible but malicious perturbations. Most defense methods focus on tuning the DNN itself, whereas we propose a novel defense that modifies the input data in order to detect adversarial examples. We establish a detection framework based on normalizing filters that can partially erase these perturbations by smoothing the input image or reducing its color bit depth. The framework makes its decision by comparing the classification result of the original input with the results of multiple normalized inputs. Using several combinations of Gaussian blur, median blur, and depth reduction filters, the evaluation reaches a high detection rate and partially restores adversarial examples on the MNIST dataset. The whole detection framework is a low-cost, highly extensible strategy for defending DNNs.
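To make the detection logic concrete, below is a minimal sketch in Python with OpenCV and NumPy. The `classify` callable stands in for the protected DNN's forward pass, and the 3x3 kernels, the 4-bit depth, and the disagreement threshold are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np
import cv2  # OpenCV provides the Gaussian and median blur filters


def reduce_depth(img, bits=4):
    """Depth reduction filter: quantize an 8-bit image to `bits` bits."""
    step = 2 ** (8 - bits)
    return (img // step) * step


def is_adversarial(img, classify, threshold=1):
    """Flag `img` as adversarial when the label of the raw input disagrees
    with the labels of its normalized (filtered) copies.

    `classify` is a hypothetical function mapping an image to a class
    label; it stands in for any trained DNN classifier.
    """
    normalized = [
        cv2.GaussianBlur(img, (3, 3), 0),  # Gaussian blur filter
        cv2.medianBlur(img, 3),            # median blur filter
        reduce_depth(img, bits=4),         # depth reduction filter
    ]
    original_label = classify(img)
    disagreements = sum(classify(f) != original_label for f in normalized)
    # A clean input's label should survive mild smoothing, whereas an
    # adversarial perturbation is partially erased and the label flips.
    return disagreements >= threshold
```

Because the filters only preprocess the input, the protected network itself needs no retraining, which is what keeps the strategy low-cost and extensible.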
Pages: 164-173 (10 pages)