HYBRID DEFENSE FOR DEEP NEURAL NETWORKS: AN INTEGRATION OF DETECTING AND CLEANING ADVERSARIAL PERTURBATIONS

Cited by: 3
Authors
Fan, Weiqi [1 ]
Sun, Guangling [1 ]
Su, Yuying [1 ]
Liu, Zhi [1 ]
Lu, Xiaofeng [1 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial perturbations; Hybrid defense; Deep neural network; Computer vision;
DOI
10.1109/ICMEW.2019.00-85
Chinese Library Classification (CLC) number
TP3 [computing technology, computer technology];
Discipline classification code
0812;
Abstract
Deep neural networks (DNNs) have achieved significant success in computer vision. However, recent investigations have shown that DNN models are highly vulnerable to adversarial examples at the input. Defending against adversarial examples is therefore essential for improving the robustness of DNN models. In this paper, we present a hybrid defense framework that integrates detecting and cleaning adversarial perturbations to protect DNNs. Specifically, the detecting part consists of a statistical detector and a Gaussian noise injection detector, each adapted to different perturbation characteristics for inspecting adversarial examples, and the cleaning part is a deep residual generative network (ResGN) that removes or mitigates adversarial perturbations. The parameters of ResGN are optimized by minimizing a joint loss comprising a pixel loss, a texture loss and a task loss. In experiments on ImageNet, the comprehensive results validate the robustness of our approach against current representative attacks.
Pages: 210-215
Number of pages: 6
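
The abstract describes the two components only at a high level, so the following PyTorch-style sketch illustrates one plausible reading, not the authors' implementation. The noise-injection detector flags inputs whose predicted label is unstable under added Gaussian noise, and the joint loss combines pixel, texture and task terms as the abstract states. The specific forms chosen here (an MSE pixel term, a Gram-matrix texture term, a cross-entropy task term) and the values of sigma, n_samples, tau and the lam_* weights are illustrative assumptions.

    # Illustrative sketch only (not the paper's released code).
    import torch
    import torch.nn.functional as F

    def noise_injection_detector(model, x, sigma=0.05, n_samples=8, tau=0.5):
        """Flag inputs whose label flips under injected Gaussian noise.
        sigma, n_samples and tau are hypothetical settings."""
        with torch.no_grad():
            base = model(x).argmax(dim=1)
            agree = torch.zeros_like(base, dtype=torch.float)
            for _ in range(n_samples):
                noisy = x + sigma * torch.randn_like(x)
                agree += (model(noisy).argmax(dim=1) == base).float()
        # Low prediction agreement under noise suggests an adversarial input.
        return agree / n_samples < tau

    def gram_matrix(feat):
        # (N, C, H, W) feature map -> (N, C, C) normalized Gram matrix.
        n, c, h, w = feat.shape
        f = feat.view(n, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def joint_loss(cleaned, clean, labels, classifier, features,
                   lam_pix=1.0, lam_tex=1.0, lam_task=1.0):
        """Joint loss of the kind used to train ResGN: pixel + texture + task.
        The exact loss forms and weights are assumptions."""
        # Pixel loss: distance between the cleaned image and the clean original.
        pixel = F.mse_loss(cleaned, clean)
        # Texture loss: match Gram matrices of deep features of both images.
        texture = F.mse_loss(gram_matrix(features(cleaned)),
                             gram_matrix(features(clean)))
        # Task loss: the protected classifier should recover the true label.
        task = F.cross_entropy(classifier(cleaned), labels)
        return lam_pix * pixel + lam_tex * texture + lam_task * task

Here model and classifier stand for the protected DNN returning logits, and features for any fixed feature extractor; the detector and the loss can be used independently, mirroring the detect-then-clean pipeline the abstract outlines.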