Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples

Cited by: 6
|
Authors
Sun, Guangling [1 ]
Su, Yuying [1 ]
Qin, Chuan [2 ]
Xu, Wenbo [1 ]
Lu, Xiaofeng [1 ]
Ceglowski, Andrzej [3 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 20044, Peoples R China
[2] Univ Shanghai Sci & Technol, Sch Opt Elect & Comp Engn, Shanghai 200093, Peoples R China
[3] Monash Univ, Dept Accounting, Melbourne, Vic 3145, Australia
Funding
Natural Science Foundation of Shanghai;
DOI
10.1155/2020/8319249
CLC number
T [Industrial Technology];
Subject classification
08;
Abstract
Although Deep Neural Networks (DNNs) have achieved great success in various applications, investigations have increasingly shown DNNs to be highly vulnerable when adversarial examples are used as input. Here, we present a comprehensive defense framework to protect DNNs against adversarial examples. First, we present statistical and minor-alteration detectors to filter out adversarial examples contaminated by noticeable and unnoticeable perturbations, respectively. Then, we ensemble the detectors, a deep Residual Generative Network (ResGN), and an adversarially trained targeted network to construct a complete defense framework. In this framework, the ResGN is our previously proposed network used to remove adversarial perturbations, and the adversarially trained targeted network is a network learned through adversarial training. Specifically, once the detectors determine an input example to be adversarial, it is cleaned by the ResGN and then classified by the adversarially trained targeted network; otherwise, it is classified directly by this network. We empirically evaluate the proposed complete defense on the ImageNet dataset. The results confirm robustness against current representative attack methods, including the fast gradient sign method, randomized fast gradient sign method, basic iterative method, universal adversarial perturbations, the DeepFool method, and the Carlini & Wagner method.
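The decision flow the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the component names (`statistical_detector`, `minor_alteration_detector`, `resgn_denoise`, `robust_classify`) are hypothetical stand-ins for the two detectors, the ResGN denoiser, and the adversarially trained targeted network.

```python
def is_adversarial(x, statistical_detector, minor_alteration_detector):
    """Flag x if either detector fires: the statistical detector targets
    noticeable perturbations, the minor-alteration detector unnoticeable ones."""
    return statistical_detector(x) or minor_alteration_detector(x)


def defend_and_classify(x, statistical_detector, minor_alteration_detector,
                        resgn_denoise, robust_classify):
    """If the detector ensemble flags x as adversarial, clean it with the
    ResGN before classification; otherwise classify it directly with the
    adversarially trained targeted network."""
    if is_adversarial(x, statistical_detector, minor_alteration_detector):
        x = resgn_denoise(x)   # remove adversarial perturbations
    return robust_classify(x)  # adversarially trained targeted network
```

Note that both branches end at the same adversarially trained classifier, so the framework degrades gracefully if a detector misfires on a benign input.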
Pages: 17
Related Papers
50 records
  • [41] Digital Watermark Perturbation for Adversarial Examples to Fool Deep Neural Networks
    Feng, Shiyu
    Feng, Feng
    Xu, Xiao
    Wang, Zheng
    Hu, Yining
    Xie, Lizhe
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [42] Retrieval-Augmented Convolutional Neural Networks against Adversarial Examples
    Zhao, Jake
    Cho, Kyunghyun
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11555 - 11563
  • [43] Advances in Brain-Inspired Deep Neural Networks for Adversarial Defense
    Li, Ruyi
    Ke, Ming
    Dong, Zhanguo
    Wang, Lubin
    Zhang, Tielin
    Du, Minghua
    Wang, Gang
    ELECTRONICS, 2024, 13 (13)
  • [44] QNAD: Quantum Noise Injection for Adversarial Defense in Deep Neural Networks
    Kundu, Shamik
    Choudhury, Navnil
    Das, Sanjay
    Raha, Arnab
    Basu, Kanad
    2024 IEEE INTERNATIONAL SYMPOSIUM ON HARDWARE ORIENTED SECURITY AND TRUST, HOST, 2024, : 1 - 11
  • [45] Comparing Speed Reduction of Adversarial Defense Systems on Deep Neural Networks
    Bowman, Andrew
    Yang, Xin
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST), 2021,
  • [46] Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks
    Liu, Qi
    Liu, Tao
    Wen, Wujie
    CYBER SENSING 2018, 2018, 10630
  • [47] An active learning framework for adversarial training of deep neural networks
    Ghosh, Susmita
    Chatterjee, Abhiroop
    Fiondella, Lance
    Neural Computing and Applications, 2025, 37 (9) : 6849 - 6876
  • [48] DeepShuffle: A Lightweight Defense Framework against Adversarial Fault Injection Attacks on Deep Neural Networks in Multi-Tenant Cloud-FPGA
    Luo, Yukui
    Rakin, Adnan Siraj
    Fan, Deliang
    Xu, Xiaolin
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 3293 - 3310
  • [49] Advocating for Multiple Defense Strategies Against Adversarial Examples
    Araujo, Alexandre
    Meunier, Laurent
    Pinot, Rafael
    Negrevergne, Benjamin
    ECML PKDD 2020 WORKSHOPS, 2020, 1323 : 165 - 177
  • [50] On the Defense Against Adversarial Examples Beyond the Visible Spectrum
    Ortiz, Anthony
    Fuentes, Olac
    Rosario, Dalton
    Kiekintveld, Christopher
    2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 553 - 558