HYBRID DEFENSE FOR DEEP NEURAL NETWORKS: AN INTEGRATION OF DETECTING AND CLEANING ADVERSARIAL PERTURBATIONS

Cited by: 3
Authors
Fan, Weiqi [1 ]
Sun, Guangling [1 ]
Su, Yuying [1 ]
Liu, Zhi [1 ]
Lu, Xiaofeng [1 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial perturbations; Hybrid defense; Deep neural network; Computer vision;
DOI
10.1109/ICMEW.2019.00-85
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep neural networks (DNNs) have achieved significant success in computer vision. However, recent investigations have shown that DNN models are highly vulnerable to adversarial examples. Defending against adversarial examples is therefore essential for improving the robustness of DNN models. In this paper, we present a hybrid defense framework that integrates detecting and cleaning adversarial perturbations to protect DNNs. Specifically, the detecting part consists of a statistical detector and a Gaussian noise injection detector, each adapted to different perturbation characteristics, to inspect adversarial examples; the cleaning part is a deep residual generative network (ResGN) that removes or mitigates the adversarial perturbations. The parameters of ResGN are optimized by minimizing a joint loss comprising a pixel loss, a texture loss, and a task loss. In the experiments, we evaluate our approach on ImageNet, and the comprehensive results validate its robustness against current representative attacks.
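The joint loss described in the abstract combines a pixel term, a texture term, and a task term. A minimal sketch of such a composition is given below; the function names, the weight parameters, the Gram-matrix texture proxy, and the cross-entropy task term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pixel_loss(clean, restored):
    # Mean squared error between the clean image and the cleaned (restored) image
    return np.mean((clean - restored) ** 2)

def texture_loss(clean_feat, restored_feat):
    # Gram-matrix distance over feature maps, a common proxy for texture similarity
    def gram(f):
        f = f.reshape(f.shape[0], -1)          # (channels, spatial)
        return f @ f.T / f.size
    return np.mean((gram(clean_feat) - gram(restored_feat)) ** 2)

def task_loss(logits, label):
    # Cross-entropy of the target classifier evaluated on the cleaned image
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[label])

def joint_loss(clean, restored, clean_feat, restored_feat, logits, label,
               w_pixel=1.0, w_texture=1.0, w_task=1.0):
    # Weighted sum of the three terms; the weights are hypothetical hyperparameters
    return (w_pixel * pixel_loss(clean, restored)
            + w_texture * texture_loss(clean_feat, restored_feat)
            + w_task * task_loss(logits, label))
```

In practice the feature maps would come from a fixed pretrained network and the gradients of this loss would be backpropagated into ResGN's generator parameters; this sketch only shows how the three terms combine into one scalar objective.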
Pages: 210-215
Number of pages: 6
Related Papers
50 records total
  • [21] Comparing Speed Reduction of Adversarial Defense Systems on Deep Neural Networks
    Bowman, Andrew
    Yang, Xin
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST), 2021,
  • [22] Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks
    Liu, Qi
    Liu, Tao
    Wen, Wujie
    CYBER SENSING 2018, 2018, 10630
  • [23] Detecting adversarial examples via prediction difference for deep neural networks
    Guo, Feng
    Zhao, Qingjie
    Li, Xuan
    Kuang, Xiaohui
    Zhang, Jianwei
    Han, Yahong
    Tan, Yu-an
    INFORMATION SCIENCES, 2019, 501 : 182 - 192
  • [24] Detecting Adversarial Examples in Deep Neural Networks using Normalizing Filters
    Gu, Shuangchi
    Yi, Ping
    Zhu, Ting
    Yao, Yao
    Wang, Wei
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019, : 164 - 173
  • [25] Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks
    Kherchouche, Anouar
    Fezza, Sid Ahmed
    Hamidouche, Wassim
    Deforges, Olivier
    2020 IEEE 22ND INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2020,
  • [26] Stability Analysis of Deep Neural Networks under Adversarial Attacks and Noise Perturbations
    Eslami, Parisa
    Song, Houbing
    20TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC 2024, 2024, : 150 - 155
  • [27] Cassandra: Detecting Trojaned Networks From Adversarial Perturbations
    Zhang, Xiaoyu
    Gupta, Rohit
    Mian, Ajmal
    Rahnavard, Nazanin
    Shah, Mubarak
    IEEE ACCESS, 2021, 9 (09): : 135856 - 135867
  • [28] EFFICIENT RANDOMIZED DEFENSE AGAINST ADVERSARIAL ATTACKS IN DEEP CONVOLUTIONAL NEURAL NETWORKS
    Sheikholeslami, Fatemeh
    Jain, Swayambhoo
    Giannakis, Georgios B.
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3277 - 3281
  • [29] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks
    Li, Xiaoting
    Chen, Lingwei
    Zhang, Jinquan
    Larus, James
    Wu, Dinghao
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [30] Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples
    Sun, Guangling
    Su, Yuying
    Qin, Chuan
    Xu, Wenbo
    Lu, Xiaofeng
    Ceglowski, Andrzej
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020