Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples

Cited by: 5
Authors
Sun, Guangling [1 ]
Su, Yuying [1 ]
Qin, Chuan [2 ]
Xu, Wenbo [1 ]
Lu, Xiaofeng [1 ]
Ceglowski, Andrzej [3 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Univ Shanghai Sci & Technol, Sch Opt Elect & Comp Engn, Shanghai 200093, Peoples R China
[3] Monash Univ, Dept Accounting, Melbourne, Vic 3145, Australia
Funding
Natural Science Foundation of Shanghai;
DOI
10.1155/2020/8319249
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Although Deep Neural Networks (DNNs) have achieved great success in various applications, investigations have increasingly shown that DNNs are highly vulnerable to adversarial examples. Here, we present a comprehensive defense framework to protect DNNs against adversarial examples. First, we propose statistical and minor alteration detectors to filter out adversarial examples contaminated by noticeable and unnoticeable perturbations, respectively. We then combine the detectors, a deep Residual Generative Network (ResGN), and an adversarially trained targeted network into a complete defense framework. In this framework, the ResGN is our previously proposed network for removing adversarial perturbations, and the targeted network is hardened through adversarial training. Specifically, once the detectors determine an input example to be adversarial, it is cleaned by the ResGN and then classified by the adversarially trained targeted network; otherwise, it is classified by this network directly. We empirically evaluate the proposed complete defense on the ImageNet dataset. The results confirm its robustness against current representative attack methods, including the fast gradient sign method, the randomized fast gradient sign method, the basic iterative method, universal adversarial perturbations, the DeepFool method, and the Carlini & Wagner method.
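A minimal sketch of this detect-then-denoise pipeline is given below in PyTorch. The module names (stat_detector, minor_detector, resgn, classifier) are hypothetical placeholders standing in for the paper's two detectors, the ResGN denoiser, and the adversarially trained targeted network; the sketch illustrates only the decision logic described in the abstract, not the authors' implementation.

    # Sketch of the complete-defense decision rule (all components hypothetical).
    import torch
    import torch.nn as nn

    class CompleteDefense(nn.Module):
        def __init__(self, stat_detector, minor_detector, resgn, classifier):
            super().__init__()
            self.stat_detector = stat_detector    # flags noticeable perturbations
            self.minor_detector = minor_detector  # flags unnoticeable perturbations
            self.resgn = resgn                    # denoiser that removes perturbations
            self.classifier = classifier          # adversarially trained targeted network

        @torch.no_grad()
        def forward(self, x):                     # x: (N, C, H, W) image batch
            # An input is treated as adversarial if either detector fires.
            is_adv = self.stat_detector(x) | self.minor_detector(x)  # (N,) bool
            # Adversarial inputs are cleaned by ResGN; benign ones pass through.
            cleaned = torch.where(is_adv.view(-1, 1, 1, 1), self.resgn(x), x)
            return self.classifier(cleaned)

Note that both branches end at the same adversarially trained classifier, so an input missed by the detectors still benefits from the network's own robustness, matching the fallback behavior described in the abstract.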
Pages: 17
Related Papers
(50 in total)
  • [21] Salient feature extractor for adversarial defense on deep neural networks
    Chen, Ruoxi
    Chen, Jinyin
    Zheng, Haibin
    Xuan, Qi
    Ming, Zhaoyan
    Jiang, Wenrong
    Cui, Chen
    [J]. INFORMATION SCIENCES, 2022, 600 : 118 - 143
  • [22] Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
    Mustafa, Aamir
    Khan, Salman
    Hayat, Munawar
    Goecke, Roland
    Shen, Jianbing
    Shao, Ling
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3384 - 3393
  • [23] DeepMTD: Moving Target Defense for Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    [J]. ACM TRANSACTIONS ON SENSOR NETWORKS, 2022, 18 (01)
  • [24] Moving Target Defense for Embedded Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    [J]. PROCEEDINGS OF THE 17TH CONFERENCE ON EMBEDDED NETWORKED SENSOR SYSTEMS (SENSYS '19), 2019, : 124 - 137
  • [25] Detecting Adversarial Examples on Deep Neural Networks With Mutual Information Neural Estimation
    Gao, Song
    Wang, Ruxin
    Wang, Xiaoxuan
    Yu, Shui
    Dong, Yunyun
    Yao, Shaowen
    Zhou, Wei
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06) : 5168 - 5181
  • [27] Hadamard's Defense Against Adversarial Examples
    Hoyos, Angello
    Ruiz, Ubaldo
    Chavez, Edgar
    [J]. IEEE ACCESS, 2021, 9 : 118324 - 118333
  • [28] Background Class Defense Against Adversarial Examples
    McCoyd, Michael
    Wagner, David
    [J]. 2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 96 - 102
  • [29] MoNet: Impressionism As A Defense Against Adversarial Examples
    Ge, Huangyi
    Chau, Sze Yiu
    Li, Ninghui
    [J]. 2020 SECOND IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2020), 2020, : 246 - 255
  • [30] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    [J]. SYMMETRY-BASEL, 2021, 13 (03):