Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples

Cited by: 6
|
Authors
Sun, Guangling [1 ]
Su, Yuying [1 ]
Qin, Chuan [2 ]
Xu, Wenbo [1 ]
Lu, Xiaofeng [1 ]
Ceglowski, Andrzej [3 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Univ Shanghai Sci & Technol, Sch Opt Elect & Comp Engn, Shanghai 200093, Peoples R China
[3] Monash Univ, Dept Accounting, Melbourne, Vic 3145, Australia
Funding
Natural Science Foundation of Shanghai;
DOI
10.1155/2020/8319249
CLC Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
Although Deep Neural Networks (DNNs) have achieved great success in a wide range of applications, studies have increasingly shown that they are highly vulnerable to adversarial examples. Here, we present a comprehensive defense framework to protect DNNs against such inputs. First, we propose statistical and minor-alteration detectors to filter out adversarial examples contaminated by noticeable and unnoticeable perturbations, respectively. We then ensemble the detectors, a deep Residual Generative Network (ResGN), and an adversarially trained targeted network into a complete defense framework. In this framework, the ResGN is our previously proposed network for removing adversarial perturbations, and the targeted network is hardened through adversarial training. Specifically, once the detectors determine an input to be adversarial, it is cleaned by the ResGN and then classified by the adversarially trained targeted network; otherwise, it is classified by that network directly. We empirically evaluate the proposed complete defense on the ImageNet dataset. The results confirm its robustness against current representative attack methods, including the fast gradient sign method (FGSM), randomized FGSM, the basic iterative method, universal adversarial perturbations, DeepFool, and the Carlini & Wagner attack.
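The detect-then-purify-then-classify routing described in the abstract can be summarized as a short sketch. This is a hedged illustration, not the authors' released implementation: every name here (defend_and_classify, resgn, robust_net, the stand-in detectors) is a hypothetical placeholder, and the toy components at the bottom exist only to make the example runnable.

```python
# Minimal sketch of the cascaded defense pipeline from the abstract.
# All component names are hypothetical stand-ins, not the authors' API.
import torch

def defend_and_classify(x, detectors, resgn, robust_net):
    """Detect -> purify (if flagged) -> classify with the robust network."""
    # The input is treated as adversarial if ANY detector fires:
    # the statistical detector targets noticeable perturbations,
    # the minor-alteration detector targets unnoticeable ones.
    if any(detector(x) for detector in detectors):
        x = resgn(x)  # remove the estimated adversarial perturbation
    # Clean or purified, the example is classified by the adversarially
    # trained targeted network, which tolerates residual perturbations.
    return robust_net(x).argmax(dim=-1)

# Toy usage with stand-in components (for illustration only).
if __name__ == "__main__":
    x = torch.randn(1, 3, 224, 224)               # one ImageNet-sized input
    detectors = [lambda t: t.abs().max() > 3.0,   # stand-in statistical test
                 lambda t: False]                 # stand-in minor-alteration test
    resgn = lambda t: t.clamp(-3.0, 3.0)          # stand-in purifier
    robust_net = lambda t: torch.randn(1, 1000)   # stand-in classifier logits
    print(defend_and_classify(x, detectors, resgn, robust_net))
```

Note the design choice implied by the abstract: even inputs judged clean are classified by the adversarially trained network, so a detector false negative is still met by a robust classifier rather than a standard one.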
Pages: 17
Related Papers
50 records in total
  • [21] Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
    Xu, Weilin
    Evans, David
    Qi, Yanjun
    25TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2018), 2018,
  • [22] Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks
    Chen, Zitao
    Dash, Pritam
    Pattabiraman, Karthik
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 689 - 703
  • [23] Salient feature extractor for adversarial defense on deep neural networks
    Chen, Ruoxi
    Chen, Jinyin
    Zheng, Haibin
    Xuan, Qi
    Ming, Zhaoyan
    Jiang, Wenrong
    Cui, Chen
    INFORMATION SCIENCES, 2022, 600 : 118 - 143
  • [24] Moving Target Defense for Embedded Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    PROCEEDINGS OF THE 17TH CONFERENCE ON EMBEDDED NETWORKED SENSOR SYSTEMS (SENSYS '19), 2019, : 124 - 137
  • [25] DeepMTD: Moving Target Defense for Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2022, 18 (01)
  • [26] Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
    Mustafa, Aamir
    Khan, Salman
    Hayat, Munawar
    Goecke, Roland
    Shen, Jianbing
    Shao, Ling
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3384 - 3393
  • [28] Detecting Adversarial Examples on Deep Neural Networks With Mutual Information Neural Estimation
    Gao, Song
    Wang, Ruxin
    Wang, Xiaoxuan
    Yu, Shui
    Dong, Yunyun
    Yao, Shaowen
    Zhou, Wei
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06) : 5168 - 5181
  • [29] Hadamard's Defense Against Adversarial Examples
    Hoyos, Angello
    Ruiz, Ubaldo
    Chavez, Edgar
    IEEE ACCESS, 2021, 9 : 118324 - 118333
  • [30] Background Class Defense Against Adversarial Examples
    McCoyd, Michael
    Wagner, David
    2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 96 - 102