Deep neural rejection against adversarial examples

Cited by: 34
Authors
Sotgiu, Angelo [1 ]
Demontis, Ambra [1 ]
Melis, Marco [1 ]
Biggio, Battista [1 ,2 ]
Fumera, Giorgio [1 ]
Feng, Xiaoyi [3 ]
Roli, Fabio [1 ,2 ]
Affiliations
[1] Univ Cagliari, DIEE, Piazza Armi, I-09123 Cagliari, Italy
[2] Pluribus One, Cagliari, Italy
[3] Northwestern Polytech Univ, Xian, Peoples R China
Funding
European Union Horizon 2020;
Keywords
Adversarial machine learning; Deep neural networks; Adversarial examples;
DOI
10.1186/s13635-020-00105-y
CLC number
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
Despite the impressive performance reported by deep neural networks across different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rejection mechanism that detects adversarial examples by rejecting samples exhibiting anomalous feature representations at different network layers. Unlike competing approaches, our method does not require generating adversarial examples at training time, and it is less computationally demanding. To properly evaluate it, we define an adaptive white-box attack that is aware of the defense mechanism and aims to bypass it. Under this worst-case setting, we empirically show that our approach outperforms previously proposed methods that detect adversarial examples by analyzing only the feature representation of the output network layer.
Pages: 10
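For illustration, the layer-wise rejection mechanism summarized in the abstract could be sketched roughly as follows. This is a minimal sketch, not the authors' released code: the ResNet-18 backbone, the choice of monitored layers, the RBF-SVM classifiers, and the rejection threshold are all assumptions made here for concreteness.

```python
# Minimal sketch of a layer-wise rejection defense in the spirit of the
# abstract above (not the authors' implementation). One RBF-SVM is fit on
# the feature representation of each monitored layer; a second SVM combines
# their class scores, and inputs whose top combined score falls below a
# threshold are rejected as anomalous. Backbone, layers, and threshold are
# illustrative choices.
import numpy as np
import torch
from sklearn.svm import SVC
from torchvision import models

REJECT = -1  # label assigned to rejected (likely adversarial) inputs

net = models.resnet18(weights="IMAGENET1K_V1").eval()
monitored = {"layer3": net.layer3, "layer4": net.layer4}

feats = {}  # filled by the forward hooks below on every forward pass
for name, module in monitored.items():
    module.register_forward_hook(
        lambda m, inp, out, name=name: feats.update(
            {name: torch.flatten(out, 1).detach().cpu().numpy()}))

def layer_features(x):
    """Run the network once and return the monitored representations."""
    with torch.no_grad():
        net(x)
    return dict(feats)

def fit_dnr(x_train, y_train):
    """Fit per-layer SVMs and the combining SVM on clean training data."""
    f = layer_features(x_train)
    layer_svms = {k: SVC(kernel="rbf", probability=True).fit(v, y_train)
                  for k, v in f.items()}
    scores = np.hstack([s.predict_proba(f[k]) for k, s in layer_svms.items()])
    combiner = SVC(kernel="rbf", probability=True).fit(scores, y_train)
    return layer_svms, combiner

def predict_with_reject(x, layer_svms, combiner, threshold=0.5):
    """Predict a class, or REJECT when the combined confidence is low."""
    f = layer_features(x)
    scores = np.hstack([s.predict_proba(f[k]) for k, s in layer_svms.items()])
    probs = combiner.predict_proba(scores)
    preds = combiner.classes_[probs.argmax(axis=1)].copy()
    preds[probs.max(axis=1) < threshold] = REJECT
    return preds
```

Note that the classifiers here are fit only on clean data, which reflects the abstract's claim that the defense needs no adversarial examples at training time; the adaptive white-box evaluation mentioned there would instead attack the network and the rejection scores jointly.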
Related papers
50 items in total
  • [1] Deep neural rejection against adversarial examples
    Angelo Sotgiu
    Ambra Demontis
    Marco Melis
    Battista Biggio
    Giorgio Fumera
    Xiaoyi Feng
    Fabio Roli
    [J]. EURASIP Journal on Information Security, 2020
  • [2] Adversarial Examples Against Deep Neural Network based Steganalysis
    Zhang, Yiwei
    Zhang, Weiming
    Chen, Kejiang
    Liu, Jiayang
    Liu, Yujia
    Yu, Nenghai
    [J]. PROCEEDINGS OF THE 6TH ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY (IH&MMSEC'18), 2018, : 67 - 72
  • [3] Neuron Selecting: Defending Against Adversarial Examples in Deep Neural Networks
    Zhang, Ming
    Li, Hu
    Kuang, Xiaohui
    Pang, Ling
    Wu, Zhendong
    [J]. INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2019), 2020, 11999 : 613 - 629
  • [4] ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples
    Choi, Seok-Hwan
    Shin, Jin-Myeong
    Liu, Peng
    Choi, Yoon-Ho
    [J]. IEEE ACCESS, 2022, 10 : 33602 - 33615
  • [5] Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples
    Sun, Guangling
    Su, Yuying
    Qin, Chuan
    Xu, Wenbo
    Lu, Xiaofeng
    Ceglowski, Andrzej
    [J]. MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [6] Robustness of Deep Neural Networks in Adversarial Examples
    Teng, Da
    Song, Xiao m
    Gong, Guanghong
    Han, Liang
    [J]. INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02): 123 - 133
  • [7] Compound adversarial examples in deep neural networks
    Li, Yanchun
    Li, Zhetao
    Zeng, Li
    Long, Saiqin
    Huang, Feiran
    Ren, Kui
    [J]. INFORMATION SCIENCES, 2022, 613 : 50 - 68
  • [8] Assessing Threat of Adversarial Examples on Deep Neural Networks
    Graese, Abigail
    Rozsa, Andras
    Boult, Terrance E.
    [J]. 2016 15TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2016), 2016, : 69 - 74
  • [9] Targeted Adversarial Examples Against RF Deep Classifiers
    Kokalj-Filipovic, Silvija
    Miller, Rob
    Morman, Joshua
    [J]. PROCEEDINGS OF THE 2019 ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING (WISEML '19), 2019, : 6 - 11