Towards Robustness of Deep Neural Networks via Regularization

Cited by: 7
Authors
Li, Yao [1 ]
Min, Martin Renqiang [2 ]
Lee, Thomas [3 ]
Yu, Wenchao [2 ]
Kruus, Erik [2 ]
Wang, Wei [4 ]
Hsieh, Cho-Jui [4 ]
Affiliations
[1] University of North Carolina at Chapel Hill, Chapel Hill, NC 27515, USA
[2] NEC Laboratories America, Princeton, NJ, USA
[3] University of California, Davis, Davis, CA 95616, USA
[4] University of California, Los Angeles, Los Angeles, CA, USA
DOI: 10.1109/ICCV48922.2021.00740
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Recent studies have demonstrated the vulnerability of deep neural networks to adversarial examples. Inspired by the observations that adversarial examples often lie outside the natural image data manifold and that the intrinsic dimension of image data is much smaller than its pixel-space dimension, we propose to embed high-dimensional input images into a low-dimensional space and to apply regularization in the embedding space to push adversarial examples back onto the manifold. The proposed framework, called the Embedding Regularized Classifier (ER-Classifier), improves the adversarial robustness of the classifier through embedding regularization. Besides improving classification accuracy on adversarial examples, the framework can also be combined with detection methods to detect adversarial examples. Experimental results on several benchmark datasets show that the proposed framework achieves good performance against strong adversarial attack methods.
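For illustration only, the following is a minimal PyTorch-style sketch of the idea described in the abstract, not the authors' released code: an encoder maps high-dimensional images to a low-dimensional embedding, a classifier predicts from that embedding, and a regularization term penalizes embeddings that drift away from a simple reference distribution. The layer sizes, the moment-matching regularizer, and the weight lambda_reg are assumptions standing in for the paper's actual embedding regularization.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ERClassifierSketch(nn.Module):
    def __init__(self, in_dim=784, embed_dim=8, num_classes=10):
        super().__init__()
        # Encoder: high-dimensional input -> low-dimensional embedding.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )
        # Classifier operating on the embedding.
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        z = self.encoder(x.flatten(1))   # low-dimensional embedding
        return self.classifier(z), z

def embedding_regularizer(z):
    # Crude moment-matching penalty pulling the batch of embeddings toward a
    # standard Gaussian; an assumed stand-in for the paper's regularization
    # on the embedding space.
    mean_penalty = z.mean(dim=0).pow(2).sum()
    var_penalty = (z.var(dim=0) - 1.0).pow(2).sum()
    return mean_penalty + var_penalty

def training_step(model, x, y, lambda_reg=0.1):
    logits, z = model(x)
    return F.cross_entropy(logits, y) + lambda_reg * embedding_regularizer(z)

# Toy usage on random data, only to show the interface.
model = ERClassifierSketch()
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
loss = training_step(model, x, y)
loss.backward()

The regularizer here only matches first and second moments; the design choice it gestures at is that constraining the embedding distribution gives adversarially perturbed inputs less room to leave the region of embedding space on which the classifier was trained.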
Pages: 7476-7485
Number of pages: 10
Related Papers (50 records in total)
  • [1] Towards Stochasticity of Regularization in Deep Neural Networks. Sandjakoska, Ljubinka; Bogdanova, Ana Madevska. 2018 14th Symposium on Neural Networks and Applications (NEUREL), 2018.
  • [2] Towards Proving the Adversarial Robustness of Deep Neural Networks. Katz, Guy; Barrett, Clark; Dill, David L.; Julian, Kyle; Kochenderfer, Mykel J. Electronic Proceedings in Theoretical Computer Science, 2017, (257): 19-26.
  • [3] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations. Amini, Sajjad; Ghaemmaghami, Shahrokh. IEEE Transactions on Multimedia, 2020, 22(7): 1889-1903.
  • [4] Deep Neural Networks Pruning via the Structured Perspective Regularization. Cacciola, Matteo; Frangioni, Antonio; Li, Xinlin; Lodi, Andrea. SIAM Journal on Mathematics of Data Science, 2023, 5(4): 1051-1077.
  • [5] GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization. Lee, Sungyoon; Kim, Hoki; Lee, Jaewook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(2): 2645-2651.
  • [6] Improving the Robustness of Deep Neural Networks via Stability Training. Zheng, Stephan; Song, Yang; Leung, Thomas; Goodfellow, Ian. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 4480-4488.
  • [7] A regularization perspective based theoretical analysis for adversarial robustness of deep spiking neural networks. Zhang, Hui; Cheng, Jian; Zhang, Jun; Liu, Hongyi; Wei, Zhihui. Neural Networks, 2023, 165: 164-174.
  • [8] CSTAR: Towards Compact and Structured Deep Neural Networks with Adversarial Robustness. Phan, Huy; Yin, Miao; Sui, Yang; Yuan, Bo; Zonouz, Saman. Thirty-Seventh AAAI Conference on Artificial Intelligence, Vol. 37, No. 2, 2023: 2065-2073.
  • [9] Learning regularization parameters of inverse problems via deep neural networks. Afkham, Babak Maboudi; Chung, Julianne; Chung, Matthias. Inverse Problems, 2021, 37(10).
  • [10] Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization. Li, Shaofeng; Xue, Minhui; Zhao, Benjamin; Zhu, Haojin; Zhang, Xinpeng. IEEE Transactions on Dependable and Secure Computing, 2021, 18(5): 2088-2105.