Improving the Robustness of Deep Neural Networks via Stability Training

Cited by: 328
Authors
Zheng, Stephan [1, 2]
Song, Yang [1]
Leung, Thomas [1]
Goodfellow, Ian [1]
Affiliations
[1] Google, Mountain View, CA 94043 USA
[2] CALTECH, Pasadena, CA 91125 USA
Keywords
DOI
10.1109/CVPR.2016.485
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping. We validate our method by stabilizing the state-of-the-art Inception architecture [11] against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.
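The abstract describes the training objective only at a high level. Below is a minimal sketch of what a stability-training loss of this kind can look like, assuming a PyTorch-style image classifier; the function name stability_training_loss and the default values of alpha and sigma are illustrative assumptions, not taken from the paper's implementation.

# Minimal sketch of a stability-training objective as described in the abstract.
# Assumed PyTorch-style code; `model`, `alpha`, and `sigma` are illustrative.
import torch
import torch.nn.functional as F

def stability_training_loss(model, x, y, alpha=0.01, sigma=0.04):
    """L = L_task(x) + alpha * D(f(x), f(x')), where x' is a perturbed copy of x."""
    # Perturb the input with small Gaussian noise (a stand-in for the
    # distortions named in the abstract, e.g. compression, rescaling, cropping).
    x_perturbed = x + sigma * torch.randn_like(x)

    logits_clean = model(x)
    logits_perturbed = model(x_perturbed)

    # Task loss is computed on the clean input only.
    task_loss = F.cross_entropy(logits_clean, y)

    # Stability term: encourage similar output distributions for x and x'.
    # (KL divergence for classification; an L2 distance on feature embeddings
    # is the analogous choice for ranking and near-duplicate detection.)
    stability = F.kl_div(
        F.log_softmax(logits_perturbed, dim=1),
        F.softmax(logits_clean, dim=1),
        reduction="batchmean",
    )
    return task_loss + alpha * stability

In this sketch the supervised loss sees only the clean input, while the stability term ties the perturbed output back to it; a faithful reproduction of the paper would replace the Gaussian noise with the image-processing distortions the authors target.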
Pages: 4480-4488
Page count: 9
Related papers
50 items in total
  • [21] Improving Robustness of Deep Neural Networks for Aerial Navigation by Incorporating Input Uncertainty
    Arnez, Fabio
    Espinoza, Huascar
    Radermacher, Ansgar
    Terrier, Francois
    COMPUTER SAFETY, RELIABILITY, AND SECURITY (SAFECOMP 2021), 2021, 12853 : 219 - 225
  • [22] Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients
    Ros, Andrew Slavin
    Doshi-Velez, Finale
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 1660 - 1669
  • [23] ε-Weakened Robustness of Deep Neural Networks
    Huang, Pei
    Yang, Yuting
    Liu, Minghao
    Jia, Fuqi
    Ma, Feifei
    Zhang, Jian
    PROCEEDINGS OF THE 31ST ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2022, 2022, : 126 - 138
  • [24] Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks
    Liu, Yongshuai
    Chen, Jiyu
    Chen, Hao
    DECISION AND GAME THEORY FOR SECURITY, GAMESEC 2018, 2018, 11199 : 102 - 114
  • [25] Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks
    Vahdat, Arash
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [26] Improving the adversarial robustness of quantized neural networks via exploiting the feature diversity
    Chu, Tianshu
    Fang, Kun
    Yang, Jie
    Huang, Xiaolin
    PATTERN RECOGNITION LETTERS, 2023, 176 : 117 - 122
  • [27] A training strategy for improving the robustness of memristor-based binarized convolutional neural networks
    Huang, Lixing
    Yu, Hongqi
    Chen, Changlin
    Peng, Jie
    Diao, Jietao
    Nie, Hongshan
    Li, Zhiwei
    Liu, Haijun
    SEMICONDUCTOR SCIENCE AND TECHNOLOGY, 2022, 37 (01)
  • [28] TRAINING DEEP NEURAL NETWORKS VIA OPTIMIZATION OVER GRAPHS
    Zhang, Guoqiang
    Kleijn, W. Bastiaan
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 4119 - 4123
  • [29] Training Deep Neural Networks via Direct Loss Minimization
    Song, Yang
    Schwing, Alexander G.
    Zemel, Richard S.
    Urtasun, Raquel
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [30] Training Quantized Deep Neural Networks via Cooperative Coevolution
    Peng, Fu
    Liu, Shengcai
    Lu, Ning
    Tang, Ke
    ADVANCES IN SWARM INTELLIGENCE, ICSI 2022, PT II, 2022, : 81 - 93