Improving the Robustness of Deep Neural Networks via Stability Training

Cited by: 328
Authors
Zheng, Stephan [1 ,2 ]
Song, Yang [1 ]
Leung, Thomas [1 ]
Goodfellow, Ian [1 ]
Affiliations
[1] Google, Mountain View, CA 94043 USA
[2] CALTECH, Pasadena, CA 91125 USA
Keywords
DOI
10.1109/CVPR.2016.485
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping. We validate our method by stabilizing the state-of-the-art Inception architecture [11] against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.
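As context for the abstract above, the following is a minimal sketch of a stability training objective in the spirit the abstract describes: a task loss on the clean input plus a penalty on the divergence between the network's predictions for the clean input and a slightly perturbed copy of it. The PyTorch-style helper stability_training_loss, the Gaussian noise used as a stand-in for image distortions, and the hyperparameters alpha and sigma are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F

def stability_training_loss(model, x, y, alpha=0.01, sigma=0.04):
    # Perturbed copy of the batch: small additive Gaussian pixel noise is
    # used here as a stand-in for distortions such as JPEG compression,
    # rescaling, or cropping artifacts (illustrative assumption).
    x_perturbed = x + sigma * torch.randn_like(x)

    logits_clean = model(x)
    logits_perturbed = model(x_perturbed)

    # Standard classification objective, computed on the clean input only.
    task_loss = F.cross_entropy(logits_clean, y)

    # Stability term: KL divergence from the clean output distribution to
    # the perturbed one; an L2 distance between feature embeddings would
    # play the analogous role for ranking or near-duplicate detection.
    p_clean = F.softmax(logits_clean, dim=1)
    log_p_perturbed = F.log_softmax(logits_perturbed, dim=1)
    stability_loss = F.kl_div(log_p_perturbed, p_clean, reduction='batchmean')

    return task_loss + alpha * stability_loss

During training, this combined loss would replace the plain cross-entropy term, so the network is rewarded for keeping its outputs consistent under small input perturbations while still fitting the labels.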
Pages: 4480 - 4488
Page count: 9
Related Papers
50 records in total
  • [41] ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES
    Teng, Da
    Song, Xiao
    Gong, Guanghong
    Han, Liang
    INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02): : 123 - 133
  • [42] Adversarial robustness improvement for deep neural networks
    Eleftheriadis, Charis
    Symeonidis, Andreas
    Katsaros, Panagiotis
    MACHINE VISION AND APPLICATIONS, 2024, 35 (03)
  • [43] Impact of Colour on Robustness of Deep Neural Networks
    De, Kanjar
    Pedersen, Marius
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 21 - 30
  • [44] Improve Robustness of Deep Neural Networks by Coding
    Huang, Kunping
    Raviv, Netanel
    Jain, Siddharth
    Upadhyaya, Pulakesh
    Bruck, Jehoshua
    Siegel, Paul H.
    Jiang, Anxiao
    2020 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), 2020,
  • [45] Analyzing the Noise Robustness of Deep Neural Networks
    Cao, Kelei
    Liu, Mengchen
    Su, Hang
    Wu, Jing
    Zhu, Jun
    Liu, Shixia
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2021, 27 (07) : 3289 - 3304
  • [46] Robustness of deep neural networks in adversarial examples
    Song, Xiao
    INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02)
  • [47] Improving Robustness Verification of Neural Networks with General Activation Functions via Branching and Optimization
    Luo, Zhengwu
    Wang, Lina
    Wang, Run
    Yang, Kang
    Ye, Aoshuang
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [48] Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference
    Fu, Yonggan
    Yu, Qixuan
    Li, Meng
    Chandra, Vikas
    Lin, Yingyan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [49] Towards robust neural networks via a global and monotonically decreasing robustness training strategy
    Liang, Zhen
    Wu, Taoran
    Liu, Wanwei
    Xue, Bai
    Yang, Wenjing
    Wang, Ji
    Pang, Zhengbin
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2023, 24 (10) : 1375 - 1389
  • [50] Improving the Robustness of Neural Networks Using K-Support Norm Based Adversarial Training
    Akhtar, Sheikh Waqas
    Rehman, Saad
    Akhtar, Mahmood
    Khan, Muazzam A.
    Riaz, Farhan
    Chaudry, Qaiser
    Young, Rupert
    IEEE ACCESS, 2016, 4 : 9501 - 9511