An Empirical Study on the Effect of Training Data Perturbations on Neural Network Robustness

Times Cited: 0
Authors
Wang, Jie [1 ]
Wu, Zili [2 ]
Lu, Minyan [1 ]
Ai, Jun [1 ]
Affiliations
[1] Beihang Univ, Sch Reliabil & Syst Engn, Key Lab Reliabil & Environm Engn Technol, Beijing 100191, Peoples R China
[2] CRRC Zhuzhou Inst Co Ltd, Zhuzhou 412001, Peoples R China
Keywords
robustness; perturbation; adversarial training; convolutional neural network; empirical study;
DOI
10.3390/s24154874
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
The vulnerability of modern neural networks to random noise and deliberate attacks has raised concerns about their robustness, particularly as they are increasingly deployed in safety- and security-critical applications. Although recent research efforts have been made to enhance robustness by retraining with adversarial examples or applying data augmentation techniques, a comprehensive investigation into the effects of training data perturbations on model robustness is still lacking. This paper presents the first extensive empirical study of how data perturbations applied during model retraining influence robustness. The experimental analysis covers both random and adversarial robustness, following established practices in robustness analysis. Perturbations to different aspects of the dataset are explored, including the inputs, the labels, and the sampling distribution. Single-factor and multi-factor experiments assess individual perturbations and their combinations. The findings offer insights into constructing high-quality training datasets that optimize robustness and recommend the degree of training-set perturbation that balances robustness and correctness. They also contribute to the understanding of model robustness in deep learning and provide practical guidance for improving model performance through perturbed retraining, promoting the development of more reliable and trustworthy deep learning systems for safety-critical applications.
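To make the three perturbation aspects named in the abstract concrete, the sketch below shows one plausible way to perturb a training set before retraining: Gaussian noise on the inputs, random label flipping, and class-biased resampling of the distribution. It is a minimal illustration under assumed conventions (NumPy arrays, inputs scaled to [0, 1], integer class labels); the function names, noise level, flip rate, and class weights are hypothetical placeholders and are not taken from the paper's experimental design.

```python
# Hypothetical sketch of the three training-set perturbation types mentioned in
# the abstract: input noise, label noise, and sampling-distribution shift.
# All parameter values are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

def perturb_inputs(x, noise_std=0.05):
    """Add zero-mean Gaussian noise to inputs assumed to lie in [0, 1]."""
    noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    return np.clip(noisy, 0.0, 1.0)

def perturb_labels(y, num_classes, flip_rate=0.05):
    """Randomly reassign a fraction of integer class labels."""
    y = y.copy()
    flip_mask = rng.random(len(y)) < flip_rate
    y[flip_mask] = rng.integers(0, num_classes, size=flip_mask.sum())
    return y

def perturb_sampling(x, y, class_weights):
    """Resample the training set so class proportions follow class_weights."""
    probs = np.asarray(class_weights, dtype=float)[y]
    probs = probs / probs.sum()
    idx = rng.choice(len(y), size=len(y), replace=True, p=probs)
    return x[idx], y[idx]

if __name__ == "__main__":
    # Toy data standing in for an image-classification training set.
    x_train = rng.random((1000, 28, 28, 1))
    y_train = rng.integers(0, 10, size=1000)

    x_p = perturb_inputs(x_train, noise_std=0.1)
    y_p = perturb_labels(y_train, num_classes=10, flip_rate=0.1)
    x_s, y_s = perturb_sampling(x_p, y_p, class_weights=np.linspace(1.0, 2.0, 10))
```

The perturbed arrays (x_s, y_s) would then feed an ordinary retraining loop, after which random and adversarial robustness of the retrained model could be measured and compared against the unperturbed baseline.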
Pages: 26