HOW DO NOISE TAILS IMPACT ON DEEP RELU NETWORKS?

Cited by: 1
Authors
Fan, Jianqing [1]
Gu, Yihong [1]
Zhou, Wen-Xin [2]
Affiliations
[1] Princeton Univ, Dept Operat Res & Financial Engn, Princeton, NJ 08544 USA
[2] Univ Illinois, Dept Informat & Decis Sci, Chicago, IL USA
Source
ANNALS OF STATISTICS | 2024, Vol. 52, No. 4
Keywords
Robustness; truncation; heavy tails; optimal rates; approximability of ReLU networks; GEOMETRIZING RATES; CONVERGENCE-RATES; NEURAL-NETWORKS; REGRESSION; APPROXIMATION; ROBUST; BOUNDS
DOI
10.1214/24-AOS2428
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
This paper investigates the stability of deep ReLU neural networks for nonparametric regression under the assumption that the noise has only a finite pth moment. We unveil how the optimal rate of convergence depends on p, the degree of smoothness, and the intrinsic dimension in a class of nonparametric regression functions with hierarchical composition structure when both the adaptive Huber loss and deep ReLU neural networks are used. This optimal rate of convergence cannot be obtained by ordinary least squares, but can be achieved by the Huber loss with a properly chosen parameter that adapts to the sample size, smoothness, and moment parameters. A concentration inequality for the adaptive Huber ReLU neural network estimators with allowable optimization errors is also derived. To establish a matching lower bound within the class of neural network estimators using the Huber loss, we employ a strategy different from the traditional route: constructing a deep ReLU network estimator that has a better empirical loss than the true function; the difference between these two functions then furnishes a lower bound. This step is related to the Huberization bias, yet more critically to the approximability of deep ReLU networks. As a result, we also contribute some new results on the approximation theory of deep ReLU neural networks.
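The adaptive Huber estimator the abstract refers to can be summarized in a minimal sketch (the Huber loss below is the standard one; the exact calibration of the truncation level \tau_n derived in the paper is not reproduced here, only its qualitative dependence):

\ell_\tau(u) =
\begin{cases}
u^2/2, & |u| \le \tau, \\
\tau |u| - \tau^2/2, & |u| > \tau,
\end{cases}
\qquad
\widehat{f}_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}_{\mathrm{NN}}} \; \frac{1}{n} \sum_{i=1}^{n} \ell_{\tau_n}\bigl( Y_i - f(X_i) \bigr),

where \mathcal{F}_{\mathrm{NN}} is a class of deep ReLU networks and the robustification parameter \tau_n diverges with the sample size n at a rate depending on the moment index p and the smoothness. Taking \tau = \infty recovers least squares; the abstract's claim is that a finite, sample-size-adaptive \tau_n is what restores the optimal rate when the noise has only a finite pth moment.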
Pages: 1845-1871
Number of pages: 27
Related papers
50 records in total
  • [1] Convergence of deep ReLU networks
    Xu, Yuesheng
    Zhang, Haizhang
    NEUROCOMPUTING, 2024, 571
  • [2] Nonlinear Approximation and (Deep) ReLU Networks
    Daubechies, I.
    DeVore, R.
    Foucart, S.
    Hanin, B.
    Petrova, G.
    CONSTRUCTIVE APPROXIMATION, 2022, 55 (01) : 127 - 172
  • [3] Error bounds for approximations with deep ReLU networks
    Yarotsky, Dmitry
    NEURAL NETWORKS, 2017, 94 : 103 - 114
  • [4] Approximation in L^p(μ) with deep ReLU neural networks
    Voigtlaender, Felix
    Petersen, Philipp
    2019 13TH INTERNATIONAL CONFERENCE ON SAMPLING THEORY AND APPLICATIONS (SAMPTA), 2019
  • [5] Vanishing Curvature in Randomly Initialized Deep ReLU Networks
    Orvieto, Antonio
    Kohler, Jonas
    Pavllo, Dario
    Hofmann, Thomas
    Lucchi, Aurelien
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [6] Approximation of Nonlinear Functionals Using Deep ReLU Networks
    Song, Linhao
    Fan, Jun
    Chen, Di-Rong
    Zhou, Ding-Xuan
    JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS, 2023, 29 (04)
  • [7] A generative model for fBm with deep ReLU neural networks
    Allouche, Michaël
    Girard, Stéphane
    Gobet, Emmanuel
    JOURNAL OF COMPLEXITY, 2022, 73
  • [8] Unboundedness of Linear Regions of Deep ReLU Neural Networks
    Ponomarchuk, Anton
    Koutschan, Christoph
    Moser, Bernhard
    DATABASE AND EXPERT SYSTEMS APPLICATIONS, DEXA 2022 WORKSHOPS, 2022, 1633 : 3 - 10
  • [9] On a Fitting of a Heaviside Function by Deep ReLU Neural Networks
    Hagiwara, Katsuyuki
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT I, 2018, 11301 : 59 - 69