HOW DO NOISE TAILS IMPACT ON DEEP RELU NETWORKS?

Cited by: 1
Authors:
Fan, Jianqing [1]
Gu, Yihong [1]
Zhou, Wen-Xin [2]
Affiliations:
[1] Princeton Univ, Dept Operat Res & Financial Engn, Princeton, NJ 08544 USA
[2] Univ Illinois, Dept Informat & Decis Sci, Chicago, IL USA
Source:
ANNALS OF STATISTICS | 2024, Vol. 52, No. 04
Keywords:
Robustness; truncation; heavy tails; optimal rates; approximability of ReLU networks; GEOMETRIZING RATES; CONVERGENCE-RATES; NEURAL-NETWORKS; REGRESSION; APPROXIMATION; ROBUST; BOUNDS
DOI: 10.1214/24-AOS2428
CLC classification: O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject classification codes: 020208; 070103; 0714
Abstract:
This paper investigates the stability of deep ReLU neural networks for nonparametric regression under the assumption that the noise has only a finite pth moment. We unveil how the optimal rate of convergence depends on p, the degree of smoothness, and the intrinsic dimension in a class of nonparametric regression functions with hierarchical composition structure when both the adaptive Huber loss and deep ReLU neural networks are used. This optimal rate of convergence cannot be obtained by ordinary least squares, but can be achieved by the Huber loss with a properly chosen parameter that adapts to the sample size, smoothness, and moment parameters. A concentration inequality for the adaptive Huber ReLU neural network estimators with allowable optimization errors is also derived. To establish a matching lower bound within the class of neural network estimators using the Huber loss, we employ a strategy different from the traditional route: constructing a deep ReLU network estimator that has a better empirical loss than the true function, where the difference between these two functions furnishes a lower bound. This step is related to the Huberization bias, yet more critically to the approximability of deep ReLU networks. As a result, we also contribute some new results on the approximation theory of deep ReLU neural networks.
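The adaptive Huber loss referred to in the abstract replaces the squared loss with a loss that is quadratic for small residuals and linear beyond a truncation level tau that grows with the sample size. Below is a minimal Python sketch of this loss and of the empirical risk it induces; the constant c, the exponent 0.5, and the toy data are illustrative placeholders only, since the paper's calibration of tau depends on the moment index p and on the smoothness of the regression function.

import numpy as np

def huber_loss(residuals, tau):
    # Quadratic for |r| <= tau, linear beyond: robust to heavy-tailed noise.
    r = np.abs(residuals)
    return np.where(r <= tau, 0.5 * r**2, tau * r - 0.5 * tau**2)

def adaptive_tau(n, sigma=1.0, c=1.0, exponent=0.5):
    # Illustrative truncation level growing with the sample size n.
    # The paper chooses the growth rate as a function of p and the smoothness;
    # the exponent 0.5 here is a placeholder, not the paper's prescription.
    return c * sigma * n**exponent

# Toy usage: empirical adaptive-Huber risk of a candidate fit on data (x, y).
rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(size=n)
y = np.sin(2 * np.pi * x) + rng.standard_t(df=3, size=n)  # heavy-tailed noise: finite pth moment only for p < 3
f_hat = np.sin(2 * np.pi * x)                              # stand-in for the output of a deep ReLU network
empirical_risk = huber_loss(y - f_hat, adaptive_tau(n)).mean()
print(empirical_risk)

In this sketch, minimizing the empirical Huber risk over a class of deep ReLU networks (rather than over the fixed stand-in f_hat) would correspond to the estimator studied in the paper.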
Pages: 1845-1871 (27 pages)
Related papers (50 records in total)
• [31] Chen, Liang; Liu, Wenjun. On the uniform approximation estimation of deep ReLU networks via frequency decomposition. AIMS MATHEMATICS, 2022, 7(10): 19018-19025
• [32] Opschoor, Joost A. A.; Petersen, Philipp C.; Schwab, Christoph. Deep ReLU networks and high-order finite element methods. ANALYSIS AND APPLICATIONS, 2020, 18(05): 715-770
• [33] Zou, Difan; Cao, Yuan; Zhou, Dongruo; Gu, Quanquan. Gradient descent optimizes over-parameterized deep ReLU networks. MACHINE LEARNING, 2020, 109(03): 467-492
• [34] Montanelli, Hadrien; Du, Qiang. New error bounds for deep ReLU networks using sparse grids. SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2019, 1(01): 78-92
• [35] Yoshizato, K. How do tadpoles lose their tails during metamorphosis. ZOOLOGICAL SCIENCE, 1986, 3(02): 219-226
• [36] Chen, Minshuo; Jiang, Haoming; Liao, Wenjing; Zhao, Tuo. Efficient approximation of deep ReLU networks for functions on low dimensional manifolds. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
• [37] Schmidt-Hieber, Johannes. Nonparametric regression using deep neural networks with ReLU activation function. ANNALS OF STATISTICS, 2020, 48(04): 1875-1897
• [38] Yang, Yunfei; Li, Zhen; Wang, Yang. Approximation in shift-invariant spaces with deep ReLU neural networks. NEURAL NETWORKS, 2022, 153: 269-281
• [39] Price, Ilan; Tanner, Jared. Trajectory growth lower bounds for random sparse deep ReLU networks. 20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021: 1004-1009
• [40] Guehring, Ingo; Kutyniok, Gitta; Petersen, Philipp. Error bounds for approximations with deep ReLU neural networks in W^{s,p} norms. ANALYSIS AND APPLICATIONS, 2020, 18(05): 803-859