CONVERGENCE RATES OF LEAST SQUARES REGRESSION ESTIMATORS WITH HEAVY-TAILED ERRORS

Cited: 20
Authors
Han, Qiyang [1 ]
Wellner, Jon A. [1 ]
Affiliations
[1] Univ Washington, Dept Stat, Box 354322, Seattle, WA 98195 USA
Source
ANNALS OF STATISTICS | 2019 / Vol. 47 / No. 4
Keywords
Multiplier empirical process; multiplier inequality; nonparametric regression; least squares estimation; sparse linear regression; heavy-tailed errors; CENTRAL-LIMIT-THEOREM; MINIMAX RATES; RISK BOUNDS; MOMENT; INEQUALITIES; EIGENVALUE; SELECTION; TESTS
DOI
10.1214/18-AOS1748
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
We study the performance of the least squares estimator (LSE) in a general nonparametric regression model when the errors are independent of the covariates but may have only a $p$th moment ($p \geq 1$). In such a heavy-tailed regression setting, we show that if the model satisfies a standard "entropy condition" with exponent $\alpha \in (0, 2)$, then the $L_2$ loss of the LSE converges at the rate $\mathcal{O}_P\big(n^{-1/(2+\alpha)} \vee n^{-1/2+1/(2p)}\big)$. This rate cannot be improved under the entropy condition alone, and it quantifies both positive and negative aspects of the LSE in a heavy-tailed regression setting. On the positive side, as long as the errors have $p \geq 1 + 2/\alpha$ moments, the $L_2$ loss of the LSE converges at the same rate as if the errors were Gaussian. On the negative side, if $p < 1 + 2/\alpha$, there are (many) hard models at every entropy level $\alpha$ for which the $L_2$ loss of the LSE converges at a strictly slower rate than that of other robust estimators. The validity of the above rate relies crucially on the independence of the covariates and the errors; in fact, the $L_2$ loss of the LSE can converge arbitrarily slowly when this independence fails. The key technical ingredient is a new multiplier inequality that gives sharp bounds for the "multiplier empirical process" associated with the LSE. We further give an application to the sparse linear regression model with heavy-tailed covariates and errors to demonstrate the scope of this new inequality.
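
To see where the moment threshold comes from: the heavy-tail term $n^{-1/2+1/(2p)}$ is dominated by the benchmark term $n^{-1/(2+\alpha)}$ exactly when $-1/2 + 1/(2p) \leq -1/(2+\alpha)$, i.e. when $p \geq (2+\alpha)/\alpha = 1 + 2/\alpha$. A minimal Python sketch of this arithmetic follows (the function names are illustrative, not from the paper or any library):

# Minimal sketch (not from the paper): evaluates the two exponents in the
# LSE rate n^{-1/(2+alpha)} v n^{-1/2+1/(2p)} and reports which regime
# applies relative to the moment threshold p* = 1 + 2/alpha.

def benchmark_exponent(alpha: float) -> float:
    """Exponent of the Gaussian-benchmark rate n^{-1/(2+alpha)}."""
    return -1.0 / (2.0 + alpha)

def heavy_tail_exponent(p: float) -> float:
    """Exponent of the heavy-tail term n^{-1/2+1/(2p)} for errors with p moments."""
    return -0.5 + 1.0 / (2.0 * p)

def lse_rate_exponent(alpha: float, p: float) -> float:
    """Overall exponent: the slower (larger) of the two competing terms."""
    return max(benchmark_exponent(alpha), heavy_tail_exponent(p))

if __name__ == "__main__":
    alpha = 1.0                    # entropy exponent in (0, 2)
    p_star = 1.0 + 2.0 / alpha     # moment threshold from the abstract
    for p in (1.5, p_star, 6.0):
        regime = "Gaussian benchmark" if p >= p_star else "heavy-tail penalty"
        print(f"alpha={alpha}, p={p}: rate n^({lse_rate_exponent(alpha, p):+.3f}) [{regime}]")

For $\alpha = 1$ the threshold is $p^* = 3$: the script prints exponent $-1/6$ for $p = 1.5$ (the heavy-tail penalty regime) and $-1/3$ for $p \geq 3$ (the Gaussian benchmark $n^{-1/3}$).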
Pages: 2286-2319
Page count: 34
Related Papers
50 records in total
  • [1] ON LEAST SQUARES ESTIMATION UNDER HETEROSCEDASTIC AND HEAVY-TAILED ERRORS
    Kuchibhotla, Arun K.
    Patra, Rohit K.
    ANNALS OF STATISTICS, 2022, 50(1): 277-302
  • [2] Convergence Rates for Penalized Least Squares Estimators in PDE Constrained Regression Problems
    Nickl, Richard
    van de Geer, Sara
    Wang, Sven
    SIAM-ASA JOURNAL ON UNCERTAINTY QUANTIFICATION, 2020, 8(1): 374-413
  • [3] Least-squares estimation of GARCH(1,1) models with heavy-tailed errors
    Preminger, Arie
    Storti, Giuseppe
    ECONOMETRICS JOURNAL, 2017, 20(2): 221-258
  • [4] NEAREST-NEIGHBOR REGRESSION WITH HEAVY-TAILED ERRORS
    Mukerjee, H.
    ANNALS OF STATISTICS, 1993, 21(2): 681-693
  • [5] Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares
    Raj, Anant
    Barsbey, Melih
    Gurbuzbalaban, Mert
    Zhu, Lingjiong
    Simsekli, Umut
    INTERNATIONAL CONFERENCE ON ALGORITHMIC LEARNING THEORY, 2023, 201: 1292-1342
  • [6] Nonparametric quantile regression with heavy-tailed and strongly dependent errors
    Honda, Toshio
    ANNALS OF THE INSTITUTE OF STATISTICAL MATHEMATICS, 2013, 65(1): 23-47
  • [7] Diagnostics for a class of survival regression models with heavy-tailed errors
    Li, Ai-Ping
    Xie, Feng-Chang
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2012, 56(12): 4204-4214
  • [8] CONSISTENCIES AND RATES OF CONVERGENCE OF JUMP-PENALIZED LEAST SQUARES ESTIMATORS
    Boysen, Leif
    Kempe, Angela
    Liebscher, Volkmar
    Munk, Axel
    Wittich, Olaf
    ANNALS OF STATISTICS, 2009, 37(1): 157-183
  • [9] Convergence Rates of Multivariate Regression Estimators with Errors-In-Variables
    Guo, Huijun
    Liu, Youming
    NUMERICAL FUNCTIONAL ANALYSIS AND OPTIMIZATION, 2017, 38(12): 1564-1588