A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers

Cited by: 573
Authors
Negahban, Sahand N. [1 ]
Ravikumar, Pradeep [2 ]
Wainwright, Martin J. [3 ,4 ]
Yu, Bin [3 ,4 ]
Affiliations
[1] MIT, Dept EECS, Cambridge, MA 02139 USA
[2] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
[3] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94720 USA
[4] Univ Calif Berkeley, Dept EECS, Berkeley, CA 94720 USA
Funding
U.S. National Science Foundation
Keywords
High-dimensional statistics; M-estimator; Lasso; group Lasso; sparsity; l(1)-regularization; nuclear norm; MODEL SELECTION; VARIABLE SELECTION; SPARSITY RECOVERY; MATRIX COMPLETION; GROUP LASSO; ROBUST PCA; REGRESSION; CONSISTENCY; RATES; APPROXIMATION;
DOI
10.1214/12-STS400
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n -> 0, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both l(2)-error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates and which are optimal in many well-studied cases.
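The canonical instance of the regularized M-estimators described above is the Lasso: a least-squares loss paired with the decomposable l(1) regularizer. As a concrete illustration only (not code from the paper), the following minimal sketch solves the Lasso by proximal gradient descent (ISTA) in the p > n regime with a sparse ground truth; all names, dimensions, and parameter values are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: elementwise shrinkage toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_prox_grad(X, y, lam, n_iter=500):
    # Minimize (1/2n)||y - X beta||^2 + lam * ||beta||_1 via ISTA.
    n, p = X.shape
    beta = np.zeros(p)
    # Step size 1/L, with L the Lipschitz constant of the smooth part.
    L = np.linalg.norm(X, 2) ** 2 / n
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

# High-dimensional regime p > n with an s-sparse true parameter.
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0
y = X @ beta_star + 0.1 * rng.standard_normal(n)
beta_hat = lasso_prox_grad(X, y, lam=0.1)
```

Under restricted strong convexity of the design, the paper's main theorem predicts an l(2)-error of order lam * sqrt(s) for this estimator; the soft-thresholding step is what exploits the decomposability of the l(1) norm.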
Pages: 538-557
Page count: 20
Related Papers
50 records in total
  • [1] Precise High-dimensional Error Analysis of Regularized M-Estimators
    Thrampoulidis, Christos
    Abbasi, Ehsan
    Hassibi, Babak
    [J]. 2015 53RD ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2015, : 410 - 417
  • [2] High-dimensional nonconvex LASSO-type M-estimators
    Beyhum, Jad
    Portier, Francois
    [J]. JOURNAL OF MULTIVARIATE ANALYSIS, 2024, 202
  • [3] Generalized M-estimators for high-dimensional Tobit I models
    Bradic, Jelena
    Guo, Jiaqi
    [J]. ELECTRONIC JOURNAL OF STATISTICS, 2019, 13 (01): : 582 - 645
  • [4] STATISTICAL CONSISTENCY AND ASYMPTOTIC NORMALITY FOR HIGH-DIMENSIONAL ROBUST M-ESTIMATORS
    Loh, Po-Ling
    [J]. ANNALS OF STATISTICS, 2017, 45 (02): : 866 - 896
  • [5] High-dimensional robust approximated M-estimators for mean regression with asymmetric data
    Luo, Bin
    Gao, Xiaoli
    [J]. JOURNAL OF MULTIVARIATE ANALYSIS, 2022, 192
  • [6] Large Dimensional Analysis of Robust M-Estimators of Covariance With Outliers
    Morales-Jimenez, David
    Couillet, Romain
    McKay, Matthew R.
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2015, 63 (21) : 5784 - 5797
  • [7] Precise Error Analysis of Regularized M-Estimators in High Dimensions
    Thrampoulidis, Christos
    Abbasi, Ehsan
    Hassibi, Babak
    [J]. IEEE TRANSACTIONS ON INFORMATION THEORY, 2018, 64 (08) : 5592 - 5628
  • [8] VARIATIONAL ANALYSIS OF CONSTRAINED M-ESTIMATORS
    Royset, Johannes O.
    Wets, Roger J-B
    [J]. ANNALS OF STATISTICS, 2020, 48 (05): : 2759 - 2790
  • [9] A review on robust M-estimators for regression analysis
    de Menezes, D. Q. F.
    Prata, D. M.
    Secchi, A. R.
    Pinto, J. C.
    [J]. COMPUTERS & CHEMICAL ENGINEERING, 2021, 147 (147)
  • [10] A Massive Data Framework for M-Estimators with Cubic-Rate
    Shi, Chengchun
    Lu, Wenbin
    Song, Rui
    [J]. JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2018, 113 (524) : 1698 - 1709