Error bounds for the convex loss Lasso in linear models

Cited by: 1
Authors
Hannay, Mark [1 ]
Deleamont, Pierre-Yves [2 ,3 ]
Affiliations
[1] Nanyang Technol Univ, Div Math Sci, Sch Phys & Math Sci, Singapore 637371, Singapore
[2] Univ Geneva, Res Ctr Stat, Blv Pont Arve 40, CH-1211 Geneva, Switzerland
[3] Univ Geneva, Geneva Sch Econ & Management, Blv Pont Arve 40, CH-1211 Geneva, Switzerland
Source
ELECTRONIC JOURNAL OF STATISTICS | 2017, Vol. 11, Issue 2
Keywords
Robust Lasso; high dimensions; error bounds; joint scale and location estimation; REGRESSION SHRINKAGE; VARIABLE SELECTION; ROBUST REGRESSION; PERSISTENCE;
DOI
10.1214/17-EJS1304
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Classification Codes
020208; 070103; 0714;
Abstract
In this paper we investigate error bounds for convex loss functions for the Lasso in linear models, first establishing a gap in the theory with respect to the existing error bounds. Then, under the compatibility condition and mild additional conditions, we recover bounds for the absolute value estimation error and the squared prediction error which appear to be far more appropriate than the existing bounds for the convex loss Lasso. Interestingly, asymptotically the only difference between the new bounds for the convex loss Lasso and those for the classical Lasso is a multiplicative term depending solely on a well-known expression from the robust statistics literature. We show that this result holds whether or not the scale parameter needs to be estimated jointly with the regression coefficients. Finally, we use this ratio to optimize our bounds in terms of minimaxity.
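As a point of reference for the abstract, the sketch below writes out the convex loss Lasso estimator and the standard classical-Lasso bounds under the compatibility condition that the new bounds are compared against. The notation (convex loss rho, tuning parameter lambda, sparsity s_0, compatibility constant phi_0) follows standard high-dimensional statistics references and is not taken from the paper itself; the paper's own constants and conditions may differ.

```latex
% Convex loss Lasso in the linear model y_i = x_i^T beta^0 + eps_i,
% where rho is a convex loss (rho(u) = u^2 recovers the classical Lasso):
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p}
  \frac{1}{n}\sum_{i=1}^{n} \rho\!\left(y_i - x_i^\top \beta\right)
  + \lambda \|\beta\|_1 .

% Classical-Lasso benchmarks under the compatibility condition with
% constant \phi_0, active-set size s_0, and a suitably chosen \lambda,
% as given in standard high-dimensional statistics references:
\|\hat{\beta} - \beta^0\|_1 \le \frac{4\,\lambda\, s_0}{\phi_0^2},
\qquad
\frac{1}{n}\,\bigl\|X(\hat{\beta} - \beta^0)\bigr\|_2^2
  \le \frac{4\,\lambda^2 s_0}{\phi_0^2}.
```

According to the abstract, the paper's bounds for the convex loss case asymptotically take the same form as the classical benchmarks above, up to a multiplicative factor given by a well-known expression from the robust statistics literature.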
Pages: 2832 - 2875
Number of pages: 44