Finite mixture regression: A sparse variable selection by model selection for clustering

Cited by: 14
Authors
Devijver, Emilie [1 ]
Affiliations
[1] Univ Paris Saclay, CNRS, Univ Paris 11, Lab Math Orsay, F-91405 Orsay, France
Source
ELECTRONIC JOURNAL OF STATISTICS | 2015, Vol. 9, Issue 2
Keywords
Variable selection; finite mixture regression; non-asymptotic penalized criterion; l1-regularized method; LASSO; rates
DOI
10.1214/15-EJS1082
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208 ; 070103 ; 0714 ;
Abstract
We consider a finite mixture of Gaussian regression models for high-dimensional data, where the number of covariates may be much larger than the sample size. We propose to estimate the unknown conditional mixture density by a maximum likelihood estimator, restricted to the relevant variables selected by an l1-penalized maximum likelihood estimator. We obtain an oracle inequality satisfied by this estimator, with a Jensen-Kullback-Leibler type loss. Our oracle inequality is deduced from a general model selection theorem for maximum likelihood estimators on a random subcollection of models. We derive the penalty shape of the criterion, which depends on the complexity of the random model collection.
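A minimal, hypothetical sketch of the two-stage workflow described in the abstract is given below: (1) run EM for a mixture of Gaussian regressions with an l1 penalty in the M-step to select relevant variables per component, then (2) refit the maximum likelihood estimator restricted to those variables. The function names, the univariate response, the fixed penalty level alpha, the single random start, and the use of scikit-learn are illustrative assumptions only; the paper treats a multivariate response and selects among the resulting models with a non-asymptotic penalized criterion.

import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso, LinearRegression

def em_step(X, y, resp, alpha=None, supports=None):
    """One EM update.  With `alpha`: weighted Lasso M-step (selection stage).
    With `supports`: weighted OLS restricted to the selected variables (refit stage)."""
    n, p = X.shape
    K = resp.shape[1]
    pis = resp.mean(axis=0)
    betas, mus, sigmas = np.zeros((K, p)), np.zeros((K, n)), np.ones(K)
    for k in range(K):
        w = resp[:, k] + 1e-8
        if supports is None:                      # l1-penalized weighted fit
            fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(X, y, sample_weight=w)
            betas[k] = fit.coef_
        elif supports[k].size:                    # restricted maximum likelihood refit
            S = supports[k]
            fit = LinearRegression(fit_intercept=False).fit(X[:, S], y, sample_weight=w)
            betas[k, S] = fit.coef_
        mus[k] = X @ betas[k]
        sigmas[k] = max(np.sqrt(np.sum(w * (y - mus[k]) ** 2) / w.sum()), 1e-6)
    dens = np.maximum(pis * norm.pdf(y[:, None], loc=mus.T, scale=sigmas), 1e-300)
    return dens / dens.sum(axis=1, keepdims=True), betas, sigmas, pis

def lasso_mle(X, y, K=2, alpha=0.1, n_iter=100, seed=0):
    """Stage 1: l1-penalized EM to select relevant variables per component.
    Stage 2: EM restricted to those variables (the refitted estimator)."""
    rng = np.random.default_rng(seed)
    resp = np.zeros((len(y), K))
    resp[np.arange(len(y)), rng.integers(0, K, size=len(y))] = 1.0  # random hard start
    for _ in range(n_iter):
        resp, betas, _, _ = em_step(X, y, resp, alpha=alpha)
    supports = [np.flatnonzero(b) for b in betas]
    for _ in range(n_iter):
        resp, betas, sigmas, pis = em_step(X, y, resp, supports=supports)
    return betas, sigmas, pis, supports

if __name__ == "__main__":
    # toy data: two regression components sharing the first three relevant covariates
    rng = np.random.default_rng(1)
    n, p = 300, 50
    X = rng.standard_normal((n, p))
    z = rng.integers(0, 2, size=n)                # latent cluster labels
    true_betas = np.zeros((2, p))
    true_betas[0, :3], true_betas[1, :3] = 3.0, -1.0
    y = np.einsum("ij,ij->i", X, true_betas[z]) + 0.5 * rng.standard_normal(n)
    # a single random start is used here; in practice several restarts would be run
    betas, sigmas, pis, supports = lasso_mle(X, y)
    print("selected variables per component:", [s.tolist() for s in supports])

In this sketch the penalty level alpha is fixed, whereas the paper's point is to build a whole collection of supports (e.g. along a grid of penalty levels) and to choose among the refitted estimators with a penalized criterion whose shape follows from the oracle inequality.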
Pages: 2642-2674
Number of pages: 33