Variable selection for model-based clustering using the integrated complete-data likelihood

Cited by: 0
Authors
Matthieu Marbac
Mohammed Sedki
Affiliations
[1] McMaster University, Department of Mathematics and Statistics
[2] INSERM U1181 and University of Paris Sud
Source
Statistics and Computing | 2017 / Volume 27
Keywords
Gaussian mixture model; Information criterion; Integrated complete-data likelihood; Model-based clustering; Variable selection;
Abstract
Variable selection in cluster analysis is important yet challenging. It can be achieved by regularization methods, which balance clustering accuracy against the number of selected variables through a lasso-type penalty. However, the calibration of the penalty term is open to criticism. Model selection methods are an efficient alternative, yet they require the difficult optimization of an information criterion, which involves combinatorial problems. First, most of these optimization algorithms rely on a suboptimal procedure (e.g., a stepwise method). Second, the algorithms are often computationally expensive because they need multiple calls to EM algorithms. Here we propose a new information criterion based on the integrated complete-data likelihood. It does not require the maximum likelihood estimate, and its maximization turns out to be simple and computationally efficient. The original contribution of our approach is to perform model selection without requiring any parameter estimation; parameter inference is then needed only for the single selected model. This approach is applied to variable selection for a Gaussian mixture model under a conditional independence assumption. Numerical experiments on simulated and benchmark datasets show that the proposed method often outperforms two classical approaches to variable selection. The proposed approach is implemented in the R package VarSelLCM, available on CRAN.
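The criterion can be sketched as follows (notation assumed from the standard ICL literature, not quoted from the paper): the mixture parameters are integrated out of the complete-data likelihood, and the criterion is maximized directly over the partition, which is why no maximum likelihood estimate is needed during model comparison. For a candidate model m (a subset of clustering variables),

  p(\mathbf{x}, \mathbf{z} \mid m) = \int p(\mathbf{x}, \mathbf{z} \mid m, \theta)\, p(\theta \mid m)\, d\theta,
  \mathrm{MICL}(m) = \max_{\mathbf{z}} \log p(\mathbf{x}, \mathbf{z} \mid m),
  \hat{m} = \operatorname{arg\,max}_{m} \mathrm{MICL}(m).

A minimal usage sketch of the VarSelLCM package is given below. The function VarSelCluster() and its arguments gvals, vbleSelec and crit.varsel are recalled from the package documentation; treat the exact signature as an assumption and consult ?VarSelCluster before relying on it.

  # Minimal sketch, assuming VarSelLCM exposes VarSelCluster() with a
  # "MICL" option for the integrated complete-data likelihood criterion.
  # install.packages("VarSelLCM")   # if the package is not installed
  library(VarSelLCM)

  x <- iris[, 1:4]   # continuous variables only

  # Clustering with variable selection over 1 to 4 mixture components.
  res <- VarSelCluster(x, gvals = 1:4, vbleSelec = TRUE, crit.varsel = "MICL")

  summary(res)        # selected model and discriminative variables
  head(fitted(res))   # partition under the selected model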
Pages: 1049–1063
Page count: 14