Variables selection using L0 penalty

Cited by: 0
Authors
Zhang, Tonglin [1 ]
Affiliations
[1] Purdue Univ, Dept Stat, 150 North Univ St, W Lafayette, IN 47907 USA
Keywords
Consistency; Generalized information criterion; Generalized linear models; High-dimensional data; Model size; Penalized maximum likelihood; CENTRAL LIMIT-THEOREMS; TUNING PARAMETER SELECTION; REGRESSION; REGULARIZATION; SUBSET; MODELS; LASSO
DOI
10.1016/j.csda.2023.107860
CLC Classification Number
TP39 [Computer Applications];
Discipline Code
081203 ; 0835 ;
Abstract
The determination of a tuning parameter by the generalized information criterion (GIC) is considered an important issue in variable selection. It is shown that the GIC and the L0 penalized objective functions are equivalent, leading to a new L0 penalized maximum likelihood method for high-dimensional generalized linear models in this article. Based on techniques for the well-known discrete optimization problem in theoretical computer science, a two-step algorithm for local solutions is proposed. The first step optimizes the L0 penalized objective function under a given model size, where only a maximum likelihood algorithm is needed. The second step optimizes the L0 penalized objective function over a candidate set of model sizes, where only the GIC is needed. As the tuning parameter can be fixed, its selection can be ignored in the proposed method. The theoretical study shows that the algorithm runs in polynomial time and that any resulting local solution is consistent; thus, it is not necessary to compute the global solution in practice. The numerical studies show that the proposed method generally outperforms its competitors.
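The abstract describes the two-step procedure only at a high level. The minimal Python sketch below illustrates the idea under stated assumptions: for each candidate model size k it builds a support and refits by maximum likelihood (step 1), then picks the size minimizing a GIC score (step 2). The greedy support search, the logistic-regression setting, and the GIC weight a_n = log(log n) * log(p) are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np
    import statsmodels.api as sm

    def neg_loglik(X, y, support):
        # Maximum likelihood fit of a logistic model restricted to `support`
        # (intercept included); returns the negative log-likelihood.
        res = sm.Logit(y, sm.add_constant(X[:, support])).fit(disp=0)
        return -res.llf

    def select_by_gic(X, y, max_size):
        n, d = X.shape
        a_n = np.log(np.log(n)) * np.log(d)   # assumed GIC weight; other choices are possible
        support, best, best_gic = [], [], np.inf
        for k in range(1, max_size + 1):
            # Step 1: grow the support to size k and refit by maximum likelihood
            # (a greedy local solution, used here only for illustration).
            remaining = [j for j in range(d) if j not in support]
            losses = [neg_loglik(X, y, support + [j]) for j in remaining]
            support = support + [remaining[int(np.argmin(losses))]]
            # Step 2: score the size-k model by the GIC and keep the best size.
            gic = 2.0 * neg_loglik(X, y, support) + a_n * k
            if gic < best_gic:
                best, best_gic = list(support), gic
        return best, best_gic

Under these assumptions, a call such as select_by_gic(X, y, max_size=20) returns the selected support and its GIC score; because the tuning parameter is absorbed into the fixed weight a_n, no separate tuning-parameter search is performed.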
Pages: 18