NEW IMPROVED CRITERION FOR MODEL SELECTION IN SPARSE HIGH-DIMENSIONAL LINEAR REGRESSION MODELS

Cited by: 2
Authors
Gohain, Prakash B. [1]
Jansson, Magnus [1]
Affiliation
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, Stockholm, Sweden
Funding
European Research Council
Keywords
High-dimensional inference; model selection; Lasso; OMP; sparse estimation; subset selection
DOI
10.1109/ICASSP43922.2022.9746867
CLC classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
The extended Bayesian information criterion (EBIC) and the extended Fisher information criterion (EFIC) are two popular criteria for model selection in sparse high-dimensional linear regression models. However, EBIC is inconsistent when the signal-to-noise ratio (SNR) is high but the sample size is small, and EFIC is not invariant to data scaling, which degrades its performance under different signal and noise statistics. In this paper, we present a refined criterion called EBICR, where the 'R' stands for robust. EBICR improves on both EBIC and EFIC: it is scale-invariant and is a consistent estimator of the true model as the sample size grows large and/or as the SNR tends to infinity. The performance of EBICR is compared to existing methods such as EBIC, EFIC and the multi-beta-test (MBT). Simulation results indicate that the performance of EBICR in identifying the true model is either on par with or superior to that of the other considered methods.
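For background, the standard EBIC that EBICR refines scores a candidate support by an in-sample fit term plus a complexity penalty that accounts for the number of models of each size. The sketch below is not the paper's EBICR formula; it is a minimal illustration of the classical EBIC (Chen and Chen, 2008, in its Gaussian log-likelihood form) applied to ranking candidate supports, with all data and candidate sets invented for the example.

```python
# Hedged sketch of the classical EBIC for sparse linear regression.
# This is background only; the paper's EBICR criterion differs.
import numpy as np
from math import lgamma, log

def log_binom(p, k):
    # log of the binomial coefficient C(p, k), via log-gamma
    return lgamma(p + 1) - lgamma(k + 1) - lgamma(p - k + 1)

def ebic(y, X, support, gamma=1.0):
    """EBIC score of a candidate support set (lower is better).

    n*log(RSS/n) + k*log(n) + 2*gamma*log C(p, k),
    where k = |support| and p = number of predictors.
    """
    n, p = X.shape
    k = len(support)
    if k == 0:
        rss = float(y @ y)
    else:
        Xs = X[:, list(support)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ beta
        rss = float(r @ r)
    return n * log(rss / n) + k * log(n) + 2.0 * gamma * log_binom(p, k)

# Toy example: n = 50 samples, p = 20 predictors, true support {0, 3}.
rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)

# Candidate supports, e.g. as produced along an OMP or Lasso path.
candidates = [(0,), (0, 3), (0, 3, 7), tuple(range(5))]
best = min(candidates, key=lambda s: ebic(y, X, s))
print(best)  # at this SNR the true support (0, 3) should typically win
```

In practice the candidate supports would come from a screening method such as OMP or the Lasso path, as in the paper's simulations; EBICR replaces the scoring function above with a scale-invariant, consistent alternative.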
Pages: 5692-5696 (5 pages)