Robust Variable Selection and Regularization in Quantile Regression Based on Adaptive-LASSO and Adaptive E-NET

Cited: 1
Authors
Mudhombo, Innocent [1 ]
Ranganai, Edmore [1 ]
Affiliation
[1] Univ South Africa, Dept Stat, Florida Campus,Private Bag X6, ZA-1710 Roodepoort, South Africa
Keywords
weighted quantile regression; adaptive LASSO penalty; adaptive E-NET penalty; collinearity-inducing point; collinearity-hiding point; collinearity influential points; NONCONCAVE PENALIZED LIKELIHOOD; DIVERGING NUMBER; ASYMPTOTICS
DOI
10.3390/computation10110203
Chinese Library Classification
O1 [Mathematics];
Discipline classification codes
0701; 070101
Abstract
Although variable selection and regularization procedures have been extensively considered in the literature for the quantile regression (QR) scenario via penalization, many such procedures fail to deal simultaneously with data aberrations in the design space, namely, high leverage points (X-space outliers) and collinearity challenges. Some high leverage points, referred to as collinearity-influential observations, tend to adversely alter the eigenstructure of the design matrix by inducing or masking collinearity. Therefore, the literature recommends that the problems of collinearity and high leverage points be dealt with simultaneously. In this article, we suggest adaptive LASSO and adaptive E-NET penalized QR (QR-ALASSO and QR-AE-NET) procedures, where the weights are based on a QR estimator, as remedies. We extend this methodology to penalized weighted QR versions of the WQR-LASSO and WQR-E-NET procedures we suggested earlier. In the literature, adaptive weights are based on the RIDGE regression (RR) parameter estimator. Although the use of this estimator may be plausible at the ℓ1 estimator (QR at τ = 0.5) for symmetric distributions, it may not be so at extreme quantile levels. Therefore, we use a QR-based estimator to derive adaptive weights. We carried out a comparative study of the QR-LASSO and QR-E-NET procedures and the ones we suggest here, viz., QR-ALASSO, QR-AE-NET, and their weighted penalized versions (WQR-ALASSO and WQR-AE-NET). The simulation study results show that QR-ALASSO, QR-AE-NET, WQR-ALASSO and WQR-AE-NET generally outperform their non-adaptive counterparts.
At predictor matrices with collinearity-inducing points under normality, QR-ALASSO and QR-AE-NET outperform the non-adaptive procedures in the unweighted scenarios as follows: in all 16 cases (100%) with respect to correctly selected (shrunk) zero coefficients; in 88% with respect to correctly fitted models; and in 81% with respect to prediction. In the weighted penalized WQR scenarios, WQR-ALASSO and WQR-AE-NET outperform their non-adaptive versions as follows: 75% of the time with respect to both correctly fitted models and correctly shrunk zero coefficients, and 63% with respect to prediction. At predictor matrices with collinearity-masking points under normality, QR-ALASSO and QR-AE-NET, respectively, outperform the non-adaptive procedures in the unweighted scenarios as follows: in prediction, 100% and 88% of the time; with respect to correctly fitted models, 100% and 50% (while tying in 50%); and with respect to correctly shrunk zero coefficients, 100% of the time. In the weighted scenario, WQR-ALASSO and WQR-AE-NET outperform their respective non-adaptive versions as follows: with respect to prediction, both 63% of the time; with respect to correctly fitted models, 88% of the time; and with respect to correctly shrunk zero coefficients, 100% of the time. At predictor matrices with collinearity-inducing points under the t-distribution, the QR-ALASSO and QR-AE-NET procedures outperform their respective non-adaptive procedures in the unweighted scenarios as follows: in prediction, 100% and 75% of the time; with respect to correctly fitted models, 88% of the time each; and with respect to correctly shrunk zero coefficients, 88% and 100% of the time. Additionally, comparing WQR-ALASSO and WQR-AE-NET with their unweighted versions, the former outperform the latter in all respective cases with respect to prediction, whilst there is no clear "winner" with respect to the other two measures.
Overall, WQR-ALASSO generally outperforms all other models with respect to all measures. At the predictor matrix with collinearity-masking points under the t-distribution, all adaptive versions outperformed their respective non-adaptive versions with respect to all metrics. In the unweighted scenarios, QR-ALASSO and QR-AE-NET dominate their non-adaptive versions as follows: in prediction, 63% and 75% of the time; with respect to correctly fitted models, 100% and 38% (while tying in 62%); and with respect to correctly shrunk zero coefficients, 100% of the time. In the weighted scenarios, all adaptive versions outperformed their non-adaptive versions 62% of the time in both respective cases with respect to prediction, while it is vice versa with respect to correctly fitted models and correctly shrunk zero coefficients. In the weighted scenarios, WQR-ALASSO and WQR-AE-NET dominate their respective non-adaptive versions as follows: with respect to correctly fitted models, 62% of the time, while with respect to correctly shrunk zero coefficients, 100% of the time in both cases. At the design matrix with both collinearity and high leverage points under heavy-tailed distributions (t-distributions with d ∈ (1, 6) degrees of freedom), the dominance of the adaptive procedures over the non-adaptive ones is again evident. In the unweighted scenarios, the QR-ALASSO and QR-AE-NET procedures outperform their non-adaptive versions as follows: in prediction, 75% and 62% of the time; with respect to correctly fitted models, 100% and 88% of the time; and with respect to correctly shrunk zero coefficients, 100% of the time in both cases.
In the weighted scenarios, WQR-ALASSO and WQR-AE-NET dominate their non-adaptive versions as follows: with respect to prediction, 100% of the time in both cases; and with respect to both correctly fitted models and correctly shrunk zero coefficients, 88% of the time. Results from applications of the suggested procedures to real-life data sets are more or less in line with the simulation study results.
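The core QR-ALASSO idea summarized above — a pilot quantile-regression fit supplying the adaptive penalty weights, rather than a ridge estimator — can be sketched as a small linear program. This is an illustrative sketch only, not the paper's exact algorithm: the function name, the regularization level `lam`, the weight power `gamma`, and the numerical guard are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def qr_alasso(X, y, tau=0.5, lam=1.0, gamma=1.0):
    """Adaptive-LASSO penalized quantile regression via linear programming.

    Minimizes sum_i rho_tau(y_i - x_i' b) + lam * sum_j w_j |b_j|,
    with adaptive weights w_j = 1 / |b0_j|^gamma taken from a pilot
    (unpenalized) QR fit, as in the QR-based weighting idea; the
    intercept is left unpenalized (weight 0).
    """
    n, p = X.shape
    Xc = np.column_stack([np.ones(n), X])          # prepend intercept column

    def solve(weights):
        # LP variables: [b+ (p+1), b- (p+1), u+ (n), u- (n)], all >= 0,
        # where b = b+ - b- and the residual r = u+ - u-.
        c = np.concatenate([lam * weights, lam * weights,
                            tau * np.ones(n), (1.0 - tau) * np.ones(n)])
        A_eq = np.hstack([Xc, -Xc, np.eye(n), -np.eye(n)])  # Xc b + r = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        z = res.x
        return z[:p + 1] - z[p + 1:2 * (p + 1)]

    beta0 = solve(np.zeros(p + 1))                 # pilot QR fit (no penalty)
    # adaptive weights from the pilot fit; 1e-8 guards against division by zero
    w = np.concatenate([[0.0], 1.0 / (np.abs(beta0[1:]) ** gamma + 1e-8)])
    return solve(w)
```

With this weighting, coefficients the pilot fit estimates near zero receive very large penalties and are shrunk to zero, while strong coefficients are penalized lightly; setting `tau` away from 0.5 gives the extreme-quantile fits the abstract discusses.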
Pages: 24