On the use of cross-validation for the calibration of the adaptive lasso

Cited by: 1
Authors
Ballout, Nadim [1 ]
Etievant, Lola [1 ,2 ]
Viallon, Vivian [3 ]
Affiliations
[1] Univ Lyon, Univ Eiffel, Univ Lyon 1, IFSTTAR, Bron, France
[2] Univ Claude Bernard Lyon 1, Inst Camille Jordan, Lyon, France
[3] WHO, Nutr & Metab Branch, Int Agcy Res Canc IARC, Lyon, France
Keywords
adaptive lasso; calibration; cross-validation; one-step lasso; tuning parameter; model selection; regression
DOI
10.1002/bimj.202200047
Chinese Library Classification
Q [Biological Sciences]
Discipline codes
07; 0710; 09
Abstract
Cross-validation is the standard method for hyperparameter tuning, or calibration, of machine learning algorithms. The adaptive lasso is a popular class of penalized approaches based on weighted L1-norm penalties, with weights derived from an initial estimate of the model parameter. Although it violates the paramount principle of cross-validation, according to which no information from the hold-out test set should be used when constructing the model on the training set, a "naive" cross-validation scheme is often implemented for the calibration of the adaptive lasso. The unsuitability of this naive cross-validation scheme in this context has not been well documented in the literature. In this work, we recall why the naive scheme is theoretically unsuitable and how proper cross-validation should be implemented in this particular context. Using both synthetic and real-world examples and considering several versions of the adaptive lasso, we illustrate the flaws of the naive scheme in practice. In particular, we show that it can lead to the selection of adaptive lasso estimates that perform substantially worse than those selected via a proper scheme in terms of both support recovery and prediction error. In other words, our results show that the theoretical unsuitability of the naive scheme translates into suboptimality in practice, and call for abandoning it.
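The proper scheme described in the abstract can be sketched as follows: the initial (pilot) estimate that defines the adaptive lasso weights is recomputed inside each training fold, so no hold-out information leaks into the weights. This is a minimal illustration, not the paper's implementation; it assumes centered data, a ridge pilot fit for the weights, and hypothetical helper names (`adaptive_lasso_path`, `proper_cv_error`).

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import KFold

def adaptive_lasso_path(X, y, lambdas, gamma=1.0):
    """Adaptive lasso fits for a grid of tuning parameters.

    The penalty weights w_j = 1 / |beta_pilot_j|^gamma come from a ridge
    pilot fit; rescaling the columns of X by 1/w turns the weighted L1
    penalty into a plain lasso problem. Assumes X and y are centered.
    """
    pilot = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)
    w = 1.0 / (np.abs(pilot.coef_) ** gamma + 1e-8)  # penalty weights
    Xw = X / w  # reparametrization: lasso on Xw == weighted lasso on X
    coefs = []
    for lam in lambdas:
        fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000).fit(Xw, y)
        coefs.append(fit.coef_ / w)  # map back to the original scale
    return np.array(coefs)

def proper_cv_error(X, y, lambdas, n_splits=5):
    """Proper cross-validation: the pilot estimate is refit on each
    training fold, so the hold-out fold never influences the weights.
    (The naive scheme criticized in the paper would instead compute the
    pilot fit once on the full data, leaking hold-out information.)"""
    errs = np.zeros(len(lambdas))
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for tr, te in cv.split(X):
        coefs = adaptive_lasso_path(X[tr], y[tr], lambdas)  # pilot sees X[tr] only
        for k, beta in enumerate(coefs):
            errs[k] += np.mean((y[te] - X[te] @ beta) ** 2)
    return errs / n_splits
```

The calibrated tuning parameter is then `lambdas[np.argmin(proper_cv_error(X, y, lambdas))]`; the only change relative to the naive scheme is where the pilot fit is computed.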
Pages: 15
Related articles (50 in total)
  • [1] LASSO with cross-validation for genomic selection
    Usai, M. Graziano; Goddard, Mike E.; Hayes, Ben J.
    GENETICS RESEARCH, 2009, 91 (06): 427-436
  • [2] Stabilizing the lasso against cross-validation variability
    Roberts, S.; Nowak, G.
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2014, 70: 198-211
  • [3] A note on cross-validation for lasso under measurement errors
    Datta, Abhirup; Zou, Hui
    TECHNOMETRICS, 2020, 62 (04): 549-556
  • [4] Risk consistency of cross-validation with lasso-type procedures
    Homrighausen, Darren; McDonald, Daniel J.
    STATISTICA SINICA, 2017, 27 (03): 1017-1036
  • [5] Cross-validation is safe to use
    King, Ross D.; Orhobor, Oghenejokpeme I.; Taylor, Charles C.
    NATURE MACHINE INTELLIGENCE, 2021, 3 (04): 276
  • [6] Calibration and cross-validation of MCCB and CogState in schizophrenia
    Lees, Jane; Applegate, Eve; Emsley, Richard; Lewis, Shon; Michalopoulou, Panayiota; Collier, Tracey; Lopez-Lopez, Cristina; Kapur, Shitij; Pandina, Gahan J.; Drake, Richard J.
    PSYCHOPHARMACOLOGY, 2015, 232 (21-22): 3873-3882
  • [7] Calibration and cross-validation of MCCB and CogState in schizophrenia
    Lees, Jane; Applegate, Eve; Drake, Richard; Lewis, Shon
    SCHIZOPHRENIA RESEARCH, 2014, 153: S222
  • [8] Leave-one-out cross-validation is risk consistent for lasso
    Homrighausen, Darren; McDonald, Daniel J.
    MACHINE LEARNING, 2014, 97: 65-78