Targeted cross-validation

Cited by: 1
|
Authors
Zhang, Jiawei [1 ]
Ding, Jie [1 ]
Yang, Yuhong [1 ]
Affiliations
[1] Univ Minnesota, Sch Stat, Minneapolis, MN 55455 USA
Funding
U.S. National Science Foundation;
Keywords
Consistency; cross-validation; model selection; regression
DOI
10.3150/22-BEJ1461
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
In many applications, we have access to the complete dataset but are only interested in prediction over a particular region of the predictor variables. A standard approach is to find the globally best modeling method from a set of candidate methods. However, it is perhaps rare in reality that one candidate method is uniformly better than the others. A natural approach for this scenario is to apply a weighted L2 loss in performance assessment to reflect the region-specific interest. We propose a targeted cross-validation (TCV) to select models or procedures based on a general weighted L2 loss. We show that the TCV is consistent in selecting the best performing candidate under the weighted L2 loss. Experimental studies are used to demonstrate the use of TCV and its potential advantage over global CV or the approach of using only local data for modeling a local region. Previous investigations on CV have relied on the condition that, when the sample size is large enough, the ranking of two candidates stays the same. However, in many applications with changing data-generating processes or highly adaptive modeling methods, the relative performance of the methods is not static as the sample size varies. Even with a fixed data-generating process, it is possible that the ranking of two methods switches infinitely many times. In this work, we broaden the concept of selection consistency by allowing the best candidate to switch as the sample size varies, and then establish the consistency of the TCV. This flexible framework can be applied to high-dimensional and complex machine learning scenarios where the relative performances of modeling procedures are dynamic.
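As a rough illustration of the idea described in the abstract (this is not the authors' implementation; the candidate models, the indicator weight function, and all names below are hypothetical), targeted CV replaces the usual squared-error CV score with a weighted squared error whose weights concentrate on the region of interest, and then picks the candidate with the smaller weighted score:

```python
import numpy as np

def targeted_cv_score(fit_predict, X, y, weight_fn, n_splits=5, seed=0):
    """Weighted-L2 cross-validation score: average of w(x) * (y - yhat)^2
    over held-out points, normalized by the total weight."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_splits)
    num = den = 0.0
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        pred = fit_predict(X[train], y[train], X[test])
        w = weight_fn(X[test])
        num += np.sum(w * (y[test] - pred) ** 2)
        den += np.sum(w)
    return num / den

def poly_candidate(degree):
    """A simple candidate procedure: global polynomial least squares."""
    def fit_predict(X_tr, y_tr, X_te):
        coef = np.polyfit(X_tr, y_tr, degree)
        return np.polyval(coef, X_te)
    return fit_predict

# Toy data: nonlinear signal with noise, observed over [-2, 2].
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 400)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(400)

# Weight concentrating on the target region x in (0.5, 1.5);
# an indicator is the simplest choice, but any nonnegative w(x) works.
weight = lambda x: ((x > 0.5) & (x < 1.5)).astype(float)

scores = {d: targeted_cv_score(poly_candidate(d), X, y, weight) for d in (1, 9)}
best_degree = min(scores, key=scores.get)
```

Here the weighted score compares how each candidate performs specifically inside the target region; a global (unweighted) CV would instead average errors over all of [-2, 2], which can favor a different candidate than the one best for the region of interest.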
Pages: 377-402 (26 pages)