Targeted cross-validation

Cited: 1
|
Authors
Zhang, Jiawei [1 ]
Ding, Jie [1 ]
Yang, Yuhong [1 ]
Institutions
[1] Univ Minnesota, Sch Stat, Minneapolis, MN 55455 USA
Funding
US National Science Foundation;
Keywords
Consistency; cross-validation; model selection; regression; MODEL-SELECTION; VARIABLE SELECTION; CONSISTENCY; CONVERGENCE; RATES;
DOI
10.3150/22-BEJ1461
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline codes
020208 ; 070103 ; 0714 ;
Abstract
In many applications, we have access to the complete dataset but are only interested in the prediction of a particular region of predictor variables. A standard approach is to find the globally best modeling method from a set of candidate methods. However, it is perhaps rare in reality that one candidate method is uniformly better than the others. A natural approach for this scenario is to apply a weighted L2 loss in performance assessment to reflect the region-specific interest. We propose a targeted cross-validation (TCV) to select models or procedures based on a general weighted L2 loss. We show that the TCV is consistent in selecting the best performing candidate under the weighted L2 loss. Experimental studies are used to demonstrate the use of TCV and its potential advantage over the global CV or the approach of using only local data for modeling a local region. Previous investigations on CV have relied on the condition that when the sample size is large enough, the ranking of two candidates stays the same. However, in many applications with the setup of changing data-generating processes or highly adaptive modeling methods, the relative performance of the methods is not static as the sample size varies. Even with a fixed data-generating process, it is possible that the ranking of two methods switches infinitely many times. In this work, we broaden the concept of the selection consistency by allowing the best candidate to switch as the sample size varies, and then establish the consistency of the TCV. This flexible framework can be applied to high-dimensional and complex machine learning scenarios where the relative performances of modeling procedures are dynamic.
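The core idea of the abstract — scoring each candidate procedure by a cross-validated *weighted* squared-error loss, with weights concentrated on the predictor region of interest — can be sketched as follows. This is a minimal illustration, not the paper's exact TCV procedure: the function names, the plain K-fold splitting, and the `weight_fn` interface are assumptions made here for concreteness.

```python
import numpy as np

def targeted_cv(candidates, X, y, weight_fn, n_splits=5, seed=0):
    """Select the candidate minimizing a weighted squared-error CV estimate.

    candidates : list of callables fit(X_train, y_train, X_test) -> predictions
    weight_fn  : maps test predictors to nonnegative weights encoding the
                 region-specific interest (the weighted L2 loss of the paper)
    Returns (index of best candidate, list of weighted CV scores).
    Illustrative sketch only; names and splitting scheme are assumptions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_splits)
    scores = []
    for fit in candidates:
        total, wsum = 0.0, 0.0
        for k in range(n_splits):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
            pred = fit(X[train], y[train], X[test])
            w = weight_fn(X[test])                      # region-targeted weights
            total += np.sum(w * (y[test] - pred) ** 2)  # weighted L2 loss
            wsum += np.sum(w)
        scores.append(total / wsum)
    return int(np.argmin(scores)), scores
```

With a weight function peaked near the region of interest (e.g. a Gaussian bump around a target predictor value), the selected candidate is the one that predicts best locally, which may differ from the global-CV winner.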
Pages: 377-402
Number of pages: 26
Related papers
50 records total
  • [21] RORSCHACH RELIABILITY - CROSS-VALIDATION
    DECATO, CM
    [J]. PERCEPTUAL AND MOTOR SKILLS, 1983, 56 (01) : 11 - 14
  • [22] CROSS-VALIDATION OF GORDON SIV
    MORRIS, BB
    [J]. PERCEPTUAL AND MOTOR SKILLS, 1968, 27 (01) : 44 - &
  • [23] No free lunch for cross-validation
    Zhu, HY
    Rohwer, R
    [J]. NEURAL COMPUTATION, 1996, 8 (07) : 1421 - 1426
  • [24] The uncertainty principle of cross-validation
    Last, Mark
    [J]. 2006 IEEE International Conference on Granular Computing, 2006, : 275 - 280
  • [25] Cross-validation on extreme regions
    Aghbalou, Anass
    Bertail, Patrice
    Portier, Francois
    Sabourin, Anne
    [J]. EXTREMES, 2024,
  • [26] Experience with a cross-validation approach
    Gansser, D.
    [J]. Chromatographia, 2002, 55 : S71 - S74
  • [27] CROSS-VALIDATION AND MULTINOMIAL PREDICTION
    STONE, M
    [J]. BIOMETRIKA, 1974, 61 (03) : 509 - 515
  • [28] Cross-validation is safe to use
    King, Ross D.
    Orhobor, Oghenejokpeme I.
    Taylor, Charles C.
    [J]. NATURE MACHINE INTELLIGENCE, 2021, 3 (04) : 276 - 276
  • [29] Cross-Validation for Correlated Data
    Rabinowicz, Assaf
    Rosset, Saharon
    [J]. JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2022, 117 (538) : 718 - 731
  • [30] Cross-validation and median criterion
    Zheng, ZG
    Yang, Y
    [J]. STATISTICA SINICA, 1998, 8 (03) : 907 - 921