The estimation of prediction error: Covariance penalties and cross-validation - Rejoinder

Cited: 0
Author
Efron, B
Affiliation
[1] Department of Statistics, Stanford University, Stanford
Keywords
C_p; Degrees of freedom; Nonparametric estimates; Parametric bootstrap; Rao-Blackwellization; SURE;
DOI
10.1198/016214504000000917
CLC Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208; 070103; 0714;
Abstract
Having constructed a data-based estimation rule, perhaps a logistic regression or a classification tree, the statistician would like to know its performance as a predictor of future cases. There are two main theories concerning prediction error: (1) penalty methods such as C_p, Akaike's information criterion, and Stein's unbiased risk estimate, which depend on the covariance between data points and their corresponding predictions; and (2) cross-validation and related nonparametric bootstrap techniques. This article concerns the connection between the two theories. A Rao-Blackwell type of relation is derived in which nonparametric methods such as cross-validation are seen to be randomized versions of their covariance penalty counterparts. The model-based penalty methods offer substantially better accuracy, assuming that the model is believable.
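The two theories the abstract contrasts can be illustrated for ordinary least squares, where the covariance penalty has a closed form: for a linear smoother the sum of covariances cov(y_i, ŷ_i) equals σ² times the trace of the hat matrix, giving the classical C_p estimate, while leave-one-out cross-validation provides the nonparametric counterpart. The following sketch (a minimal illustration, not Efron's own implementation; the simulated design, coefficients, and noise level are assumptions for demonstration) computes both estimates on the same fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 3, 1.0
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])          # hypothetical true coefficients
y = X @ beta + rng.normal(scale=sigma, size=n)

# OLS is a linear smoother: yhat = H y, with H the hat matrix.
# For such rules, sum_i cov(y_i, yhat_i) = sigma^2 * trace(H),
# so trace(H) plays the role of degrees of freedom.
H = X @ np.linalg.solve(X.T @ X, X.T)
df = np.trace(H)                            # equals p for full-rank OLS
yhat = H @ y
rss = np.sum((y - yhat) ** 2)

# (1) Covariance-penalty (C_p-style) estimate of prediction error:
cp_err = rss / n + 2 * sigma**2 * df / n

# (2) Leave-one-out cross-validation, using the standard shortcut
# for linear smoothers (no refitting needed):
loo_err = np.mean(((y - yhat) / (1 - np.diag(H))) ** 2)

print(f"df = {df:.2f}, Cp estimate = {cp_err:.3f}, LOOCV = {loo_err:.3f}")
```

Both quantities estimate the same prediction error; the Rao-Blackwell relation discussed in the article explains why the model-based C_p version is typically less variable than the cross-validated one when the model holds.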
Pages: 640-642
Page count: 3