Limitations of “Limitations of Bayesian Leave-one-out Cross-Validation for Model Selection”

Cited by: 2
Authors
Vehtari A. [1 ]
Simpson D.P. [2 ]
Yao Y. [3 ]
Gelman A. [4 ]
Affiliations
[1] Department of Computer Science, Aalto University, Espoo
[2] Department of Statistics, University of Toronto, Toronto
[3] Department of Statistics, Columbia University, New York, NY
[4] Department of Statistics and Department of Political Science, Columbia University, New York, NY
Funding
Academy of Finland; Natural Sciences and Engineering Research Council of Canada; National Science Foundation (USA)
Keywords
M-closed; M-open; Principle of complexity; Reality; Statistical convenience;
DOI
10.1007/s42113-018-0020-6
Abstract
In an earlier article in this journal, Gronau and Wagenmakers (2018) discuss some problems with leave-one-out cross-validation (LOO) for Bayesian model selection. However, the variant of LOO that Gronau and Wagenmakers discuss is at odds with a long literature on how to use LOO well. In this discussion, we discuss the use of LOO in practical data analysis, from the perspective that we need to abandon the idea that there is a device that will produce a single-number decision rule. © 2019, The Author(s).
Pages: 22-27
Page count: 5