Approximate leave-future-out cross-validation for Bayesian time series models

Cited by: 54
Authors
Bürkner, Paul-Christian [1]
Gabry, Jonah [2,3]
Vehtari, Aki [1]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Konemiehentie 2, Espoo 02150, Finland
[2] Columbia Univ, Appl Stat Ctr, New York, NY USA
[3] Columbia Univ, ISERP, New York, NY USA
Funding
Academy of Finland;
Keywords
Time series analysis; cross-validation; Bayesian inference; Pareto smoothed importance sampling; R package;
DOI
10.1080/00949655.2020.1783262
Chinese Library Classification
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
One of the common goals of time series analysis is to use the observed series to inform predictions for future observations. In the absence of any actual new data to predict, cross-validation can be used to estimate a model's future predictive accuracy, for instance, for the purpose of model comparison or selection. Exact cross-validation for Bayesian models is often computationally expensive, but approximate cross-validation methods have been developed, most notably methods for leave-one-out cross-validation (LOO-CV). If the actual prediction task is to predict the future given the past, LOO-CV provides an overly optimistic estimate because the information from future observations is available to influence predictions of the past. To properly account for the time series structure, we can use leave-future-out cross-validation (LFO-CV). Like exact LOO-CV, exact LFO-CV requires refitting the model many times to different subsets of the data. Using Pareto smoothed importance sampling, we propose a method for approximating exact LFO-CV that drastically reduces the computational costs while also providing informative diagnostics about the quality of the approximation.
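The approximation the abstract describes can be illustrated with a toy model: fit once on the first L observations, score each later observation by importance-reweighting the existing posterior draws, and refit only when the weights degenerate. Everything below is an illustrative sketch, not the authors' implementation: the conjugate Normal model, the variable names, and especially the crude effective-sample-size refit trigger, which stands in for the Pareto-k diagnostic from PSIS that the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=40)  # toy "time series" (i.i.d. for simplicity)
L = 20                             # observations required before the first prediction

def posterior_draws(y_past, n_draws=4000):
    # Conjugate toy model: y ~ Normal(mu, 1) with a flat prior on mu,
    # so the posterior of mu is Normal(mean(y_past), 1/sqrt(n)).
    n = len(y_past)
    return rng.normal(y_past.mean(), 1.0 / np.sqrt(n), size=n_draws)

def log_lik(y_obs, mu):
    # Log density of one observation under each posterior draw of mu.
    return -0.5 * np.log(2.0 * np.pi) - 0.5 * (y_obs - mu) ** 2

# Exact LFO-CV: refit the model at every step i (expensive in general).
exact_elpd = 0.0
for i in range(L, len(y)):
    mu = posterior_draws(y[:i])
    exact_elpd += np.log(np.mean(np.exp(log_lik(y[i], mu))))

# Approximate LFO-CV: fit once on y[:L], then reuse the draws via
# importance ratios r_s proportional to p(y_{L+1:i} | theta_s), and
# refit only when the weights degenerate. The paper detects degeneracy
# with the Pareto-k diagnostic from PSIS; this sketch substitutes a
# simple effective-sample-size threshold (an assumption, not the paper's rule).
mu = posterior_draws(y[:L])
log_r = np.zeros_like(mu)
approx_elpd = 0.0
refits = 0
for i in range(L, len(y)):
    w = np.exp(log_r - log_r.max())
    if w.sum() ** 2 / (w ** 2).sum() < 0.1 * len(mu):  # weights degenerated
        mu = posterior_draws(y[:i])                    # refit on data up to i
        log_r = np.zeros_like(mu)
        w = np.ones_like(mu)
        refits += 1
    w /= w.sum()
    approx_elpd += np.log(np.sum(w * np.exp(log_lik(y[i], mu))))
    log_r += log_lik(y[i], mu)  # fold y[i] into the ratios for the next step
```

In this toy run the approximate loop needs far fewer refits than the len(y) - L refits of exact LFO-CV, while the two elpd estimates stay close; this is the computational saving the abstract refers to.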
Pages
2499-2523 (25 pages)
Related papers
50 records in total
  • [41] Estimating MLP generalisation ability without a test set using fast, approximate leave-one-out cross-validation
    Myles, AJ
    Murray, AF
    Wallace, AR
    Barnard, J
    Smith, G
    NEURAL COMPUTING & APPLICATIONS, 1997, 5 (03) : 134 - 151
  • [43] Markov cross-validation for time series model evaluations
    Jiang, Gaoxia
    Wang, Wenjian
    INFORMATION SCIENCES, 2017, 375 : 219 - 233
  • [44] On the use of cross-validation for time series predictor evaluation
    Bergmeir, Christoph
    Benitez, Jose M.
    INFORMATION SCIENCES, 2012, 191 : 192 - 213
  • [45] Approximate Bayesian Computation for a Class of Time Series Models
    Jasra, Ajay
    INTERNATIONAL STATISTICAL REVIEW, 2015, 83 (03) : 405 - 435
  • [46] Model averaging based on leave-subject-out cross-validation for vector autoregressions
    Liao, Jun
    Zong, Xianpeng
    Zhang, Xinyu
    Zou, Guohua
    JOURNAL OF ECONOMETRICS, 2019, 209 (01) : 35 - 60
  • [47] Kriging Model Averaging Based on Leave-One-Out Cross-Validation Method
    Feng, Ziheng
    Zong, Xianpeng
    Xie, Tianfa
    Zhang, Xinyu
    JOURNAL OF SYSTEMS SCIENCE & COMPLEXITY, 2024, 37 (05) : 2132 - 2156
  • [48] A fast leave-one-out cross-validation for SVM-like family
    Zhang, Jingxiang
    Wang, Shitong
    NEURAL COMPUTING & APPLICATIONS, 2016, 27 : 1717 - 1730
  • [49] Algebraic shortcuts for leave-one-out cross-validation in supervised network inference
    Stock, Michiel
    Pahikkala, Tapio
    Airola, Antti
    Waegeman, Willem
    De Baets, Bernard
    BRIEFINGS IN BIOINFORMATICS, 2020, 21 (01) : 262 - 271
  • [50] Tournament leave-pair-out cross-validation for receiver operating characteristic analysis
    Perez, Ileana Montoya
    Airola, Antti
    Bostrom, Peter J.
    Jambor, Ivan
    Pahikkala, Tapio
    STATISTICAL METHODS IN MEDICAL RESEARCH, 2019, 28 (10-11) : 2975 - 2991