Using Bayesian Leave-One-Out and Leave-Future-Out Cross-Validation to Evaluate the Performance of Rate-Time Models to Forecast Production of Tight-Oil Wells

Cited by: 0
Authors
Maraggi L.M.R. [1 ]
Lake L.W. [1 ]
Walsh M.P. [1 ]
Affiliations
[1] The University of Texas at Austin, United States
Source
SPE Reservoir Evaluation & Engineering, 2022, 25 (4)
DOI
10.2118/209234-PA
Abstract
Production forecasting is usually performed by applying a single model from a classical statistical standpoint (point estimation). This approach neglects (a) model uncertainty and (b) quantification of the uncertainty of the model's estimates. This work evaluates the predictive accuracy of rate-time models to forecast production from tight-oil wells using Bayesian methods. We apply Bayesian leave-one-out (LOO) and leave-future-out (LFO) cross-validation (CV) using an accuracy metric that evaluates the uncertainty of the models' estimates: the expected log predictive density (elpd). We illustrate the application of the procedure to tight-oil wells of West Texas. This work assesses the predictive accuracy of rate-time models to forecast production of tight-oil wells. We use two empirical models, the Arps hyperbolic and logistic growth models, and two physics-based models: scaled slightly compressible single-phase and scaled two-phase (oil and gas) solutions of the diffusivity equation. First, we perform Bayesian inference to generate probabilistic production forecasts for each model using a Bayesian workflow in which we assess the convergence of the Markov chain Monte Carlo (MCMC) algorithm, calibrate the models, and evaluate the robustness of their inferences. Second, we evaluate the predictive accuracy of the models using the elpd accuracy metric. This metric provides a measure of out-of-sample predictive performance. We apply two different CV techniques: LOO and LFO. The results of this study are the following. First, we evaluate the predictive performance of the models using the elpd accuracy metric, which accounts for the uncertainty of the models' estimates by assessing distributions instead of point estimates. Second, we perform fast CV calculations using an importance sampling technique to evaluate and compare the results of two CV techniques: leave-one-out cross-validation (LOO-CV) and leave-future-out cross-validation (LFO-CV). While the goal of LOO-CV is to evaluate the models' ability to accurately resemble the structure of the production data, LFO-CV aims to assess the models' capacity to predict future-time production (honoring the time-dependent structure of the data). Despite the difference in their prediction goals, both methods yield similar results on the set of tight-oil wells under study. The logistic growth model yields the best predictive performance for most of the wells in the data set, followed by the two-phase physics-based flow model. This work shows the application of new tools to evaluate the predictive accuracy of models used to forecast production of tight-oil wells using (a) an accuracy metric that accounts for the uncertainty of the models' estimates and (b) fast computation of two CV techniques, LOO-CV and LFO-CV. To our knowledge, the proposed approach is novel and suitable to evaluate, and eventually select, the rate-time model(s) with the best predictive accuracy to forecast hydrocarbon production. © 2022 The Authors.
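The abstract describes scoring rate-time models with the expected log predictive density (elpd) under LOO-CV and LFO-CV. The following is a minimal sketch, not the authors' code: it fits an Arps hyperbolic decline model to synthetic data with a simple random-walk Metropolis sampler, then computes the in-sample log pointwise predictive density and an exact one-step-ahead LFO elpd by brute-force refitting. All data values, priors, and sampler settings are illustrative assumptions; the paper's fast importance-sampling approximations are not reproduced here.

```python
# Minimal illustrative sketch (not the authors' implementation).
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def arps_rate(t, qi, Di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*Di*t)**(1/b)."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

# Synthetic monthly production data (stand-in for a tight-oil well).
t = np.arange(1, 37, dtype=float)                    # months on production
q_true = arps_rate(t, qi=500.0, Di=0.15, b=1.1)      # assumed "true" rates
sigma = 25.0                                         # assumed observation noise
q_obs = q_true + rng.normal(0.0, sigma, size=t.size)

def log_lik(theta, t, q):
    """Pointwise Gaussian log-likelihood of the observed rates."""
    qi, Di, b = theta
    mu = arps_rate(t, qi, Di, b)
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((q - mu) / sigma) ** 2

def metropolis(t, q, n_draws=4000, step=np.array([5.0, 0.005, 0.02])):
    """Random-walk Metropolis over (qi, Di, b) with flat priors on a box."""
    lo, hi = np.array([1.0, 1e-3, 0.1]), np.array([2000.0, 1.0, 2.0])
    theta = np.array([400.0, 0.1, 1.0])
    lp = log_lik(theta, t, q).sum()
    draws = np.empty((n_draws, 3))
    for s in range(n_draws):
        prop = theta + rng.normal(0.0, step)
        if np.all((prop > lo) & (prop < hi)):
            lp_prop = log_lik(prop, t, q).sum()
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        draws[s] = theta
    return draws[n_draws // 2:]                      # discard burn-in

# In-sample elpd: log pointwise predictive density over posterior draws.
draws = metropolis(t, q_obs)
loglik = np.stack([log_lik(th, t, q_obs) for th in draws])   # shape (S, N)
lppd = logsumexp(loglik, axis=0) - np.log(loglik.shape[0])   # log mean_s p(y_i | theta_s)
print("in-sample lppd:", lppd.sum())

# Exact leave-future-out CV: refit on months 1..k, score month k+1.
elpd_lfo = 0.0
for k in range(24, t.size - 1):                      # keep at least 24 months for fitting
    d = metropolis(t[: k + 1], q_obs[: k + 1])
    ll_next = np.array([log_lik(th, t[k + 1:k + 2], q_obs[k + 1:k + 2])[0] for th in d])
    elpd_lfo += logsumexp(ll_next) - np.log(ll_next.size)
print("exact 1-step-ahead elpd (LFO):", elpd_lfo)
```

Refitting for every fold, as above, is expensive; the fast calculations the abstract refers to replace most refits with importance sampling over the full-data posterior (Pareto-smoothed importance sampling, as implemented, for example, in the loo and ArviZ packages), refitting only when the importance-weight diagnostics indicate the approximation is unreliable.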
Pages: 730-750
Page count: 20
Related papers
26 in total
  • [1] Using Bayesian Leave-One-Out and Leave-Future-Out Cross-Validation to Evaluate the Performance of Rate-Time Models to Forecast Production of Tight-Oil Wells
    Maraggi, Leopoldo M. Ruiz
    Lake, Larry W.
    Walsh, Mark P.
    SPE RESERVOIR EVALUATION & ENGINEERING, 2022, 25 (04) : 730 - 750
  • [2] Approximate leave-future-out cross-validation for Bayesian time series models
    Bürkner, Paul-Christian
    Gabry, Jonah
    Vehtari, Aki
    JOURNAL OF STATISTICAL COMPUTATION AND SIMULATION, 2020 : 2499 - 2523
  • [3] Automatic cross-validation in structured models: Is it time to leave out leave-one-out?
    Adin, Aritz
    Krainski, Elias Teixeira
    Lenzi, Amanda
    Liu, Zhedong
    Martinez-Minaya, Joaquin
    Rue, Håvard
    SPATIAL STATISTICS, 2024, 62
  • [4] Bayesian Leave-One-Out Cross-Validation for Large Data
    Magnusson, Måns
    Andersen, Michael Riis
    Jonasson, Johan
    Vehtari, Aki
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [5] Rejoinder: More Limitations of Bayesian Leave-One-Out Cross-Validation
    Gronau Q.F.
    Wagenmakers E.-J.
    Computational Brain & Behavior, 2019, 2 (1) : 35 - 47
  • [6] Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection
    Gronau Q.F.
    Wagenmakers E.-J.
    Computational Brain & Behavior, 2019, 2 (1) : 1 - 11
  • [7] Robust Leave-One-Out Cross-Validation for High-Dimensional Bayesian Models
    Silva, Luca Alessandro
    Zanella, Giacomo
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2024, 119 (547) : 2369 - 2381
  • [8] Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC
    Vehtari, Aki
    Gelman, Andrew
    Gabry, Jonah
    STATISTICS AND COMPUTING, 2017, 27 (05) : 1413 - 1432
  • [9] Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data
    Magnusson, Måns
    Andersen, Michael Riis
    Jonasson, Johan
    Vehtari, Aki
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 341 - 350
  • [10] Limitations of “Limitations of Bayesian Leave-one-out Cross-Validation for Model Selection”
    Vehtari A.
    Simpson D.P.
    Yao Y.
    Gelman A.
    Computational Brain & Behavior, 2019, 2 (1) : 22 - 27