Kriging Model Averaging Based on Leave-One-Out Cross-Validation Method

Cited: 0
Authors
FENG Ziheng [1 ]
ZONG Xianpeng [1 ]
XIE Tianfa [1 ]
ZHANG Xinyu [2 ]
Affiliations
[1] School of Mathematics, Statistics and Mechanics, Beijing University of Technology
[2] Academy of Mathematics and Systems Science, Chinese Academy of Sciences
DOI
Not available
CLC Number
O212 [Mathematical Statistics]
Abstract
In recent years, the Kriging model has gained wide popularity in fields such as geostatistics, econometrics, and computer experiments, and research on the model has proliferated accordingly. In this paper, the authors propose a model averaging estimator built on the best linear unbiased predictions of Kriging models and the leave-one-out cross-validation method, taking model uncertainty into account. The authors present a weight selection criterion for the model averaging estimator and provide two theoretical justifications for the proposed method. First, the weights estimated under the proposed criterion are asymptotically optimal in the sense of achieving the lowest possible prediction risk. Second, when the candidate model set includes correctly specified models, the proposed method asymptotically assigns all weight to those models. The effectiveness of the proposed method is verified through numerical analyses.
Journal of Systems Science & Complexity, 2024, 37(5): 2132-2156 (25 pages)
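
The abstract describes weighting the best linear unbiased predictions of several candidate Kriging models, with the weights chosen by a leave-one-out cross-validation criterion. The following is a minimal sketch of that general idea, not the authors' implementation: scikit-learn's GaussianProcessRegressor stands in for the Kriging predictor, the candidate models are distinguished here only by assumed kernel choices, and the weights are selected on the unit simplex by minimizing the squared leave-one-out prediction error.

import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
n = 50
X = rng.uniform(0.0, 1.0, size=(n, 2))
y = np.sin(4.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

# Candidate "Kriging" models: purely for illustration, the candidates differ
# in their assumed correlation length; the paper builds its candidates from
# Kriging BLUPs under different model specifications.
kernels = [ConstantKernel() * RBF(length_scale=0.2),
           ConstantKernel() * RBF(length_scale=1.0),
           ConstantKernel() * RBF(length_scale=5.0)]
M = len(kernels)

# Leave-one-out predictions for every candidate model.
loo_pred = np.empty((n, M))
for m, kern in enumerate(kernels):
    for i in range(n):
        keep = np.arange(n) != i
        gp = GaussianProcessRegressor(kernel=kern, normalize_y=True)
        gp.fit(X[keep], y[keep])
        loo_pred[i, m] = gp.predict(X[i:i + 1])[0]

# LOO-CV weight choice: minimize the squared error of the weighted
# combination of candidate predictions over the unit simplex.
def cv_criterion(w):
    return float(np.sum((y - loo_pred @ w) ** 2))

res = minimize(cv_criterion,
               x0=np.full(M, 1.0 / M),
               bounds=[(0.0, 1.0)] * M,
               constraints={"type": "eq", "fun": lambda w: np.sum(w) - 1.0})
weights = res.x
print("estimated model-averaging weights:", np.round(weights, 3))

In this sketch, a new point would then be predicted by refitting each candidate on the full data and combining the resulting predictions with the estimated weights.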