Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo

Cited by: 0
Authors
Havasi, Marton [1 ]
Hernandez-Lobato, Jose Miguel [1 ,2 ,3 ]
Jose Murillo-Fuentes, Juan [4 ]
机构
[1] Univ Cambridge, Dept Engn, Cambridge, England
[2] Microsoft Res, Cambridge, England
[3] Alan Turing Inst, London, England
[4] Univ Seville, Dept Signal Theory & Commun, Seville, Spain
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep Gaussian Processes (DGPs) are hierarchical generalizations of Gaussian Processes that combine well-calibrated uncertainty estimates with the high flexibility of multilayer models. One of the biggest challenges with these models is that exact inference is intractable. The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution. This is a potentially poor, unimodal approximation of the generally multimodal posterior. In this work, we provide evidence for the non-Gaussian nature of the posterior and we apply the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples. To efficiently optimize the hyperparameters, we introduce the Moving Window MCEM algorithm. This results in significantly better predictions at a lower computational cost than its VI counterpart. Our method thus establishes a new state of the art for inference in DGPs.
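The Stochastic Gradient Hamiltonian Monte Carlo update named in the abstract can be illustrated with a minimal, self-contained sketch. This follows the discretized dynamics of Chen et al. (2014) with the noise estimate beta-hat set to zero; the toy 1-D target, step size, and friction value are illustrative assumptions, not the paper's DGP setup:

```python
import numpy as np

def sghmc_sample(grad_log_post, theta0, n_samples, eps=0.01, alpha=0.05, seed=1):
    """Stochastic Gradient HMC (Chen et al., 2014), with beta_hat = 0.

    Per step:
        v     <- v + eps * grad_log_post(theta) - alpha * v + N(0, 2*alpha*eps)
        theta <- theta + v
    The friction term alpha * v compensates for the extra noise injected
    by stochastic (minibatch) gradient estimates.
    """
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    out = np.empty((n_samples,) + theta.shape)
    for t in range(n_samples):
        noise = rng.normal(0.0, np.sqrt(2.0 * alpha * eps), size=theta.shape)
        v = v + eps * grad_log_post(theta) - alpha * v + noise
        theta = theta + v
        out[t] = theta
    return out

# Toy target: posterior N(3, 1), so grad log p(theta) = -(theta - 3).
# Adding Gaussian noise to the gradient mimics a minibatch estimate.
grad_rng = np.random.default_rng(0)
def noisy_grad(theta):
    return -(theta - 3.0) + grad_rng.normal(0.0, 0.5, size=theta.shape)

samples = sghmc_sample(noisy_grad, theta0=[0.0], n_samples=5000)
print(samples[1000:].mean())  # close to the target mean of 3 after burn-in
```

In the paper this sampler is run over the DGP's latent function values, with hyperparameters updated from a moving window of recent samples (the Moving Window MCEM algorithm); the sketch above only shows the core sampling update.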
Pages: 11