Split Hamiltonian Monte Carlo

Cited by: 34
Authors
Shahbaba, Babak [1 ,2 ]
Lan, Shiwei [1 ]
Johnson, Wesley O. [1 ]
Neal, Radford M. [3 ,4 ]
Affiliations
[1] Univ Calif Irvine, Dept Stat, Irvine, CA 92697 USA
[2] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
[3] Univ Toronto, Dept Stat, Toronto, ON M5S 3G3, Canada
[4] Univ Toronto, Dept Comp Sci, Toronto, ON M5S 3G3, Canada
Funding
US National Science Foundation; Natural Sciences and Engineering Research Council of Canada;
Keywords
Markov chain Monte Carlo; Hamiltonian dynamics; Bayesian analysis;
DOI
10.1007/s11222-012-9373-1
Chinese Library Classification (CLC) number
TP301 [Theory and Methods];
Discipline classification code
081202;
Abstract
We show how the Hamiltonian Monte Carlo algorithm can sometimes be speeded up by "splitting" the Hamiltonian in a way that allows much of the movement around the state space to be done at low computational cost. One context where this is possible is when the log density of the distribution of interest (the potential energy function) can be written as the log of a Gaussian density, which is a quadratic function, plus a slowly-varying function. Hamiltonian dynamics for quadratic energy functions can be analytically solved. With the splitting technique, only the slowly-varying part of the energy needs to be handled numerically, and this can be done with a larger stepsize (and hence fewer steps) than would be necessary with a direct simulation of the dynamics. Another context where splitting helps is when the most important terms of the potential energy function and its gradient can be evaluated quickly, with only a slowly-varying part requiring costly computations. With splitting, the quick portion can be handled with a small stepsize, while the costly portion uses a larger stepsize. We show that both of these splitting approaches can reduce the computational cost of sampling from the posterior distribution for a logistic regression model, using either a Gaussian approximation centered on the posterior mode, or a Hamiltonian split into a term that depends on only a small number of critical cases, and another term that involves the larger number of cases whose influence on the posterior distribution is small.
Pages: 339 - 349
Number of pages: 11
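
As a concrete illustration of the first splitting scheme described in the abstract (splitting off a Gaussian approximation whose dynamics can be solved analytically), below is a minimal NumPy sketch. It is not the authors' reference implementation: it assumes an identity mass matrix and a precomputed eigendecomposition of the approximating precision matrix Sigma^{-1}, and names such as split_hmc_step and grad_U1 are illustrative, not taken from the paper.

# Minimal sketch of the Gaussian-split scheme (assumptions: identity mass matrix,
# precision matrix A = Sigma^{-1} of the Gaussian approximation supplied via its
# eigendecomposition; names like `split_hmc_step` and `grad_U1` are illustrative).
import numpy as np

def exact_gaussian_flow(q, p, t, mu, eigvals, eigvecs):
    # Exact flow for H0 = 0.5*(q-mu)^T A (q-mu) + 0.5*p^T p, with
    # A = eigvecs @ diag(eigvals) @ eigvecs.T: each eigendirection is a
    # harmonic oscillator with frequency omega_i = sqrt(eigvals[i]).
    omega = np.sqrt(eigvals)
    x, v = eigvecs.T @ (q - mu), eigvecs.T @ p      # rotate into the eigenbasis
    c, s = np.cos(omega * t), np.sin(omega * t)
    x_new = c * x + (s / omega) * v                 # analytic oscillator solution
    v_new = -omega * s * x + c * v
    return mu + eigvecs @ x_new, eigvecs @ v_new

def split_hmc_step(q, U, grad_U1, mu, eigvals, eigvecs, eps, n_steps, rng):
    # One update: half-steps on the slowly varying residual U1 = U - U0 bracket
    # the exact Gaussian flow; a standard Metropolis test corrects any error.
    p = rng.standard_normal(q.shape)
    current_H = U(q) + 0.5 * p @ p
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_steps):
        p_new = p_new - 0.5 * eps * grad_U1(q_new)
        q_new, p_new = exact_gaussian_flow(q_new, p_new, eps, mu, eigvals, eigvecs)
        p_new = p_new - 0.5 * eps * grad_U1(q_new)
    proposed_H = U(q_new) + 0.5 * p_new @ p_new
    accept = rng.random() < np.exp(current_H - proposed_H)
    return (q_new, True) if accept else (q, False)

# Toy usage: target U(q) = 0.5*||q||^2 + 0.1*sum(q^4), Gaussian part N(0, I).
rng = np.random.default_rng(0)
mu, eigvals, eigvecs = np.zeros(2), np.ones(2), np.eye(2)
U = lambda q: 0.5 * q @ q + 0.1 * np.sum(q**4)
grad_U1 = lambda q: 0.4 * q**3                      # gradient of the residual only
q = np.zeros(2)
for _ in range(1000):
    q, _ = split_hmc_step(q, U, grad_U1, mu, eigvals, eigvecs, eps=0.5, n_steps=5, rng=rng)

The second scheme in the abstract (cheap dominant terms handled with a small stepsize, costly residual terms with a larger one) follows the same outer structure, but replaces the exact Gaussian flow with several small leapfrog steps on the quickly evaluated part of the potential.
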
Related papers
50 in total
  • [21] Stochastic Gradient Hamiltonian Monte Carlo
    Chen, Tianqi
    Fox, Emily B.
    Guestrin, Carlos
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 32 (CYCLE 2), 2014, 32 : 1683 - 1691
  • [22] Reflection, Refraction, and Hamiltonian Monte Carlo
    Afshar, Hadi Mohasel
    Domke, Justin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 28 (NIPS 2015), 2015, 28
  • [23] Quantum dynamical Hamiltonian Monte Carlo
    Lockwood, Owen
    Weiss, Peter
    Aronshtein, Filip
    Verdon, Guillaume
    PHYSICAL REVIEW RESEARCH, 2024, 6 (03):
  • [24] Stochastic approximation Hamiltonian Monte Carlo
    Yun, Jonghyun
    Shin, Minsuk
    Hoon Jin, Ick
    Liang, Faming
    JOURNAL OF STATISTICAL COMPUTATION AND SIMULATION, 2020, 90 (17) : 3135 - 3156
  • [25] Unbiased Hamiltonian Monte Carlo with couplings
    Heng, J.
    Jacob, P. E.
    BIOMETRIKA, 2019, 106 (02) : 287 - 302
  • [26] Shadow Manifold Hamiltonian Monte Carlo
    van der Heide, Chris
    Hodgkinson, Liam
    Roosta, Fred
    Kroese, Dirk P.
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [27] Learning Hamiltonian Monte Carlo in R
    Thomas, Samuel
    Tu, Wanzhu
    AMERICAN STATISTICIAN, 2021, 75 (04) : 403 - 413
  • [28] Monte Carlo Hamiltonian: Inverse Potential
    Luo, Xiang-Qian
    Cheng, Xiao-Ni
    Kröger, Helmut
    Communications in Theoretical Physics, 2004, 41 (04) : 509 - 512
  • [29] SpHMC: Spectral Hamiltonian Monte Carlo
    Xiong, Haoyi
    Wang, Kafeng
    Bian, Jiang
    Zhu, Zhanxing
    Xu, Cheng-Zhong
    Guo, Zhishan
    Huan, Jun
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 5516 - 5524
  • [30] Continuously tempered Hamiltonian Monte Carlo
    Graham, Matthew M.
    Storkey, Amos J.
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI2017), 2017,