Split Hamiltonian Monte Carlo

Cited by: 34
Authors:
Shahbaba, Babak [1,2]
Lan, Shiwei [1]
Johnson, Wesley O. [1]
Neal, Radford M. [3,4]
Affiliations:
[1] Univ Calif Irvine, Dept Stat, Irvine, CA 92697 USA
[2] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
[3] Univ Toronto, Dept Stat, Toronto, ON M5S 3G3, Canada
[4] Univ Toronto, Dept Comp Sci, Toronto, ON M5S 3G3, Canada
Funding:
US National Science Foundation; Natural Sciences and Engineering Research Council of Canada
Keywords:
Markov chain Monte Carlo; Hamiltonian dynamics; Bayesian analysis
DOI:
10.1007/s11222-012-9373-1
Chinese Library Classification:
TP301 [Theory and Methods]
Discipline code:
081202
Abstract
We show how the Hamiltonian Monte Carlo algorithm can sometimes be sped up by "splitting" the Hamiltonian in a way that allows much of the movement around the state space to be done at low computational cost. One context where this is possible is when the log density of the distribution of interest (the negative of the potential energy function) can be written as the log of a Gaussian density, which is a quadratic function, plus a slowly-varying function. Hamiltonian dynamics for quadratic energy functions can be solved analytically. With the splitting technique, only the slowly-varying part of the energy needs to be handled numerically, and this can be done with a larger stepsize (and hence fewer steps) than would be necessary with a direct simulation of the dynamics. Another context where splitting helps is when the most important terms of the potential energy function and its gradient can be evaluated quickly, with only a slowly-varying part requiring costly computations. With splitting, the quick portion can be handled with a small stepsize, while the costly portion uses a larger stepsize. We show that both of these splitting approaches can reduce the computational cost of sampling from the posterior distribution for a logistic regression model, using either a Gaussian approximation centered on the posterior mode, or a Hamiltonian split into a term that depends on only a small number of critical cases, and another term that involves the larger number of cases whose influence on the posterior distribution is small.
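To make the first splitting concrete, the following is a minimal Python sketch (not the authors' reference code) of HMC with the Hamiltonian split as a half step on the residual energy U1, an exact flow of the Gaussian-plus-kinetic part, and another half step on U1. It assumes an identity mass matrix and a caller-supplied Gaussian approximation N(mu, Sigma), e.g. a Laplace approximation at the posterior mode; the names split_hmc, U1, and grad_U1 are illustrative. The quadratic part plus the kinetic energy is solved exactly as independent harmonic oscillators in the eigenbasis of Sigma^{-1}, so only the slowly-varying residual U1 is integrated numerically.

import numpy as np

def split_hmc(q0, U1, grad_U1, mu, Sigma, eps, n_leap, n_iter, rng=None):
    """HMC with H split as U1/2 | (Gaussian U0 + kinetic, exact) | U1/2."""
    rng = rng if rng is not None else np.random.default_rng()
    d = q0.size
    # Eigendecompose Sigma^{-1}: in the eigenbasis, the Gaussian-plus-kinetic
    # dynamics are d independent harmonic oscillators with frequencies omega.
    lam, V = np.linalg.eigh(np.linalg.inv(Sigma))
    omega = np.sqrt(lam)

    def U0(q):                          # quadratic (Gaussian) part of U
        z = V.T @ (q - mu)
        return 0.5 * np.sum(lam * z ** 2)

    def H(q, p):                        # full Hamiltonian for the accept test
        return U0(q) + U1(q) + 0.5 * p @ p

    q, samples = q0.astype(float).copy(), []
    for _ in range(n_iter):
        p = rng.standard_normal(d)      # resample momentum
        qn, pn = q.copy(), p.copy()
        for _ in range(n_leap):
            pn -= 0.5 * eps * grad_U1(qn)          # half step on residual U1
            # Exact flow for time eps of q' = p, p' = -lam * q, computed
            # coordinate-wise in the eigenbasis and rotated back.
            zq, zp = V.T @ (qn - mu), V.T @ pn
            c, s = np.cos(omega * eps), np.sin(omega * eps)
            zq, zp = zq * c + zp * s / omega, -zq * omega * s + zp * c
            qn, pn = mu + V @ zq, V @ zp
            pn -= 0.5 * eps * grad_U1(qn)          # half step on residual U1
        # Metropolis correction keeps the exact target distribution invariant.
        if np.log(rng.uniform()) < H(q, p) - H(qn, pn):
            q = qn
        samples.append(q.copy())
    return np.asarray(samples)

For a logistic regression posterior, U1 would be the difference between the exact negative log posterior and its quadratic approximation; because U1 varies slowly near the mode, eps can be much larger than plain leapfrog would tolerate. The data-splitting variant described in the abstract works analogously, with the cheap "critical case" terms taking several small inner leapfrog steps per outer step on the costly remaining terms.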
Pages: 339-349
Page count: 11
Related Papers
50 records in total
  • [41] Neural network gradient Hamiltonian Monte Carlo
    Li, Lingge
    Holbrook, Andrew
    Shahbaba, Babak
    Baldi, Pierre
    Computational Statistics, 2019, 34: 281-299
  • [42] Pseudo-marginal Hamiltonian Monte Carlo
    Alenlöv, Johan
    Doucet, Arnaud
    Lindsten, Fredrik
    Journal of Machine Learning Research, 2021, 22: 1-45
  • [43] Hamiltonian Monte Carlo with energy conserving subsampling
    Dang, Khue-Dung
    Quiroz, Matias
    Kohn, Robert
    Tran, Minh-Ngoc
    Villani, Mattias
    Journal of Machine Learning Research, 2019, 20
  • [44] Hamiltonian Monte Carlo without detailed balance
    Sohl-Dickstein, Jascha
    Mudigonda, Mayur
    DeWeese, Michael R.
    International Conference on Machine Learning, Vol. 32 (Cycle 1), 2014, 32
  • [45] New approach to Monte Carlo Hamiltonian and test
    Huang, CQ
    Luo, XQ
    Kröger, H
    High Energy Physics and Nuclear Physics-Chinese Edition, 2005, 29 (09): 928-932
  • [46] Cluster Monte Carlo methods for the FePt Hamiltonian
    Lyberatos, A.
    Parker, G. J.
    Journal of Magnetism and Magnetic Materials, 2016, 400: 266-270
  • [47] Fixed-distance Hamiltonian Monte Carlo
    Afshar, Hadi Mohasel
    Cripps, Sally
    Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
  • [50] Monte Carlo Hamiltonian from stochastic basis
    Huang, CQ
    Kröger, H
    Luo, XQ
    Moriarty, KJM
    Physics Letters A, 2002, 299 (5-6): 483-493