A large deviation principle for the empirical measures of Metropolis-Hastings chains

Cited: 0
Authors
Milinanni, Federica [1]
Nyquist, Pierre [2,3]
Affiliations
[1] KTH Royal Inst Technol, Dept Math, S-10044 Stockholm, Sweden
[2] Chalmers Univ Technol, Dept Math Sci, S-41296 Gothenburg, Sweden
[3] Univ Gothenburg, Dept Math Sci, S-41296 Gothenburg, Sweden
Funding
Swedish Research Council
Keywords
Large deviations; Empirical measure; Markov chain Monte Carlo; Metropolis-Hastings; MARKOV PROCESS EXPECTATIONS; MONTE-CARLO; ASYMPTOTIC EVALUATION; CONVERGENCE-RATES; SPECTRAL THEORY; LIMIT-THEOREMS; VARIANCE;
DOI
10.1016/j.spa.2023.104293
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
To sample from a given target distribution, Markov chain Monte Carlo (MCMC) sampling relies on constructing an ergodic Markov chain with the target distribution as its invariant measure. For any MCMC method, an important question is how to evaluate its efficiency. One approach is to consider the associated empirical measure and how fast it converges to the stationary distribution of the underlying Markov process. Recently, this question has been considered from the perspective of large deviation theory for different types of MCMC methods, including, e.g., non-reversible Metropolis-Hastings on a finite state space, non-reversible Langevin samplers, the zig-zag sampler, and parallel tempering. This approach, based on large deviations, has proven successful in analysing existing methods and designing new, efficient ones. However, for the Metropolis-Hastings algorithm on more general state spaces, the workhorse of MCMC sampling, the same techniques have not been available for analysing performance, as the underlying Markov chain dynamics violate the conditions used to prove existing large deviation results for empirical measures of a Markov chain. This also extends to methods built on the same idea as Metropolis-Hastings, such as the Metropolis-adjusted Langevin algorithm (MALA) or ABC-MCMC. In this paper, we take the first steps towards such a large deviations-based analysis of Metropolis-Hastings-like methods by proving a large deviation principle for the empirical measures of Metropolis-Hastings chains. In addition, we characterize the rate function and its properties in terms of the acceptance and rejection parts of the Metropolis-Hastings dynamics.
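The objects the abstract discusses can be made concrete with a small sketch: a random-walk Metropolis-Hastings chain and the time averages taken under its empirical measure. This is a generic illustration, not the paper's construction; the Gaussian target, step size, and function names here are illustrative choices.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings chain (1-D illustrative sketch).

    Proposes x' = x + step * N(0, 1); with a symmetric proposal the
    acceptance probability reduces to min(1, pi(x') / pi(x)), which
    makes the target pi the invariant measure of the chain.
    """
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        proposal = x + step * rng.gauss(0.0, 1.0)
        log_alpha = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal          # accept the proposed move
        # otherwise reject: the chain stays at x for this step
        chain.append(x)
    return chain

# Target: standard normal, log pi(x) = -x^2 / 2 up to an additive constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)

# The empirical measure puts mass 1/n on each visited state, so time
# averages (1/n) * sum_k f(X_k) approximate E_pi[f]; the large deviation
# principle in the paper quantifies how fast such averages concentrate.
mean = sum(chain) / len(chain)
second_moment = sum(x * x for x in chain) / len(chain)
```

For the standard normal target, `mean` should be near 0 and `second_moment` near 1; the rate function studied in the paper governs the exponential decay of the probability that the empirical measure deviates from the target, separated into contributions from the acceptance and rejection parts of the dynamics.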
Pages: 20