CONVERGENCE OF ADAPTIVE AND INTERACTING MARKOV CHAIN MONTE CARLO ALGORITHMS

Cited by: 61
Authors
Fort, G. [1 ]
Moulines, E. [1 ]
Priouret, P. [2 ]
Affiliations
[1] TELECOM ParisTech CNRS, LTCI, F-75634 Paris 13, France
[2] Univ Paris 06, LPMA, F-75252 Paris 05, France
Source
ANNALS OF STATISTICS | 2011, Vol. 39, No. 6
Keywords
Markov chains; Markov chain Monte Carlo; adaptive Monte Carlo; ergodic theorems; law of large numbers; adaptive Metropolis; equi-energy sampler; parallel tempering; interacting tempering; EQUI-ENERGY SAMPLER; LIMIT-THEOREMS; ERGODICITY; HASTINGS; RATES;
DOI
10.1214/11-AOS938
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208 ; 070103 ; 0714 ;
Abstract
Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms have recently been introduced in the literature. These novel simulation algorithms are designed to improve sampling efficiency for complex distributions. Motivated by some recently introduced algorithms (such as the adaptive Metropolis algorithm and the interacting tempering algorithm), we develop a general methodological and theoretical framework to establish both the convergence of the marginal distribution and a strong law of large numbers. This framework weakens the conditions introduced in the pioneering paper by Roberts and Rosenthal [J. Appl. Probab. 44 (2007) 458-475]. It also covers the case when the target distribution π is sampled using Markov transition kernels whose stationary distribution differs from π.
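For readers unfamiliar with the adaptive Metropolis algorithm cited as motivation, the following is a minimal illustrative sketch of a Haario-style adaptive random-walk Metropolis sampler, in which the Gaussian proposal covariance is recursively adapted to the empirical covariance of past samples. It is not the authors' construction from the paper; the function name, parameters, and scaling constants are illustrative assumptions.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=5000, eps=1e-6, rng=None):
    """Sketch of an adaptive random-walk Metropolis sampler (Haario-style adaptation)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    sd = 2.38 ** 2 / d                     # classical dimension-dependent scaling
    mean, cov = x.copy(), np.eye(d)        # running estimates used for adaptation
    samples = np.empty((n_iter, d))
    log_px = log_target(x)
    for t in range(n_iter):
        # proposal covariance adapts to the history; eps*I keeps it positive definite
        prop_cov = sd * cov + sd * eps * np.eye(d)
        y = rng.multivariate_normal(x, prop_cov)
        log_py = log_target(y)
        if np.log(rng.uniform()) < log_py - log_px:   # Metropolis accept/reject step
            x, log_px = y, log_py
        samples[t] = x
        # recursive update of the empirical mean and covariance (the "adaptation")
        n = t + 2                                     # x0 counts as the first sample
        delta = x - mean
        mean = mean + delta / n
        cov = cov * (n - 1) / n + np.outer(delta, x - mean) / n
    return samples

# usage: draw from a 2-D standard Gaussian target
draws = adaptive_metropolis(lambda z: -0.5 * np.dot(z, z), x0=np.zeros(2), n_iter=2000)
```

Because the transition kernel keeps changing with the sample history, the chain is no longer Markovian, which is exactly why convergence results of the kind established in this paper are needed.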
Pages: 3262 - 3289
Page count: 28