Convergence control methods for Markov chain Monte Carlo algorithms

Cited by: 68
Author
Robert, CP
Institution
[1] Statistics Laboratory, CREST-INSEE
Keywords
Gibbs sampling; Metropolis algorithm; central limit theorem; asymptotic variance; renewal theory; duality principle; finite state Markov chains; missing data; ergodic theorem; Rao-Blackwellization; importance sampling; trapezoidal integration
DOI
10.1214/ss/1177009937
Chinese Library Classification
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
Markov chain Monte Carlo methods have been increasingly popular since their introduction by Gelfand and Smith. However, while the breadth and variety of Markov chain Monte Carlo applications are properly astounding, progress in the control of convergence for these algorithms has been slow, despite its relevance in practical implementations. We present here different approaches toward this goal based on functional and mixing theories, while paying particular attention to the central limit theorem and to the approximation of the limiting variance. Renewal theory in the spirit of Mykland, Tierney and Yu is presented as the most promising technique in this regard, and we illustrate its potential in several examples. In addition, we stress that many strong convergence properties can be derived from the study of simple subchains which are produced by Markov chain Monte Carlo algorithms, due to a duality principle obtained in Diebolt and Robert for mixture estimation. We show here the generality of this principle which applies, for instance, to most missing data models. A more empirical stopping rule for Markov chain Monte Carlo algorithms is related to the simultaneous convergence of different estimators of the quantity of interest. Besides the regular ergodic average, we propose the Rao-Blackwellized version as well as estimates based on importance sampling and trapezoidal approximations of the integrals.
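As a concrete illustration of the empirical stopping rule sketched in the abstract (declaring approximate convergence once several estimators of the same quantity agree), the following minimal Python sketch compares the plain ergodic average with a Rao-Blackwellized estimator in a toy Gibbs sampler. This is not the paper's implementation: the bivariate normal target, the correlation rho = 0.8, and the agreement tolerance are illustrative assumptions.

# Minimal sketch of the "simultaneous convergence of different estimators"
# stopping rule, under assumed settings (not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                     # assumed correlation of the bivariate normal target
sd = np.sqrt(1.0 - rho**2)    # conditional standard deviation

# Gibbs sampler for (X, Y) ~ N(0, [[1, rho], [rho, 1]]); we estimate E[X] = 0.
n_iter = 20_000
x = np.empty(n_iter)
y = np.empty(n_iter)
x_cur, y_cur = 2.0, 2.0       # deliberately poor starting point
for t in range(n_iter):
    x_cur = rng.normal(rho * y_cur, sd)  # X | Y = y  ~  N(rho*y, 1 - rho^2)
    y_cur = rng.normal(rho * x_cur, sd)  # Y | X = x  ~  N(rho*x, 1 - rho^2)
    x[t], y[t] = x_cur, y_cur

iters = np.arange(1, n_iter + 1)
ergodic = np.cumsum(x) / iters           # plain ergodic average of the x-draws
rao_black = rho * np.cumsum(y) / iters   # Rao-Blackwellized: average of E[X | Y = y_t]

# Crude version of the rule: flag the first iteration where the two estimators
# fall within a tolerance of each other. In practice one would demand sustained
# agreement over many iterations, not a single crossing.
tol = 0.01
agree = np.abs(ergodic - rao_black) < tol
print("first iteration where estimators agree within tol:",
      iters[agree][0] if agree.any() else "never")
print("final estimates:", ergodic[-1], rao_black[-1])

The Rao-Blackwellized estimator averages the conditional expectations E[X | Y = y_t] = rho * y_t instead of the raw draws, so it typically has smaller variance; persistent disagreement between the two trajectories signals that the chain has not yet converged.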
Pages: 231-253
Page count: 23