STOCHASTIC ALGORITHMS AND BAYESIAN INFERENCE

Cited by: 0
Authors
GREEN, PJ [1]
Affiliation
[1] UNIV BRISTOL, DEPT MATH, BRISTOL BS8 1TW, AVON, ENGLAND
Source
STATISTICIAN | 1992 / Vol. 41 / No. 3
Keywords
DOI
10.2307/2348564
Chinese Library Classification: O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes: 020208; 070103; 0714
Abstract
Complex stochastic systems, large collections of random variables with non-trivial dependence structure, arise in probability modelling in many contexts. Examples include statistical mechanics, geographical epidemiology, pedigrees in genetics, statistical image analysis, and general multi-parameter Bayesian inference. In these contexts, we usually require the marginal distribution of some of the variables, or perhaps its expectation. However, in practice, such computations may not be amenable to either exact numerical calculation or direct simulation, because of the large numbers of variables that may be involved. An appealing alternative approach, which is currently receiving much attention, is the use of dynamic Monte Carlo methods such as the Gibbs sampler (Geman & Geman, 1984). This is an iterative simulation technique, in which each variable is considered in turn: its current value is discarded and replaced by a random number drawn from the conditional distribution of that variable given the current values of all the others. This process is repeated indefinitely: it is easy to see that it defines a Markov chain that converges to the required distribution. We show that this is just one of a family of Markov chains with the same properties, known as Metropolis methods and considered in general terms by Hastings (1970). Different members of this family perform very differently, in terms both of the speed of convergence and of the precision of the resulting estimators of the quantities of interest. The latter aspect of performance can be much improved by a generalization of the idea of antithetic variables. As well as being much cheaper to compute (because they avoid the need to sample from complicated conditional distributions), some Metropolis/Hastings samplers perform much better than the Gibbs sampler in the same situation.
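The two schemes the abstract contrasts can be sketched concretely. The following is a minimal illustration, not the paper's own construction: it targets an assumed standard bivariate normal with correlation rho, updating each coordinate from its full conditional (the Gibbs scheme described above) and, alternatively, using a symmetric random-walk proposal with the Metropolis acceptance rule, which needs only density ratios and no conditional sampling.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each sweep replaces each coordinate with a draw from its conditional
    distribution given the current value of the other coordinate.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

def metropolis_bivariate_normal(rho, n_iter, step=1.0, seed=0):
    """Random-walk Metropolis sampler for the same target distribution.

    A member of the Metropolis/Hastings family: propose a symmetric
    perturbation and accept with probability min(1, pi(new)/pi(old)),
    avoiding any sampling from conditional distributions.
    """
    rng = random.Random(seed)

    def log_density(u, v):
        # log of the unnormalized bivariate normal density
        return -(u * u - 2.0 * rho * u * v + v * v) / (2.0 * (1.0 - rho * rho))

    x, y = 0.0, 0.0
    lp = log_density(x, y)
    samples = []
    for _ in range(n_iter):
        xp = x + rng.uniform(-step, step)
        yp = y + rng.uniform(-step, step)
        lpp = log_density(xp, yp)
        if math.log(rng.random()) < lpp - lp:  # Metropolis acceptance rule
            x, y, lp = xp, yp, lpp
        samples.append((x, y))
    return samples

def sample_correlation(samples):
    """Empirical correlation of the two coordinates of a chain."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    sxy = sum((s[0] - mx) * (s[1] - my) for s in samples)
    sxx = sum((s[0] - mx) ** 2 for s in samples)
    syy = sum((s[1] - my) ** 2 for s in samples)
    return sxy / math.sqrt(sxx * syy)
```

Both chains converge to the same target, but, as the abstract notes, members of the family can differ markedly in convergence speed and estimator precision; on this toy target, for example, `sample_correlation` applied to a long run of either chain recovers rho approximately.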
Green and Han (1991) discuss these ideas in greater detail, give examples, and also report on some extensive experiments with these methods as applied to an example from Bayesian image analysis.
Pages: 363 / 363
Page count: 1