Convergence rates and asymptotic standard errors for Markov chain Monte Carlo algorithms for Bayesian probit regression

Cited by: 56
Authors
Roy, Vivekananda [1 ]
Hobert, James P. [1 ]
Affiliations
[1] Univ Florida, Dept Stat, Gainesville, FL 32611 USA
Keywords
asymptotic variance; central limit theorem; data augmentation algorithm; geometric ergodicity; minorization condition; PX-DA algorithm; regeneration; reversible Markov chain;
DOI
10.1111/j.1467-9868.2007.00602.x
CLC classification
O21 [Probability theory and mathematical statistics]; C8 [Statistics];
Discipline classification codes
020208; 070103; 0714;
Abstract
Consider a probit regression problem in which Y_1, ..., Y_n are independent Bernoulli random variables such that Pr(Y_j = 1) = Phi(x_j^T beta), where x_j is a p-dimensional vector of known covariates associated with Y_j, beta is a p-dimensional vector of unknown regression coefficients, and Phi(.) denotes the standard normal distribution function. We study Markov chain Monte Carlo algorithms for exploring the intractable posterior density that results when the probit regression likelihood is combined with a flat prior on beta. We prove that Albert and Chib's data augmentation algorithm and Liu and Wu's PX-DA algorithm both converge at a geometric rate, which ensures the existence of central limit theorems for ergodic averages under a second-moment condition. Although these two algorithms are essentially equivalent in terms of computational complexity, results of Hobert and Marchev imply that the PX-DA algorithm is theoretically more efficient in the sense that the asymptotic variance in the central limit theorem under the PX-DA algorithm is no larger than that under Albert and Chib's algorithm. We also construct minorization conditions that allow us to exploit regenerative simulation techniques for the consistent estimation of asymptotic variances. As an illustration, we apply our results to van Dyk and Meng's lupus data. This example demonstrates that huge gains in efficiency are possible by using the PX-DA algorithm instead of Albert and Chib's algorithm.
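The two-step data augmentation sampler of Albert and Chib referred to in the abstract alternates between drawing truncated-normal latent variables given beta and drawing beta given the latents. A minimal Python sketch under the flat prior is below; the function and variable names are illustrative and not taken from the paper:

```python
import numpy as np
from scipy.stats import truncnorm

def albert_chib_probit(X, y, n_iter=1000, seed=0):
    """Sketch of the Albert-Chib data augmentation sampler for Bayesian
    probit regression with a flat prior on beta (illustrative names)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Posterior covariance of beta | z under the flat prior: (X'X)^{-1}.
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_j ~ N(x_j' beta, 1) truncated to (0, inf) if y_j = 1,
        # and to (-inf, 0) if y_j = 0; bounds are standardized for truncnorm.
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
        mean = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(mean, XtX_inv)
        draws[t] = beta
    return draws
```

The PX-DA variant studied in the paper adds an extra parameter-expansion step between the two draws; the sketch above covers only the basic two-step chain.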
Pages: 607-623
Page count: 17