Estimation Methods for Mixed Logistic Models with Few Clusters

Cited by: 23
Authors
McNeish, Daniel [1,2]
Affiliations
[1] Univ Utrecht, Utrecht, Netherlands
[2] Univ Maryland, College Pk, MD USA
Keywords
Multilevel logistic regression; small sample; hierarchical generalized linear model; GENERALIZED LINEAR-MODELS; MULTILEVEL MODELS; SAMPLE-SIZE; LONGITUDINAL DATA; BIAS CORRECTION; LIKELIHOOD; QUADRATURE; INFERENCE; APPROXIMATIONS; ROBUSTNESS
DOI
10.1080/00273171.2016.1236237
CLC number
O1 [Mathematics]
Subject classification codes
0701; 070101
Abstract
For mixed models generally, it is well known that modeling data with few clusters will result in biased estimates, particularly of the variance components and fixed-effect standard errors. In linear mixed models, small-sample bias is typically addressed through restricted maximum likelihood estimation (REML) and a Kenward-Roger correction. Yet with binary outcomes, there is no direct analog of either procedure. With a larger number of clusters, estimation methods that approximate the likelihood to circumvent the lack of a closed-form solution for binary outcomes, such as adaptive Gaussian quadrature and the Laplace approximation, have been shown to yield less biased estimates than linearization methods that instead linearly approximate the model. However, adaptive Gaussian quadrature and the Laplace approximation approximate the full likelihood rather than the restricted likelihood, and the full likelihood is known to yield biased estimates with few clusters. Linearization methods, on the other hand, linearly approximate the model, which allows restricted maximum likelihood and the Kenward-Roger correction to be applied. Thus, the following question arises: Which is preferable, a better approximation of a biased function or a worse approximation of an unbiased function? We address this question with a simulation and an illustrative empirical analysis.
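The abstract notes that the marginal likelihood of a mixed logistic model has no closed form, which is why quadrature or Laplace approximations are needed. As a purely illustrative sketch (not code from the paper), the Python function below evaluates one cluster's marginal log-likelihood for a random-intercept logistic model using ordinary (non-adaptive) Gauss-Hermite quadrature; the adaptive variant discussed in the paper additionally re-centers and re-scales the nodes at each cluster's mode. The function name and arguments are hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss


def cluster_loglik_gh(y, X, beta, tau, n_nodes=15):
    """Illustrative sketch: marginal log-likelihood contribution of one cluster
    in a random-intercept logistic model, approximated by (non-adaptive)
    Gauss-Hermite quadrature.

    y    : (n_i,) array of 0/1 responses for the cluster
    X    : (n_i, p) fixed-effects design matrix
    beta : (p,) fixed-effect coefficients
    tau  : standard deviation of the random intercept
    """
    nodes, weights = hermgauss(n_nodes)      # nodes/weights for weight exp(-t^2)
    eta = X @ beta                           # fixed-effect linear predictor
    # Change of variables u = sqrt(2)*tau*t absorbs the N(0, tau^2) density
    # into the Gauss-Hermite weight function.
    u = np.sqrt(2.0) * tau * nodes           # candidate random-intercept values
    lin = eta[:, None] + u[None, :]          # (n_i, n_nodes) linear predictors
    p = 1.0 / (1.0 + np.exp(-lin))           # inverse-logit probabilities
    # Bernoulli log-likelihood at each node, summed over the cluster's observations
    log_bern = (y[:, None] * np.log(p) + (1.0 - y[:, None]) * np.log1p(-p)).sum(axis=0)
    # Quadrature: L_j ~ (1/sqrt(pi)) * sum_k w_k * exp(log_bern_k)
    return np.log(weights @ np.exp(log_bern)) - 0.5 * np.log(np.pi)
```

Summing this quantity over clusters gives the approximate full log-likelihood that quadrature and Laplace methods maximize; linearization estimators instead work with a linearized pseudo-model, which is what makes REML and the Kenward-Roger correction available in that framework.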
Pages: 790-804
Number of pages: 15