On convergence properties of the EM algorithm for gaussian mixtures

Cited by: 453
|
Authors
Xu, L [1 ]
Jordan, MI [1 ]
Affiliations
[1] CHINESE UNIV HONG KONG, DEPT COMP SCI, HONG KONG, HONG KONG
Keywords
DOI
10.1162/neco.1996.8.1.129
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of gaussian mixture models.
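The abstract's central claim, that the EM step equals the gradient of the log-likelihood premultiplied by a projection matrix P, can be checked numerically in a known special case: for the mixing proportions of a gaussian mixture (means and variances held fixed), P takes the form (1/n)(diag(π) − ππᵀ). The sketch below is an illustrative verification, not the paper's code; the data, component parameters, and variable names are all assumptions chosen for the example.

```python
import numpy as np

# Numerical check: for the mixing proportions pi of a gaussian mixture,
# the EM update pi_new = mean_i(h_ij) coincides with a gradient step
# pi + P @ grad, where P = (1/n) * (diag(pi) - pi pi^T).
# Means and variances are held fixed here for illustration.

rng = np.random.default_rng(0)
n = 500
# Sample data from a two-component 1-D mixture (true parameters arbitrary).
z = rng.random(n) < 0.3
x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(1.5, 0.7, n))

mu = np.array([-1.0, 1.0])     # fixed component means (assumed values)
sigma = np.array([1.0, 1.0])   # fixed component std devs (assumed values)
pi = np.array([0.5, 0.5])      # current mixing proportions

# Component densities N(x_i | mu_j, sigma_j^2), shape (n, 2).
dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
p_x = dens @ pi                 # marginal density of each data point
h = (pi * dens) / p_x[:, None]  # posterior responsibilities h_ij (E-step)

# M-step for the mixing proportions.
pi_em = h.mean(axis=0)

# Gradient of the log-likelihood w.r.t. pi, and the projection matrix P.
grad = (dens / p_x[:, None]).sum(axis=0)
P = (np.diag(pi) - np.outer(pi, pi)) / n

pi_grad_step = pi + P @ grad
print(np.allclose(pi_em, pi_grad_step))  # prints True
```

Note that P automatically keeps the update on the probability simplex: since P has the all-ones vector in its null space (row sums of diag(π) − ππᵀ are zero), the components of π + P∇ℓ still sum to one.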
Pages: 129-151
Page count: 23