Learning mixture models with the regularized latent maximum entropy principle

Cited by: 3
Authors
Wang, SJ [1]
Schuurmans, D
Peng, FC
Zhao, YX
Affiliations
[1] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E8, Canada
[2] Univ Massachusetts, Dept Comp Sci, Amherst, MA 01003 USA
[3] Univ Missouri, Dept Comp Engn & Comp Sci, Columbia, MO 65201 USA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2004, Vol. 15, Issue 4
Keywords
expectation maximization (EM); iterative scaling; latent variables; maximum entropy; mixture models; regularization;
DOI
10.1109/TNN.2004.828755
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a new approach to estimating mixture models based on a recent inference principle we have proposed: the latent maximum entropy principle (LME). LME differs from Jaynes' maximum entropy principle, standard maximum likelihood, and maximum a posteriori probability estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the expectation-maximization (EM) algorithm can be developed. We show that a regularized version of LME (RLME) is effective at estimating mixture models. It generally yields better results than plain LME, which in turn is often better than maximum likelihood and maximum a posteriori estimation, particularly when inferring latent variable models from small amounts of data.
Pages: 903-916
Page count: 14
Related papers
50 records
  • [2] The Latent Maximum Entropy Principle
    Wang, Shaojun
    Schuurmans, Dale
    Zhao, Yunxin
    [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2012, 6 (02)
  • [3] The latent maximum entropy principle
    Wang, SJ
    Rosenfeld, R
    Zhao, YX
    Schuurmans, D
    [J]. ISIT: 2002 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, PROCEEDINGS, 2002, : 131 - 131
  • [4] Combining statistical language models via the latent maximum entropy principle
    Wang, SJ
    Schuurmans, D
    Peng, FC
    Zhao, YX
    [J]. MACHINE LEARNING, 2005, 60 (1-3) : 229 - 250
  • [6] Compositional models and maximum entropy principle
    Jirousek, Radim
    Malec, Miroslav
    [J]. PROCEEDINGS OF THE SIXTH INTERNATIONAL CONFERENCE ON INFORMATION AND MANAGEMENT SCIENCES, 2007, 6 : 589 - 595
  • [7] Latent maximum entropy principle for statistical language modeling
    Wang, SJ
    Rosenfeld, R
    Zhao, YX
    [J]. ASRU 2001: IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING, CONFERENCE PROCEEDINGS, 2001, : 182 - 185
  • [8] Towards systematic approach to boundary conditions in mixture and multiphasic incompressible models: Maximum Entropy principle estimate
    Klika, Vaclav
    Votinska, Barbora
    [J]. INTERNATIONAL JOURNAL OF ENGINEERING SCIENCE, 2023, 191
  • [9] Quantum hydrodynamic models from a maximum entropy principle
    Trovato, M.
    Reggiani, L.
    [J]. JOURNAL OF PHYSICS A-MATHEMATICAL AND THEORETICAL, 2010, 43 (10)
  • [10] Maximum entropy principle and hydrodynamic models in statistical mechanics
    Trovato, M.
    Reggiani, L.
    [J]. RIVISTA DEL NUOVO CIMENTO, 2012, 35 (3-4): : 99 - 266