Equivalence of Generative and Log-Linear Models

Cited by: 27
Authors
Heigold, Georg [1 ]
Ney, Hermann [1 ]
Lehnen, Patrick [1 ]
Gass, Tobias [1 ]
Schlueter, Ralf [1 ]
Affiliations
[1] Rhein Westfal TH Aachen, Dept Comp Sci, Chair Comp Sci 6, D-52056 Aachen, Germany
Keywords
Conditional random field (CRF); Gaussian mixture model (GMM); hidden Markov model (HMM); log-linear model
DOI
10.1109/TASL.2010.2082532
CLC classification number
O42 [Acoustics]
Discipline classification codes
070206; 082403
Abstract
Conventional speech recognition systems are based on hidden Markov models (HMMs) with Gaussian mixture models (GHMMs). Discriminative log-linear models are an alternative modeling approach and have been investigated recently in speech recognition. GHMMs are directed models with constraints, e.g., positivity of variances and normalization of conditional probabilities, while log-linear models do not use such constraints. This paper compares the posterior form of typical generative models related to speech recognition with their log-linear model counterparts. The key result will be the derivation of the equivalence of these two different approaches under weak assumptions. In particular, we study Gaussian mixture models, part-of-speech bigram tagging models, and eventually, the GHMMs. This result unifies two important but fundamentally different modeling paradigms in speech recognition on the functional level. Furthermore, this paper will present comparative experimental results for various speech tasks of different complexity, including a digit string and large-vocabulary continuous speech recognition tasks.
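The equivalence the abstract describes can be illustrated in its simplest form: for single Gaussian class-conditional densities, the Bayes posterior p(c|x) is exactly a log-linear (softmax) model over second-order features. The following is a minimal numeric sketch for the 1-D, single-Gaussian-per-class case, not the paper's general GHMM derivation; all parameter values are made up for the demonstration.

```python
import numpy as np

# Illustrative parameters (made up): two classes with Gaussian
# class-conditional densities N(x; mu_c, sigma_c^2) and priors p(c).
means = np.array([-1.0, 2.0])       # class means mu_c
variances = np.array([0.5, 1.5])    # class variances sigma_c^2
priors = np.array([0.3, 0.7])       # class priors p(c)

def gaussian_posterior(x):
    """Posterior p(c|x) computed directly from the generative model."""
    log_density = -0.5 * np.log(2 * np.pi * variances) \
                  - (x - means) ** 2 / (2 * variances)
    log_joint = np.log(priors) + log_density
    log_joint -= log_joint.max()    # numerical stability
    p = np.exp(log_joint)
    return p / p.sum()

# Log-linear parameters lambda_c obtained by expanding the quadratic
#   log p(c) + log N(x; mu_c, sigma_c^2)
# as a linear function of the features f(x) = (1, x, x^2):
lambdas = np.stack([
    np.log(priors) - 0.5 * np.log(2 * np.pi * variances)
        - means ** 2 / (2 * variances),   # weight on feature 1
    means / variances,                    # weight on feature x
    -1.0 / (2 * variances),               # weight on feature x^2
], axis=1)

def loglinear_posterior(x):
    """Posterior from the equivalent log-linear (softmax) model."""
    features = np.array([1.0, x, x * x])
    scores = lambdas @ features
    scores -= scores.max()
    e = np.exp(scores)
    return e / e.sum()

# The two posteriors agree exactly, with no constraints (positivity,
# normalization) needed on the log-linear parameters.
for x in (-2.0, 0.0, 1.5, 3.0):
    assert np.allclose(gaussian_posterior(x), loglinear_posterior(x))
```

Note that the log-linear parameters carry no positivity or normalization constraints, which is the asymmetry the paper resolves: removing the generative constraints does not enlarge the set of representable posteriors.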
Pages: 1138-1148
Page count: 11