Direct Error Rate Minimization of Hidden Markov Models

Cited: 0
Authors
Keshet, Joseph [1 ]
Cheng, Chih-Chieh [2 ]
Stoehr, Mark [3 ]
McAllester, David [1 ]
Affiliations
[1] TTI Chicago, Chicago, IL 60637 USA
[2] Univ Calif San Diego, Dept Comp Sci & Engn, San Diego, CA 94607 USA
[3] Univ Chicago, Dept Comp Sci, Chicago, IL 60637 USA
Source
12TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2011 (INTERSPEECH 2011), VOLS 1-5 | 2011
Funding
US National Science Foundation;
Keywords
hidden Markov models; online learning; direct error minimization; discriminative training; automatic speech recognition; minimum phone error; minimum frame error;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We explore discriminative training of HMM parameters that directly minimizes the expected error rate. In discriminative training one is interested in training a system to minimize a desired error function, such as word error rate, phone error rate, or frame error rate. We review a recent method (McAllester, Hazan and Keshet, 2010) that introduces an analytic expression for the gradient of the expected error rate. This analytic expression leads to a perceptron-like update rule, which is adapted here to train HMMs in an online fashion. While the proposed method can work with any error function used in speech recognition, we evaluated it on phoneme recognition on TIMIT, where the desired error function used for training was the frame error rate. Except for the case of a GMM with a single mixture component per state, the proposed update rule achieves lower error rates, in terms of both frame error rate and phone error rate, than competing approaches, including MCE and large-margin training.
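To make the perceptron-like update concrete, the following is a minimal Python sketch of the direct loss minimization step described in McAllester, Hazan and Keshet (2010), on which the abstract's method is based. The names `phi`, `decode`, and `loss` are hypothetical placeholders for a feature map, a (loss-augmented) Viterbi decoder, and the task error function (e.g., frame error rate); this is an illustration of the general update, not the paper's exact HMM parameterization.

```python
import numpy as np

def direct_error_update(w, x, y_true, phi, decode, loss_scale=1.0, eta=0.1):
    """One online direct-error-minimization step.

    w          : current parameter vector (np.ndarray)
    x, y_true  : input sequence and its reference labeling
    phi        : feature map, phi(x, y) -> np.ndarray
    decode     : decode(w, x, y_true, eps) returns
                 argmax_y [ w . phi(x, y) + eps * loss(y_true, y) ];
                 with eps == 0 this is plain Viterbi decoding.
    loss_scale : epsilon, the weight on the loss in augmented decoding
    eta        : learning rate
    """
    # Standard prediction (ordinary Viterbi decoding).
    y_pred = decode(w, x, y_true, 0.0)
    # Loss-augmented ("away-from-worse") prediction.
    y_dir = decode(w, x, y_true, loss_scale)
    # Finite-difference estimate of the expected-error gradient.
    grad = (phi(x, y_dir) - phi(x, y_pred)) / loss_scale
    # Perceptron-like gradient step.
    return w - eta * grad
```

The only difference from a standard structured perceptron is that the competing hypothesis comes from loss-augmented decoding rather than from the reference labeling, and the step is scaled by 1/epsilon so that it approximates the gradient of the expected error rate.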
Pages: 456 / +
Page count: 2