MLP emulation of N-gram models as a first step to connectionist language modeling

Cited by: 0
Authors
Castro, MJ [1]
Prat, F [1]
Casacuberta, F [1]
Affiliations
[1] Univ Politecn Valencia, Dept Sistemes Informat & Computacio, Valencia, Spain
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In problems such as automatic speech recognition and machine translation, where the system response must be a sentence in a given language, language models are employed in order to improve system performance. These language models are usually N-gram models (for instance, bigram or trigram models) which are estimated from large text databases using the occurrence frequencies of these N-grams. In 1989, Nakamura and Shikano empirically showed how multilayer perceptrons can emulate trigram model predictive capabilities with additional generalization features. Our paper discusses Nakamura and Shikano's work, provides new empirical evidence on multilayer perceptron capability to emulate N-gram models, and proposes new directions for extending neural network-based language models. The experimental work we present here compares connectionist phonological bigram models with a conventional one using different measures, which include recognition performances in a Spanish acoustic-phonetic decoding task.
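The abstract describes N-gram models estimated from large text databases using the occurrence frequencies of N-grams. A minimal sketch of such an estimate for the bigram case (N = 2), using relative frequencies; the toy corpus and the `<s>`/`</s>` boundary markers are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

def train_bigram(corpus):
    """Estimate P(w2 | w1) by relative frequency, the conventional
    N-gram estimate the abstract refers to (here with N = 2).
    `corpus` is a list of token sequences; <s>/</s> are assumed
    sentence-boundary markers."""
    unigrams = Counter()
    bigrams = Counter()
    for sent in corpus:
        tokens = ["<s>"] + list(sent) + ["</s>"]
        # Count each history token and each adjacent pair.
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    # Maximum-likelihood estimate: count(w1 w2) / count(w1)
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

corpus = [["a", "b"], ["a", "b"], ["a", "c"]]
model = train_bigram(corpus)
# "a" is followed by "b" twice and by "c" once, so P(b|a) = 2/3.
```

A connectionist model in the spirit of the paper would instead train a multilayer perceptron to output these conditional probabilities given an encoding of the preceding context, gaining generalization to unseen N-grams.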
Pages: 910-915
Page count: 6