On joint training with interfaces for spoken language understanding

Cited by: 1
Authors
Raju, Anirudh [1 ]
Rao, Milind [1 ]
Tiwari, Gautam [1 ]
Dheram, Pranav [1 ]
Anderson, Bryan [1 ]
Zhang, Zhe [1 ]
Lee, Chul [1 ]
Bui, Bach [1 ]
Rastrow, Ariya [1 ]
Affiliations
[1] Amazon Alexa AI, San Mateo, CA 94404 USA
Source
INTERSPEECH 2022 | 2022
Keywords
speech recognition; spoken language understanding; neural interfaces; multitask training; NETWORKS; ASR;
DOI
10.21437/Interspeech.2022-11067
Chinese Library Classification (CLC)
O42 [Acoustics];
Discipline Classification Codes
070206 ; 082403 ;
Abstract
Spoken language understanding (SLU) systems extract both text transcripts and semantics associated with intents and slots from input speech utterances. SLU systems usually consist of (1) an automatic speech recognition (ASR) module, (2) an interface module that exposes relevant outputs from ASR, and (3) a natural language understanding (NLU) module. Interfaces in SLU systems carry either text transcriptions or richer information, such as neural embeddings, from ASR to NLU. In this paper, we study how interfaces affect joint training for spoken language understanding. Most notably, we obtain state-of-the-art results on the publicly available 50-hour SLURP [1] dataset. We first leverage large pretrained ASR and NLU models connected by a text interface, and then jointly train both models via a sequence loss function. For scenarios where pretrained models are not utilized, the best results are obtained through joint sequence-loss training using richer neural interfaces. Finally, we show that the benefit of leveraging pretrained models diminishes as training data size increases.
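The three-module pipeline and joint training objective described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's actual models: the module internals, intent labels, and the `alpha` weight are all placeholder assumptions, and the two interface functions merely show the difference between passing a 1-best transcript versus passing ASR embeddings to NLU.

```python
# Toy sketch of an SLU pipeline: ASR -> interface -> NLU, with a joint
# loss over both modules. All internals are illustrative placeholders.

def asr_module(audio_features):
    """Toy ASR: consumes (token, frame_vector) pairs and returns a
    1-best transcript plus per-frame 'embeddings'."""
    transcript = " ".join(tok for tok, _ in audio_features)
    embeddings = [vec for _, vec in audio_features]
    return transcript, embeddings

def text_interface(transcript, embeddings):
    # Text interface: expose only the 1-best transcript tokens to NLU;
    # richer ASR-internal information is discarded.
    return transcript.split()

def neural_interface(transcript, embeddings):
    # Neural interface: pass ASR embeddings directly to NLU, so the
    # NLU module can exploit information beyond the 1-best hypothesis.
    return embeddings

def nlu_module(interface_output):
    """Toy NLU: a length-based stand-in for a real intent classifier."""
    return "complex_intent" if len(interface_output) > 2 else "simple_intent"

def joint_loss(asr_loss, nlu_loss, alpha=0.5):
    # Joint sequence-loss training: a weighted sum of the ASR and NLU
    # losses so gradients flow through both modules (alpha is an
    # assumed hyperparameter, not a value from the paper).
    return alpha * asr_loss + (1 - alpha) * nlu_loss

if __name__ == "__main__":
    feats = [("turn", [0.1]), ("on", [0.2]), ("lights", [0.3])]
    transcript, embs = asr_module(feats)
    print(nlu_module(text_interface(transcript, embs)))
    print(joint_loss(2.0, 4.0))
```

With a text interface the NLU sees only the discrete token sequence, so ASR errors propagate irrecoverably; with a neural interface, joint training can shape the shared embedding space for both transcription and intent/slot prediction, which is the trade-off the paper studies.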
Pages: 1253 - 1257
Number of pages: 5
Related Papers
50 records in total
  • [31] A New Pre-training Method for Training Deep Learning Models with Application to Spoken Language Understanding
    Celikyilmaz, Asli
    Sarikaya, Ruhi
    Hakkani-Tur, Dilek
    Liu, Xiaohu
    Ramesh, Nikhil
    Tur, Gokhan
    17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 3255 - 3259
  • [32] TEMPORAL STRUCTURE OF SPOKEN LANGUAGE UNDERSTANDING
    MARSLENWILSON, W
    TYLER, LK
    COGNITION, 1980, 8 (01) : 1 - 71
  • [33] Active learning for spoken language understanding
    Tur, G
    Schapire, RE
    Hakkani-Tür, D
    2003 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL I, PROCEEDINGS: SPEECH PROCESSING I, 2003, : 276 - 279
  • [34] Discriminative Reranking for Spoken Language Understanding
    Dinarelli, Marco
    Moschitti, Alessandro
    Riccardi, Giuseppe
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2012, 20 (02): : 526 - 539
  • [35] SENTENCE SIMPLIFICATION FOR SPOKEN LANGUAGE UNDERSTANDING
    Tur, Gokhan
    Hakkani-Tuer, Dilek
    Heck, Larry
    Parthasarathy, S.
    2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2011, : 5628 - 5631
  • [36] Temporal Generalization for Spoken Language Understanding
    Gaspers, Judith
    Kumar, Anoop
    Ver Steeg, Greg
    Galstyan, Aram
    2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, NAACL-HLT 2022, 2022, : 37 - 44
  • [37] Model adaptation for spoken language understanding
    Tur, G
    2005 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS 1-5: SPEECH PROCESSING, 2005, : 41 - 44
  • [38] Grammar learning for spoken language understanding
    Wang, YY
    Acero, A
    ASRU 2001: IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING, CONFERENCE PROCEEDINGS, 2001, : 292 - 295
  • [39] UNDERSTANDING SPOKEN LANGUAGE - WALKER,DE
    IIVONEN, A
    COMPUTERS AND THE HUMANITIES, 1982, 16 (01): : 45 - 47
  • [40] A mixed approach to spoken language understanding
    Liu, JY
    Wang, C
PROCEEDINGS OF THE 2005 IEEE INTERNATIONAL CONFERENCE ON NATURAL LANGUAGE PROCESSING AND KNOWLEDGE ENGINEERING (IEEE NLP-KE'05), 2005, : 169 - 173