Joint syntactic and semantic analysis, with a multitask Deep Learning Framework for Spoken Language Understanding

Cited by: 2
Authors
Tafforeau, Jeremie [1 ]
Bechet, Frederic [1 ]
Artiere, Thierry [1 ,2 ]
Favre, Benoit [1 ]
Affiliations
[1] Aix Marseille Univ, CNRS, LIF, UMR 7279, F-13000 Marseille, France
[2] Ecole Cent Marseille, F-13000 Marseille, France
Keywords
Spoken Language Understanding; Recurrent Neural Networks; Long Short Term Memory; FrameNet parsing; Multitask;
DOI
10.21437/Interspeech.2016-851
Chinese Library Classification
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
Spoken Language Understanding (SLU) models have to deal with Automatic Speech Recognition (ASR) outputs, which are prone to contain errors. Most SLU models overcome this issue by directly predicting semantic labels from words without any deep linguistic analysis. This is acceptable when enough training data is available to train SLU models in a supervised way. However, for open-domain SLU such annotated corpora are not easily available or are very expensive to obtain, and generic syntactic and semantic models, such as dependency parsing, Semantic Role Labeling (SRL) or FrameNet parsing, are good candidates if they can be applied to noisy ASR transcriptions with enough robustness. To tackle this issue we present in this paper an RNN-based architecture for performing joint syntactic and semantic parsing tasks on noisy ASR outputs. Experiments carried out on a corpus of French spoken conversations collected in a telephone call-centre are reported and show that our strategy brings an improvement over the standard pipeline approach while allowing a lot more flexibility in model design and optimization.
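The core idea described in the abstract is a shared recurrent encoder whose hidden states feed several task-specific output heads (syntactic and semantic tagging) instead of chaining separate models in a pipeline. The following is a minimal illustrative sketch of that shared-encoder multitask pattern, not the authors' actual model: the paper uses LSTMs trained on French call-centre transcriptions, whereas here the dimensions, tag sets, and the plain (non-gated) RNN cell are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and tag sets (hypothetical, for illustration only).
EMB, HID = 8, 16
SYN_TAGS = ["DET", "NOUN", "VERB"]                   # syntactic head
SEM_TAGS = ["O", "Frame:Request", "Frame:Problem"]   # FrameNet-style semantic head

# One shared encoder; one linear output layer per task.
Wx = rng.standard_normal((HID, EMB)) * 0.1
Wh = rng.standard_normal((HID, HID)) * 0.1
W_syn = rng.standard_normal((len(SYN_TAGS), HID)) * 0.1
W_sem = rng.standard_normal((len(SEM_TAGS), HID)) * 0.1

def encode(embeddings):
    """Run a simple RNN over the word embeddings; the shared hidden
    states are reused by every task head (the multitask idea)."""
    h = np.zeros(HID)
    states = []
    for x in embeddings:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return np.array(states)

def tag(states, W, tagset):
    """Task-specific head: a linear layer over the shared states."""
    return [tagset[int(np.argmax(W @ h))] for h in states]

# One "utterance" of 4 random vectors standing in for word embeddings.
sent = rng.standard_normal((4, EMB))
states = encode(sent)
syn = tag(states, W_syn, SYN_TAGS)   # one syntactic label per word
sem = tag(states, W_sem, SEM_TAGS)   # semantic labels from the SAME states
print(len(syn), len(sem))            # → 4 4
```

During training, the losses of all heads would be summed and backpropagated into the shared encoder, which is the flexibility in model design and joint optimization the abstract refers to.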
Pages: 3260-3264
Page count: 5
Related papers
50 records in total
  • [1] Multitask learning for spoken language understanding
    Tur, Gokhan
    [J]. 2006 IEEE International Conference on Acoustics, Speech and Signal Processing, Vols 1-13, 2006, : 585 - 588
  • [2] A Joint Learning Framework With BERT for Spoken Language Understanding
    Zhang, Zhichang
    Zhang, Zhenwen
    Chen, Haoyuan
    Zhang, Zhiman
    [J]. IEEE ACCESS, 2019, 7 : 168849 - 168858
  • [3] Spoken language understanding with kernels for syntactic/semantic structures
    Moschitti, Alessandro
    Riccardi, Giuseppe
    Raymond, Christian
    [J]. 2007 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING, VOLS 1 AND 2, 2007, : 183 - 188
  • [4] MULTITASK LEARNING FOR LOW RESOURCE SPOKEN LANGUAGE UNDERSTANDING
    Meeus, Quentin
    Moens, Marie Francine
    Van Hamme, Hugo
    [J]. INTERSPEECH 2022, 2022, : 4073 - 4077
  • [5] A JOINT MULTI-TASK LEARNING FRAMEWORK FOR SPOKEN LANGUAGE UNDERSTANDING
    Li, Changliang
    Kong, Cunliang
    Zhao, Yan
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 6054 - 6058
  • [6] Survey on Joint Modeling Algorithms for Spoken Language Understanding Based on Deep Learning
    Wei P.-F.
    Zeng B.
    Wang M.-H.
    Zeng A.
    [J]. Ruan Jian Xue Bao/Journal of Software, 2022, 33 (11): : 4192 - 4216
  • [7] SCREEN: Learning a flat syntactic and semantic spoken language analysis using artificial neural networks
    Wermter, S
    Weber, V
    [J]. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 1997, 6 : 35 - 85
  • [8] Joint Discriminative Decoding of Words and Semantic Tags for Spoken Language Understanding
    Deoras, Anoop
    Tur, Gokhan
    Sarikaya, Ruhi
    Hakkani-Tür, Dilek
    [J]. IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2013, 21 (08): : 1612 - 1621
  • [9] Research on Spoken Language Understanding Based on Deep Learning
    Yanli Hui
    [J]. SCIENTIFIC PROGRAMMING, 2021, 2021
  • [10] Deep Belief Network based Semantic Taggers for Spoken Language Understanding
    Deoras, Anoop
    Sarikaya, Ruhi
    [J]. 14TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2013), VOLS 1-5, 2013, : 2712 - 2716