Keyword Search Using Attention-Based End-to-End ASR and Frame-Synchronous Phoneme Alignments

Cited by: 8
Authors
Yang, Runyan [1 ,2 ]
Cheng, Gaofeng [1 ]
Miao, Haoran [1 ,2 ]
Li, Ta [1 ]
Zhang, Pengyuan [1 ,2 ]
Yan, Yonghong [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Acoust, Key Lab Speech Acoust & Content Understanding, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Keywords
Task analysis; Hidden Markov models; Transformers; Speech recognition; Reliability; Training; Decoding; End-to-end speech recognition; keyword search; phoneme alignment; keyword confidence scoring; SPEECH RECOGNITION; NEURAL-NETWORKS; TRANSFORMER; DROPOUT; MODEL;
DOI
10.1109/TASLP.2021.3120632
Chinese Library Classification (CLC)
O42 [Acoustics];
Subject Classification Code
070206; 082403;
Abstract
Attention-based end-to-end (E2E) automatic speech recognition (ASR) architectures now achieve state-of-the-art recognition performance. However, despite their effectiveness, they have not yet been widely applied to keyword search (KWS) tasks. In this paper, we propose the Att-E2E-KWS architecture, an attention-based E2E ASR framework for KWS that affords accurate and reliable keyword retrieval results. First, we design a basic framework for KWS based on attention-based E2E ASR. We adopt the joint connectionist temporal classification and attention (CTC/Att) E2E ASR architecture and exploit the spike posterior property of CTC to provide keyword time stamps. Second, we introduce frame-synchronous phoneme modeling and use a dynamic programming (DP) algorithm to align the E2E grapheme outputs with the phoneme outputs. We call this alignment procedure dynamic time alignment (DTA); it provides the proposed Att-E2E-KWS system with more accurate time stamps and more reliable confidence scores. Third, we use the Transformer, a self-attention-based encoder-decoder neural network, in place of conventional recurrent neural networks to obtain more parallelizable models and faster training. We conduct comprehensive experiments on English and Mandarin Chinese. To the best of our knowledge, this is the first practical Att-E2E-KWS framework, and experimental results on the Switchboard and HKUST corpora show that our proposed Att-E2E-KWS systems significantly outperform CTC-based E2E ASR KWS baselines.
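The abstract's dynamic time alignment (DTA) step aligns two symbol sequences via dynamic programming. The following is only an illustrative sketch of such a DP alignment between a grapheme-derived phone sequence and a frame-synchronous phoneme decode; the paper's actual DTA operates on model posteriors, and the function name, costs, and example phone labels here are hypothetical, not taken from the paper.

```python
def dp_align(seq_a, seq_b, sub_cost=1, gap_cost=1):
    """Levenshtein-style DP alignment: returns (total cost, aligned pairs).

    A pair (a, b) means the symbols were matched or substituted;
    (a, None) / (None, b) mark an unmatched symbol on either side.
    """
    m, n = len(seq_a), len(seq_b)
    # cost[i][j]: minimum cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        cost[i][0] = i * gap_cost
    for j in range(1, n + 1):
        cost[0][j] = j * gap_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = cost[i - 1][j - 1] + (0 if seq_a[i - 1] == seq_b[j - 1] else sub_cost)
            cost[i][j] = min(match, cost[i - 1][j] + gap_cost, cost[i][j - 1] + gap_cost)
    # Backtrace to recover the alignment path.
    pairs, i, j = [], m, n
    while i > 0 and j > 0:
        match = cost[i - 1][j - 1] + (0 if seq_a[i - 1] == seq_b[j - 1] else sub_cost)
        if cost[i][j] == match:
            pairs.append((seq_a[i - 1], seq_b[j - 1]))
            i, j = i - 1, j - 1
        elif cost[i][j] == cost[i - 1][j] + gap_cost:
            pairs.append((seq_a[i - 1], None))
            i -= 1
        else:
            pairs.append((None, seq_b[j - 1]))
            j -= 1
    while i > 0:
        pairs.append((seq_a[i - 1], None)); i -= 1
    while j > 0:
        pairs.append((None, seq_b[j - 1])); j -= 1
    pairs.reverse()
    return cost[m][n], pairs

# Example: align two toy phone sequences differing in one vowel.
total, pairs = dp_align(["k", "ae", "t"], ["k", "ah", "t"])
# total == 1; pairs == [("k", "k"), ("ae", "ah"), ("t", "t")]
```

The matched pairs are what would let a system carry frame-level timing from one sequence over to the other; substitutions and gaps lower a keyword's confidence score.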
Pages: 3202-3215
Page count: 14
Related Papers (50 records)
  • [1] ETEH: Unified Attention-Based End-to-End ASR and KWS Architecture
    Cheng, Gaofeng
    Miao, Haoran
    Yang, Runyan
    Deng, Keqi
    Yan, Yonghong
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1360 - 1373
  • [2] UNSUPERVISED SPEAKER ADAPTATION USING ATTENTION-BASED SPEAKER MEMORY FOR END-TO-END ASR
    Sari, Leda
    Moritz, Niko
    Hori, Takaaki
    Le Roux, Jonathan
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7384 - 7388
  • [3] ATTENTION-BASED END-TO-END SPEECH RECOGNITION ON VOICE SEARCH
    Shan, Changhao
    Zhang, Junbo
    Wang, Yujun
    Xie, Lei
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 4764 - 4768
  • [4] Attention-based End-to-End Models for Small-Footprint Keyword Spotting
    Shan, Changhao
    Zhang, Junbo
    Wang, Yujun
    Xie, Lei
    [J]. 19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 2037 - 2041
  • [5] END-TO-END ASR-FREE KEYWORD SEARCH FROM SPEECH
    Audhkhasi, Kartik
    Rosenberg, Andrew
    Sethy, Abhinav
    Ramabhadran, Bhuvana
    Kingsbury, Brian
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 4840 - 4844
  • [6] End-to-End ASR-Free Keyword Search From Speech
    Audhkhasi, Kartik
    Rosenberg, Andrew
    Sethy, Abhinav
    Ramabhadran, Bhuvana
    Kingsbury, Brian
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2017, 11 (08) : 1351 - 1359
  • [7] Towards Efficiently Learning Monotonic Alignments for Attention-Based End-to-End Speech Recognition
    Miao, Chenfeng
    Zou, Kun
    Zhuang, Ziyang
    Wei, Tao
    Ma, Jun
    Wang, Shaojun
    Xiao, Jing
    [J]. INTERSPEECH 2022, 2022, : 1051 - 1055
  • [8] IMPROVING ATTENTION-BASED END-TO-END ASR SYSTEMS WITH SEQUENCE-BASED LOSS FUNCTIONS
    Cui, Jia
    Weng, Chao
    Wang, Guangsen
    Wang, Jun
    Wang, Peidong
    Yu, Chengzhu
    Su, Dan
    Yu, Dong
    [J]. 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018), 2018, : 353 - 360
  • [9] Improving Attention-based End-to-end ASR by Incorporating an N-gram Neural Network
    Ao, Junyi
    Ko, Tom
    [J]. 2021 12TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2021,
  • [10] ADVERSARIAL EXAMPLES FOR IMPROVING END-TO-END ATTENTION-BASED SMALL-FOOTPRINT KEYWORD SPOTTING
    Wang, Xiong
    Sun, Sining
    Shan, Changhao
    Hou, Jingyong
    Xie, Lei
    Li, Shen
    Lei, Xin
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 6366 - 6370