INJECTING TEXT IN SELF-SUPERVISED SPEECH PRETRAINING

Cited: 8
Authors
Chen, Zhehuai [1 ]
Zhang, Yu [1 ]
Rosenberg, Andrew [1 ]
Ramabhadran, Bhuvana [1 ]
Wang, Gary [1 ]
Moreno, Pedro [1 ]
Affiliations
[1] Google Inc, Mountain View, CA 94043 USA
Keywords
Speech Recognition; Speech Synthesis; Self-supervised; Representation Learning
DOI
10.1109/ASRU51503.2021.9688018
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised pretraining for Automated Speech Recognition (ASR) has shown varied degrees of success. In this paper, we propose to jointly learn representations during pretraining from two different modalities: speech and text. The proposed method, tts4pretrain, complements the power of contrastive learning in self-supervision with linguistic/lexical representations derived from synthesized speech, effectively learning from untranscribed speech and unspoken text. Lexical learning in the speech encoder is enforced through an additional sequence loss term that is coupled with the contrastive loss during pretraining. We demonstrate that this novel pretraining method yields Word Error Rate (WER) reductions of 10% relative on the well-benchmarked Librispeech task over a state-of-the-art baseline pretrained with wav2vec2.0 only. The proposed method also serves as an effective strategy to compensate for the lack of transcribed speech, effectively matching the performance of 5000 hours of transcribed speech with just 100 hours of transcribed speech on the AMI meeting transcription task. Finally, we demonstrate WER reductions of up to 15% on an in-house Voice Search task over traditional pretraining. Incorporating text into encoder pretraining is complementary to rescoring with a larger or in-domain language model, resulting in an additional 6% relative reduction in WER.
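The abstract describes coupling a wav2vec 2.0-style contrastive (InfoNCE) loss with an additional sequence loss that ties encoder outputs to text. A minimal sketch of such a joint objective is shown below, assuming scalar similarity scores for the positive and distractor quantized targets; the weighting term `alpha` and the function names are illustrative assumptions, not the paper's implementation:

```python
import math

def contrastive_loss(sim_pos, sims_all, temperature=0.1):
    # InfoNCE-style contrastive term, as in wav2vec 2.0:
    # -log( exp(s_pos / t) / sum_k exp(s_k / t) )
    # sims_all contains the positive plus distractor similarities.
    logits = [s / temperature for s in sims_all]
    m = max(logits)  # subtract max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(sim_pos / temperature - log_denom)

def joint_pretraining_loss(sim_pos, sims_all, seq_loss, alpha=1.0):
    # Couple the contrastive loss with an additional sequence loss
    # (e.g. a decoder/CTC loss on text targets for synthesized speech);
    # alpha is a hypothetical interpolation weight.
    return contrastive_loss(sim_pos, sims_all) + alpha * seq_loss
```

For example, `joint_pretraining_loss(0.9, [0.9, 0.2, 0.1], seq_loss=2.3, alpha=0.5)` combines a near-zero contrastive term (the positive dominates the distractors) with the weighted sequence term. In practice the sequence loss would be computed only for utterances that have text (real or synthesized), while the contrastive term applies to all speech.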
Pages: 251-258 (8 pages)
Related Papers (50 total)
  • [1] Self-Supervised Pretraining Improves Self-Supervised Pretraining
    Reed, Colorado J.
    Yue, Xiangyu
    Nrusimha, Ani
    Ebrahimi, Sayna
    Vijaykumar, Vivek
    Mao, Richard
    Li, Bo
    Zhang, Shanghang
    Guillory, Devin
    Metzger, Sean
    Keutzer, Kurt
    Darrell, Trevor
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1050 - 1060
  • [2] Investigating Self-supervised Pretraining Frameworks for Pathological Speech Recognition
    Violeta, Lester Phillip
    Huang, Wen-Chin
    Toda, Tomoki
    INTERSPEECH 2022, 2022, : 41 - 45
  • [3] Investigating Self-supervised Pretraining Frameworks for Pathological Speech Recognition
    Violeta, Lester Phillip
    Huang, Wen-Chin
    Toda, Tomoki
    arXiv, 2022,
  • [4] SPeCiaL: Self-supervised Pretraining for Continual Learning
    Caccia, Lucas
    Pineau, Joelle
    CONTINUAL SEMI-SUPERVISED LEARNING, CSSL 2021, 2022, 13418 : 91 - 103
  • [5] SEMI-SUPERVISED SPOKEN LANGUAGE UNDERSTANDING VIA SELF-SUPERVISED SPEECH AND LANGUAGE MODEL PRETRAINING
Lai, Cheng-I
    Chuang, Yung-Sung
    Lee, Hung-Yi
    Li, Shang-Wen
    Glass, James
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7468 - 7472
  • [6] Instance Localization for Self-supervised Detection Pretraining
    Yang, Ceyuan
    Wu, Zhirong
    Zhou, Bolei
    Lin, Stephen
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3986 - 3995
  • [7] Automatic self-supervised learning of associations between speech and text
    Knuuttila, Juho
    Rasanen, Okko
    Laine, Unto K.
    14TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2013), VOLS 1-5, 2013, : 465 - 469
  • [8] A Masked Self-Supervised Pretraining Method for Face Parsing
    Li, Zhuang
    Cao, Leilei
    Wang, Hongbin
    Xu, Lihong
    MATHEMATICS, 2022, 10 (12)
  • [9] Progressive Self-Supervised Pretraining for Hyperspectral Image Classification
    Guan, Peiyan
    Lam, Edmund Y.
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 13
  • [10] Self-supervised Pretraining Isolated Forest for Outlier Detection
    Liang, Dong
    Wang, Jun
    Gao, Xiaoyu
    Wang, Jiahui
    Zhao, Xiaoyong
    Wang, Lei
    2022 INTERNATIONAL CONFERENCE ON BIG DATA, INFORMATION AND COMPUTER NETWORK (BDICN 2022), 2022, : 306 - 310