SCALING END-TO-END MODELS FOR LARGE-SCALE MULTILINGUAL ASR

Cited by: 14
Authors
Li, Bo [1]
Pang, Ruoming [1]
Sainath, Tara N. [1]
Gulati, Anmol [1]
Zhang, Yu [1]
Qin, James [1]
Haghani, Parisa [1]
Huang, W. Ronny [1]
Ma, Min [1]
Bai, Junwen [1]
Affiliations
[1] Google, Mountain View, CA 94043 USA
Keywords
large-scale; multilingual speech recognition
DOI
10.1109/ASRU51503.2021.9687871
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Building ASR models across many languages is a challenging multitask learning problem due to large variations and heavily unbalanced data. Existing work has shown positive transfer from high-resource to low-resource languages. However, degradations on high-resource languages are commonly observed, due to interference from the heterogeneous multilingual data and the reduction in per-language capacity. We conduct a capacity study on a 15-language task, with the amount of data per language varying from 7.6K to 53.5K hours. We adopt GShard [1] to efficiently scale up to 10B parameters. Empirically, we find that (1) scaling the number of model parameters is an effective way to solve the capacity bottleneck: our 500M-param model already outperforms monolingual baselines, and scaling it to 1B and 10B brings further quality gains; (2) larger models are not only more data efficient, but also more efficient in terms of training cost as measured in TPU days: the 1B-param model reaches the same accuracy in 34% of the training time of the 500M-param model; (3) given a fixed capacity budget, adding depth works better than adding width, and large encoders do better than large decoders; (4) with continuous training, large models can be adapted to new languages and domains.
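Finding (3) in the abstract can be sanity-checked with a back-of-the-envelope parameter count: a standard Transformer layer holds roughly 12*d_model^2 weights (4*d^2 for the attention projections plus 8*d^2 for a feed-forward block with 4x expansion, ignoring biases and norms), so halving the width frees enough budget for four times the depth. The sketch below is illustrative only; the layer counts and widths are assumed values chosen to hit a ~1B-parameter budget, not the paper's actual model configurations.

```python
# Back-of-the-envelope sketch: two encoder configs under the same parameter
# budget, one deep-and-narrow and one shallow-and-wide. All numbers here are
# illustrative assumptions, not the configurations used in the paper.

def params_per_layer(d_model: int, ffn_mult: int = 4) -> int:
    """Approximate trainable parameters in one Transformer encoder layer."""
    attn = 4 * d_model * d_model             # Q, K, V, and output projections
    ffn = 2 * ffn_mult * d_model * d_model   # two feed-forward projections
    return attn + ffn

def encoder_params(num_layers: int, d_model: int) -> int:
    """Total encoder parameters, ignoring embeddings, biases, and norms."""
    return num_layers * params_per_layer(d_model)

# Halving d_model quarters the per-layer cost, so 4x the depth fits in the
# same budget; per finding (3), spending the budget on depth tends to win.
deep_narrow = encoder_params(num_layers=48, d_model=1280)
shallow_wide = encoder_params(num_layers=12, d_model=2560)

print(f"deep-narrow : {deep_narrow / 1e9:.2f}B params")   # ~0.94B
print(f"shallow-wide: {shallow_wide / 1e9:.2f}B params")  # ~0.94B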
Pages: 1011-1018
Page count: 8
Related Papers
50 items in total
  • [1] Large-Scale Multilingual Speech Recognition with a Streaming End-to-End Model
    Kannan, Anjuli
    Datta, Arindrima
    Sainath, Tara N.
    Weinstein, Eugene
    Ramabhadran, Bhuvana
    Wu, Yonghui
    Bapna, Ankur
    Chen, Zhifeng
    Lee, Seungji
    INTERSPEECH 2019, 2019: 2130-2134
  • [2] End-to-end Learning of Driving Models from Large-scale Video Datasets
    Xu, Huazhe
    Gao, Yang
    Yu, Fisher
    Darrell, Trevor
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 3530-3538
  • [3] Phonemic competition in end-to-end ASR models
    ten Bosch, Louis
    Bentum, Martijn
    Boves, Lou
    INTERSPEECH 2023, 2023: 586-590
  • [4] END-TO-END APPROACH TO LARGE-SCALE MULTIMEDIA DISSEMINATION
    YAVATKAR, R
    MANOJ, L
    COMPUTER COMMUNICATIONS, 1994, 17 (03): 205-217
  • [5] AN INVESTIGATION OF MULTILINGUAL ASR USING END-TO-END LF-MMI
    Tong, Sibo
    Garner, Philip N.
    Bourlard, Herve
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019: 6061-6065
  • [6] Multiple Softmax Architecture for Streaming Multilingual End-to-End ASR Systems
    Joshi, Vikas
    Das, Amit
    Sun, Eric
    Mehta, Rupesh R.
    Li, Jinyu
    Gong, Yifan
    INTERSPEECH 2021, 2021: 1767-1771
  • [7] Large-Scale End-to-End Multilingual Speech Recognition and Language Identification with Multi-Task Learning
    Hou, Wenxin
    Dong, Yue
    Zhuang, Bairong
    Yang, Longfei
    Shi, Jiatong
    Shinozaki, Takahiro
    INTERSPEECH 2020, 2020: 1037-1041
  • [8] A large-scale dataset for end-to-end table recognition in the wild
    Yang, Fan
    Hu, Lei
    Liu, Xinwu
    Huang, Shuangping
    Gu, Zhenghui
    SCIENTIFIC DATA, 2023, 10 (01)
  • [9] An end-to-end workflow pipeline for large-scale Grid computing
    McGough, A. S.
    Cohen, J.
    Darlington, J.
    Katsiri, E.
    Lee, W.
    Panagiotidi, S.
    Patel, Y.
    JOURNAL OF GRID COMPUTING, 2005, 3 (3-4): 259-281