Adapting Large-Scale Pre-trained Models for Unified Dialect Speech Recognition Model

Cited by: 0
Authors: Toyama, T. [1]; Kai, A. [1]; Kamiya, Y. [1]; Takahashi, N. [1]
Affiliation: [1] Graduate School of Integrated Science and Technology, Shizuoka University, 3-5-1 Johoku, Chuo-ku, Hamamatsu, Shizuoka, Japan
Keywords: All Open Access; Gold
DOI: 10.12693/APhysPolA.146.413
Pages: 413-418
Related Papers (50 items)
  • [21] Adapting Pre-trained Language Models to Rumor Detection on Twitter
    Slimi, Hamda
    Bounhas, Ibrahim
    Slimani, Yahya
    JOURNAL OF UNIVERSAL COMPUTER SCIENCE, 2021, 27 (10) : 1128 - 1148
  • [22] Large-Scale Relation Learning for Question Answering over Knowledge Bases with Pre-trained Language Models
    Yam, Yuanmeng
    Li, Rumei
    Wang, Sirui
    Zhang, Hongzhi
    Zan, Daoguang
    Zhang, Fuzheng
    Wu, Wei
    Xu, Weiran
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3653 - 3660
  • [23] Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models
    Huber, Patrick
    Carenini, Giuseppe
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 2376 - 2394
  • [24] An evaluation of large pre-trained models for gesture recognition using synthetic videos
    Reddy, Arun
    Shah, Ketul
    Rivera, Corban
    Paul, William
    De Melo, Celso M.
    Chellappa, Rama
    SYNTHETIC DATA FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING: TOOLS, TECHNIQUES, AND APPLICATIONS II, 2024, 13035
  • [25] Assessing Phrase Break of ESL Speech with Pre-trained Language Models and Large Language Models
    Wang, Zhiyi
    Mao, Shaoguang
    Wu, Wenshan
    Xia, Yan
    Deng, Yan
    Tien, Jonathan
    INTERSPEECH 2023, 2023, : 4194 - 4198
  • [26] The ChatGPT After: Opportunities and Challenges of Very Large Scale Pre-trained Models
    Lu J.-W.
    Guo C.
    Dai X.-Y.
    Miao Q.-H.
    Wang X.-X.
    Yang J.
    Wang F.-Y.
    Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (04): : 705 - 717
  • [27] TrafficBERT: Pre-trained model with large-scale data for long-range traffic flow forecasting
    Jin, KyoHoon
    Wi, JeongA
    Lee, EunJu
    Kang, ShinJin
    Kim, SooKyun
    Kim, YoungBin
EXPERT SYSTEMS WITH APPLICATIONS, 2021, 186
  • [29] EBERT: A lightweight expression-enhanced large-scale pre-trained language model for mathematics education
    Duan, Zhiyi
    Gu, Hengnian
    Ke, Yuan
    Zhou, Dongdai
    KNOWLEDGE-BASED SYSTEMS, 2024, 300
  • [30] Improving Under-Resourced Code-Switched Speech Recognition: Large Pre-trained Models or Architectural Interventions
    van Vuren, Joshua Jansen
    Niesler, Thomas
    INTERSPEECH 2023, 2023, : 1439 - 1443