Adapting Large-Scale Pre-trained Models for Unified Dialect Speech Recognition Model

Cited by: 0
Authors
Toyama, T. [1 ]
Kai, A. [1 ]
Kamiya, Y. [1 ]
Takahashi, N. [1 ]
Affiliations
[1] Graduate School of Integrated Science and Technology, Shizuoka University, 3-5-1 Johoku, Chuo-ku, Hamamatsu, Shizuoka, Japan
Open Access: Gold
DOI: 10.12693/APhysPolA.146.413
Pages: 413 - 418
Related Papers
50 items in total
  • [1] Alternating Recurrent Dialog Model with Large-scale Pre-trained Language Models
    Wu, Qingyang
    Zhang, Yichi
    Li, Yu
    Yu, Zhou
    16TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EACL 2021), 2021, : 1292 - 1301
  • [2] Training-Free Deepfake Voice Recognition by Leveraging Large-Scale Pre-Trained Models
    Pianese, Alessandro
    Poggi, Giovanni
    Cozzolino, Davide
    Verdoliva, Luisa
    PROCEEDINGS OF THE 2024 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2024, 2024, : 289 - 294
  • [3] CPM: A large-scale generative Chinese Pre-trained language model
    Zhang, Zhengyan
    Han, Xu
    Zhou, Hao
    Ke, Pei
    Gu, Yuxian
    Ye, Deming
    Qin, Yujia
    Su, Yusheng
    Ji, Haozhe
    Guan, Jian
    Qi, Fanchao
    Wang, Xiaozhi
    Zheng, Yanan
    Zeng, Guoyang
    Cao, Huanqi
    Chen, Shengqi
    Li, Daixuan
    Sun, Zhenbo
    Liu, Zhiyuan
    Huang, Minlie
    Han, Wentao
    Tang, Jie
    Li, Juanzi
    Zhu, Xiaoyan
    Sun, Maosong
    AI OPEN, 2021, 2 : 93 - 99
  • [4] Exploring the Application of Large-Scale Pre-Trained Models on Adverse Weather Removal
    Tan, Zhentao
    Wu, Yue
    Liu, Qiankun
    Chu, Qi
    Lu, Le
    Ye, Jieping
    Yu, Nenghai
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 1683 - 1698
  • [5] Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey
    Wang, Xiao
    Chen, Guangyao
    Qian, Guangwu
    Gao, Pengcheng
    Wei, Xiao-Yong
    Wang, Yaowei
    Tian, Yonghong
    Gao, Wen
    MACHINE INTELLIGENCE RESEARCH, 2023, 20 : 447 - 482
  • [6] Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey
    Wang, Xiao
    Chen, Guangyao
    Qian, Guangwu
    Gao, Pengcheng
    Wei, Xiao-Yong
    Wang, Yaowei
    Tian, Yonghong
    Gao, Wen
    MACHINE INTELLIGENCE RESEARCH, 2023, 20 (04) : 447 - 482
  • [7] FASTERMOE: Modeling and Optimizing Training of Large-Scale Dynamic Pre-Trained Models
    He, Jiaao
    Zhai, Jidong
    Antunes, Tiago
    Wang, Haojie
    Luo, Fuwen
    Shi, Shangfeng
    Li, Qin
    PPOPP'22: PROCEEDINGS OF THE 27TH ACM SIGPLAN SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING, 2022, : 120 - 134
  • [8] Bridging the Gap: Integrating Pre-trained Speech Enhancement and Recognition Models for Robust Speech Recognition
    Wang, Kuan-Chen
    Li, You-Jin
    Chen, Wei-Lun
    Chen, Yu-Wen
    Wang, Yi-Ching
    Yeh, Ping-Cheng
    Zhang, Chao
    Tsao, Yu
    32ND EUROPEAN SIGNAL PROCESSING CONFERENCE, EUSIPCO 2024, 2024, : 426 - 430
  • [9] How to Estimate Model Transferability of Pre-Trained Speech Models?
    Chen, Zih-Ching
    Yang, Chao-Han Huck
    Li, Bo
    Zhang, Yu
    Chen, Nanxin
    Chang, Shou-Yiin
    Prabhavalkar, Rohit
    Lee, Hung-yi
    Sainath, Tara N.
    INTERSPEECH 2023, 2023, : 456 - 460
  • [10] wav2vec-S: Adapting Pre-trained Speech Models for Streaming
    Fu, Biao
    Fan, Kai
    Liao, Minpeng
    Chen, Yidong
    Shi, Xiaodong
    Huang, Zhongqiang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 11465 - 11480