SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks

Citations: 0
Authors
Chang, Kai-Wei [1 ]
Wu, Haibin [1 ]
Wang, Yu-Kai [2 ]
Wu, Yuan-Kuei [1 ]
Shen, Hua [3 ]
Tseng, Wei-Cheng [4 ]
Kang, Iu-Thing [5 ]
Li, Shang-Wen [6 ]
Lee, Hung-Yi [1 ]
Affiliations
[1] Natl Taiwan Univ, Grad Inst Commun Engn, Taipei City 10617, Taiwan
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Univ Michigan, Ann Arbor, MI 48109 USA
[4] Univ Texas Austin, Austin, TX 78712 USA
[5] MediaTek, Hsinchu 30078, Taiwan
[6] FAIR, Menlo Pk, CA 94025 USA
Keywords
Task analysis; Speech processing; Computational modeling; Adaptation models; Tuning; Self-supervised learning; Feature extraction; Prompting; speech language model; self-supervised learning; representation learning; REPRESENTATION;
DOI
10.1109/TASLP.2024.3436618
Chinese Library Classification
O42 [Acoustics]
Discipline Classification Codes
070206; 082403
Abstract
Prompting has become a practical method for utilizing pre-trained language models (LMs). This approach offers several advantages. It allows an LM to adapt to new tasks with minimal training and parameter updates, thus achieving efficiency in both storage and computation. Additionally, prompting modifies only the LM's inputs and harnesses the generative capabilities of language models to address various downstream tasks in a unified manner. This significantly reduces the need for human labor in designing task-specific models. These advantages become even more evident as the number of tasks served by the LM scales up. Motivated by the strengths of prompting, we are the first to explore the potential of prompting speech LMs in the domain of speech processing. Recently, there has been growing interest in converting speech into discrete units for language modeling. Our pioneering study demonstrates that these quantized speech units are highly versatile within our unified prompting framework. Not only can they serve as class labels, but they also contain rich phonetic information that can be re-synthesized into speech signals for speech generation tasks. Specifically, we reformulate speech processing tasks as speech-to-unit generation tasks. As a result, we can seamlessly integrate tasks such as speech classification, sequence generation, and speech generation within a single, unified prompting framework. Experimental results show that the prompting method achieves performance competitive with a strong fine-tuning baseline built on self-supervised learning models, while using a similar number of trainable parameters. The prompting method also shows promising results in the few-shot setting. Moreover, as more advanced speech LMs emerge, the proposed prompting framework holds even greater potential.
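The core idea described in the abstract (keep a unit-based speech LM frozen and adapt it to a new task by training only a small set of prompt vectors prepended to the discretized speech input) can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: the names PromptedSpeechLM, speech_lm, embed_dim, and prompt_len are placeholders, and the speech quantizer and unit-based vocoder that would surround this module are omitted.

```python
# Hypothetical sketch of prompting a frozen, unit-based speech LM.
# Assumptions: speech has already been quantized into discrete units and
# embedded; only the prompt vectors below are trainable.
import torch
import torch.nn as nn

class PromptedSpeechLM(nn.Module):
    def __init__(self, speech_lm: nn.Module, embed_dim: int, prompt_len: int = 10):
        super().__init__()
        self.speech_lm = speech_lm  # pre-trained speech LM, kept frozen
        for p in self.speech_lm.parameters():
            p.requires_grad = False
        # Task-specific trainable prompt vectors (the only updated parameters).
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, unit_embeddings: torch.Tensor) -> torch.Tensor:
        # unit_embeddings: (batch, seq_len, embed_dim) embeddings of discrete speech units
        batch = unit_embeddings.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs = torch.cat([prompts, unit_embeddings], dim=1)
        # The frozen LM predicts output units conditioned on the learned prompt.
        return self.speech_lm(inputs)  # (batch, output_len, unit_vocab_size)
```

Under this framing, the generated output units would either be interpreted as class labels (for classification tasks) or passed to a unit-based vocoder to synthesize a waveform (for generation tasks), which is how a single speech-to-unit interface can cover speech classification, sequence generation, and speech generation.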
Pages: 3730 - 3744 (15 pages)