Meta-TTS: Meta-Learning for Few-Shot Speaker Adaptive Text-to-Speech

Cited by: 14
Authors
Huang, Sung-Feng [1 ]
Lin, Chyi-Jiunn [2 ]
Liu, Da-Rong [1 ]
Chen, Yi-Chen [1 ]
Lee, Hung-yi [2 ]
Affiliations
[1] Natl Taiwan Univ, Grad Inst Commun Engn, Taipei 10617, Taiwan
[2] Natl Taiwan Univ, Dept Elect Engn, Taipei 10617, Taiwan
Keywords
Task analysis; Adaptation models; Training; Encoding; Decoding; Cloning; Testing; MAML; TTS; speaker adaptation; few-shot; meta-learning
DOI
10.1109/TASLP.2022.3167258
CLC number
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Personalizing a speech synthesis system is a highly desired application, in which the system generates speech in the user's voice from only a few enrolled recordings. Recent works take two main approaches to building such a system: speaker adaptation and speaker encoding. On the one hand, speaker adaptation methods fine-tune a trained multi-speaker text-to-speech (TTS) model with a few enrolled samples. However, they require at least thousands of fine-tuning steps for high-quality adaptation, making them hard to deploy on devices. On the other hand, speaker encoding methods encode enrollment utterances into a speaker embedding, and the trained TTS model synthesizes the user's speech conditioned on that embedding. Nevertheless, the speaker encoder suffers from the generalization gap between seen and unseen speakers. In this paper, we propose applying a meta-learning algorithm to the speaker adaptation method. More specifically, we use Model-Agnostic Meta-Learning (MAML) as the training algorithm of a multi-speaker TTS model, aiming to find a good meta-initialization from which the model adapts quickly to any few-shot speaker adaptation task; the meta-trained TTS model can therefore be adapted to unseen speakers efficiently. Our experiments compare the proposed method (Meta-TTS) with two baselines: a speaker adaptation baseline and a speaker encoding baseline. The evaluation results show that Meta-TTS synthesizes speech with high speaker similarity from few enrollment samples, using fewer adaptation steps than the speaker adaptation baseline, and outperforms the speaker encoding baseline under the same training scheme. Even when the baseline's speaker encoder is pre-trained with data from 8,371 additional speakers, Meta-TTS still outperforms it on the LibriTTS dataset and achieves comparable results on the VCTK dataset.
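The MAML training scheme described in the abstract can be illustrated with a toy sketch. The bilevel structure is the one the paper names (an inner adaptation step on a task's few support samples, an outer update of the shared meta-initialization evaluated on query samples), but everything concrete below is an illustrative assumption: the scalar regression "tasks" standing in for speakers, the hyperparameters, and the first-order simplification of MAML. This is not the actual Meta-TTS implementation, which meta-trains a full multi-speaker TTS model.

```python
import random

def loss_and_grad(w, data):
    """MSE loss and its gradient w.r.t. the scalar weight w,
    for a toy task of fitting y = w * x on data = [(x, y), ...]."""
    n = len(data)
    loss = sum((w * x - y) ** 2 for x, y in data) / n
    grad = sum(2 * (w * x - y) * x for x, y in data) / n
    return loss, grad

def maml_first_order(tasks, w0=0.0, inner_lr=0.01, outer_lr=0.005, steps=2000):
    """First-order MAML sketch: find an initialization w0 that adapts
    well to any task after one inner gradient step.

    Each task is a (support, query) pair -- the analogue of a speaker's
    few enrollment samples and held-out evaluation samples.
    """
    w = w0
    for _ in range(steps):
        support, query = random.choice(tasks)
        # Inner loop: one adaptation step on the support (enrollment) set.
        _, g_in = loss_and_grad(w, support)
        w_adapted = w - inner_lr * g_in
        # Outer loop: evaluate the adapted weight on the query set and
        # update the meta-initialization. First-order MAML ignores the
        # derivative through the inner step, so the query gradient is
        # applied directly to w.
        _, g_out = loss_and_grad(w_adapted, query)
        w = w - outer_lr * g_out
    return w
```

For two tasks whose true slopes straddle 2.0, the meta-initialization converges toward the point that adapts fastest to either task, which for this symmetric toy problem is near their mean. The full (second-order) MAML would additionally backpropagate through the inner update, and Meta-TTS adapts for multiple inner steps rather than one.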
Pages: 1558-1571
Page count: 14