Transfer Learning From Speech Synthesis to Voice Conversion With Non-Parallel Training Data

Cited by: 24
Authors
Zhang, Mingyang [1 ]
Zhou, Yi [2 ]
Zhao, Li [1 ]
Li, Haizhou [2 ,3 ]
Affiliations
[1] Southeast Univ, Sch Informat Sci & Engn, Nanjing, Jiangsu, Peoples R China
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore, Singapore
[3] Univ Bremen, Machine Listening Lab, D-28359 Bremen, Germany
Funding
National Research Foundation, Singapore;
Keywords
Linguistics; Decoding; Transfer learning; Training data; Training; Encoding; Speech synthesis; Autoencoder; context vector; non-parallel; text-to-speech (TTS); transfer learning; voice conversion (VC); TEXT-TO-SPEECH; SPARSE REPRESENTATION; ADAPTATION;
DOI
10.1109/TASLP.2021.3066047
Chinese Library Classification (CLC) number
O42 [Acoustics];
Discipline classification code
070206 ; 082403 ;
Abstract
We present a novel voice conversion (VC) framework that learns from a text-to-speech (TTS) synthesis system, which we call TTS-VC transfer learning, or TTL-VC for short. We first develop a multi-speaker speech synthesis system with a sequence-to-sequence encoder-decoder architecture, where the encoder extracts linguistic representations of the input text, while the decoder, conditioned on a target speaker embedding, takes the context vectors and the attention recurrent network cell output to generate target acoustic features. We take advantage of the fact that a TTS system maps input text to speaker-independent context vectors, and re-purpose this mapping to supervise the training of the latent representations of an encoder-decoder voice conversion system. In the voice conversion system, the encoder takes speech instead of text as input, while the decoder is functionally similar to the TTS decoder. As we condition the decoder on a speaker embedding, the system can be trained on non-parallel data for any-to-any voice conversion. During voice conversion training, we present text to the speech synthesis network and speech to the voice conversion network, respectively. At run-time, the voice conversion network uses its own encoder-decoder architecture without the need for text input. Experiments show that the proposed TTL-VC system consistently outperforms two competitive voice conversion baselines, namely the phonetic posteriorgram and AutoVC methods, in terms of speech quality, naturalness, and speaker similarity.
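The transfer-learning idea in the abstract can be sketched in a few lines: a TTS encoder maps text to speaker-independent context vectors, a VC encoder maps speech to latents that are pulled toward those context vectors during training, and a shared-style decoder conditioned on a speaker embedding generates acoustic features. The sketch below is illustrative only, a NumPy stand-in for the paper's neural networks; all layer sizes, weight matrices, and function names are assumptions, not the authors' code.

```python
# Minimal NumPy sketch of the TTL-VC training idea: the TTS context
# vectors supervise the VC encoder's latent representations.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature dimensions (assumptions, not the paper's values).
D_TEXT, D_SPEECH, D_CTX, D_SPK, D_OUT = 8, 12, 16, 4, 10

# Randomly initialised linear maps standing in for the real encoders.
W_tts_enc = rng.normal(size=(D_TEXT, D_CTX))
W_vc_enc = rng.normal(size=(D_SPEECH, D_CTX))
# Decoder conditioned on a speaker embedding concatenated with context.
W_dec = rng.normal(size=(D_CTX + D_SPK, D_OUT))

def tts_context(text_feats):
    """TTS encoder: text features -> speaker-independent context vectors."""
    return text_feats @ W_tts_enc

def vc_context(speech_feats):
    """VC encoder: acoustic features -> latents trained to match the
    TTS context vectors (the transfer-learning supervision)."""
    return speech_feats @ W_vc_enc

def decode(ctx, spk_emb):
    """Decoder: generate acoustic features for the target speaker."""
    T = ctx.shape[0]
    cond = np.concatenate([ctx, np.tile(spk_emb, (T, 1))], axis=1)
    return cond @ W_dec

# One "training" utterance with aligned text and speech frames.
T = 5
text_feats = rng.normal(size=(T, D_TEXT))
speech_feats = rng.normal(size=(T, D_SPEECH))
spk_emb = rng.normal(size=(D_SPK,))

# Latent-consistency loss: pull VC latents toward TTS context vectors.
ctx_tts = tts_context(text_feats)
ctx_vc = vc_context(speech_feats)
latent_loss = float(np.mean((ctx_vc - ctx_tts) ** 2))

# At run-time only the VC branch is used: speech in, no text needed.
converted = decode(ctx_vc, spk_emb)
```

Because the decoder is conditioned on the speaker embedding rather than tied to a source-target pair, the same trained decoder serves any target speaker, which is what makes non-parallel, any-to-any conversion possible.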
Pages: 1290 - 1302 (13 pages)
Related Papers
50 records
  • [1] SINGING VOICE CONVERSION WITH NON-PARALLEL DATA
    Chen, Xin
    Chu, Wei
    Guo, Jinxi
    Xu, Ning
    2019 2ND IEEE CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2019), 2019, : 292 - 296
  • [2] Parallel vs. Non-parallel Voice Conversion for Esophageal Speech
    Serrano, Luis
    Raman, Sneha
    Tavarez, David
    Navas, Eva
    Hernaez, Inma
    INTERSPEECH 2019, 2019, : 4549 - 4553
  • [3] VAW-GAN for Singing Voice Conversion with Non-parallel Training Data
    Lu, Junchen
    Zhou, Kun
    Sisman, Berrak
    Li, Haizhou
    2020 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2020, : 514 - 519
  • [4] NOVEL METRIC LEARNING FOR NON-PARALLEL VOICE CONVERSION
    Shah, Nirmesh J.
    Patil, Hemant A.
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3722 - 3726
  • [5] CVC: Contrastive Learning for Non-parallel Voice Conversion
    Li, Tingle
    Liu, Yichen
    Hu, Chenxu
    Zhao, Hang
    INTERSPEECH 2021, 2021, : 1324 - 1328
  • [6] Recognition-Synthesis Based Non-Parallel Voice Conversion with Adversarial Learning
    Zhang, Jing-Xuan
    Ling, Zhen-Hua
    Dai, Li-Rong
    INTERSPEECH 2020, 2020, : 771 - 775
  • [7] NON-PARALLEL TRAINING FOR VOICE CONVERSION BASED ON ADAPTATION METHOD
    Song, Peng
    Zheng, Wenming
    Zhao, Li
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 6905 - 6909
  • [8] NON-PARALLEL MANY-TO-MANY VOICE CONVERSION BY KNOWLEDGE TRANSFER FROM A TEXT-TO-SPEECH MODEL
    Yu, Xinyuan
    Mak, Brian
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 5924 - 5928
  • [9] Mixture of Factor Analyzers Using Priors From Non-Parallel Speech for Voice Conversion
    Wu, Zhizheng
    Kinnunen, Tomi
    Chng, Eng Siong
    Li, Haizhou
    IEEE SIGNAL PROCESSING LETTERS, 2012, 19 (12) : 914 - 917
  • [10] Phoneme-guided Dysarthric Speech Conversion with Non-parallel Data by Joint Training
    Chen, Xunquan
    Oshiro, Atsuki
    Chen, Jinhui
    Takashima, Ryoichi
    Takiguchi, Tetsuya
    SIGNAL, IMAGE AND VIDEO PROCESSING, 2022, 16 : 1641 - 1648