Unsupervised speaker adaptation for speaker independent acoustic to articulatory speech inversion

Cited by: 21
Authors
Sivaraman, Ganesh [1]
Mitra, Vikramjit [1]
Nam, Hosung [2]
Tiede, Mark [3]
Espy-Wilson, Carol [1]
Affiliations
[1] University of Maryland, Electrical & Computer Engineering, College Park, MD 20740, USA
[2] Korea University, Seoul, South Korea
[3] Haskins Laboratories, New Haven, CT 06511, USA
Source
Journal of the Acoustical Society of America
Funding
U.S. National Science Foundation
Keywords
VOCAL-TRACT; MOVEMENTS
DOI
10.1121/1.5116130
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Speech inversion is a well-known ill-posed problem, and the addition of speaker differences typically makes it even harder. Normalizing speaker differences is essential for effectively using multi-speaker articulatory data to train a speaker independent speech inversion system. This paper explores a vocal tract length normalization (VTLN) technique that transforms the acoustic features of different speakers to a target speaker's acoustic space such that speaker specific details are minimized. The speaker normalized features are then used to train a deep feed-forward neural network based speech inversion system. The acoustic features are parameterized as time-contextualized mel-frequency cepstral coefficients. The articulatory features are represented by six tract-variable (TV) trajectories, which are relatively speaker invariant compared to flesh-point data. Experiments are performed with ten speakers from the University of Wisconsin X-ray microbeam database. Results show that the proposed speaker normalization approach provides an 8.15% relative improvement in correlation between actual and estimated TVs compared to a system without speaker normalization. To determine the efficacy of the method across datasets, cross-speaker evaluations were performed with speakers from the Multichannel Articulatory-TIMIT and EMA-IEEE datasets. Results show that the VTLN approach improves performance even across datasets. (C) 2019 Acoustical Society of America.
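To make the pipeline described in the abstract concrete, the minimal sketch below (Python, NumPy/PyTorch) illustrates its three ingredients: a piecewise-linear VTLN frequency warp, frame stacking to form time-contextualized MFCC features, and a feed-forward network mapping the stacked features to six TV trajectories, evaluated by Pearson correlation. The knee frequency, layer sizes, context width, and all function names are illustrative assumptions, not the authors' implementation; in particular, the per-speaker choice of the warp factor alpha (which the paper selects so as to map each speaker toward the target speaker's acoustic space) is left open here.

```python
# Sketch of a VTLN-normalized speech inversion pipeline.
# All shapes, hyperparameters, and names are hypothetical.
import numpy as np
import torch
import torch.nn as nn

def piecewise_linear_warp(freqs, alpha, f_cut=4800.0, f_max=8000.0):
    """Piecewise-linear VTLN warp: frequencies below f_cut are scaled
    by alpha; above f_cut they are mapped linearly so f_max stays fixed.
    (f_cut/f_max values here are illustrative.)"""
    return np.where(
        freqs <= f_cut,
        alpha * freqs,
        alpha * f_cut + (f_max - alpha * f_cut) * (freqs - f_cut) / (f_max - f_cut),
    )

def add_context(mfcc, ctx=8):
    """Stack +/- ctx frames around each frame to form
    time-contextualized MFCC vectors; mfcc has shape (T, D)."""
    T, _ = mfcc.shape
    padded = np.pad(mfcc, ((ctx, ctx), (0, 0)), mode="edge")
    return np.stack([padded[t:t + 2 * ctx + 1].ravel() for t in range(T)])

class InversionNet(nn.Module):
    """Feed-forward DNN mapping contextualized MFCCs to 6 TV trajectories."""
    def __init__(self, in_dim, hidden=512, n_tvs=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_tvs),
        )
    def forward(self, x):
        return self.net(x)

def pearson_r(est, ref):
    """Per-TV Pearson correlation between estimated and measured TVs."""
    est = est - est.mean(0)
    ref = ref - ref.mean(0)
    return (est * ref).sum(0) / (
        np.linalg.norm(est, axis=0) * np.linalg.norm(ref, axis=0))

# Toy usage with random features standing in for VTLN-warped MFCCs.
T, D, ctx = 200, 13, 8
feats = add_context(np.random.randn(T, D).astype(np.float32), ctx)
model = InversionNet(in_dim=feats.shape[1])
with torch.no_grad():
    tvs = model(torch.from_numpy(feats)).numpy()
print(pearson_r(tvs, np.random.randn(T, 6)))
```

In an actual system the warp would be applied inside the mel filterbank before cepstral analysis, and the network would be trained on the normalized features of all training speakers; the correlation metric above corresponds to the evaluation reported in the abstract.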
Pages: 316-329 (14 pages)