Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification

Cited by: 18
Authors
Sarkar, Achintya Kumar [1 ,2 ]
Tan, Zheng-Hua [1 ]
Tang, Hao [3 ]
Shon, Suwon [3 ]
Glass, James [3 ]
Affiliations
[1] Aalborg Univ, Dept Elect Syst, DK-9220 Aalborg, Denmark
[2] VIT AP Univ, Sch Elect Engn, Amaravati 522237, India
[3] MIT, Comp Sci & Artificial Intelligence Lab, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Keywords
DNNs; time-contrastive learning; bottleneck feature; GMM-UBM; speaker verification; neural networks; discriminative features; recognition; models
DOI
10.1109/TASLP.2019.2915322
CLC Number
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
A number of studies have extracted bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV). However, only moderate success has been achieved. A recent study introduced the concept of time-contrastive learning (TCL) to exploit the non-stationarity of brain signals for classifying brain states. Speech signals exhibit a similar non-stationarity, and TCL has the further advantage of requiring no labeled data. We therefore present a TCL-based BN feature extraction method. The method uniformly partitions each utterance in a training dataset into a predefined number of multi-frame segments. Each segment within an utterance corresponds to one class, and class labels are shared across utterances. DNNs are then trained to discriminate all speech frames among these classes, thereby exploiting the temporal structure of speech. In addition, we propose a segment-based unsupervised clustering algorithm that re-assigns class labels to the segments. TD-SV experiments were conducted on the RedDots challenge database. The TCL-DNNs were trained on speech data of fixed pass-phrases that were excluded from the TD-SV evaluation set, so the learned features can be considered phrase-independent. We compare the proposed TCL-BN feature with short-time cepstral features and with BN features extracted from DNNs that discriminate speakers, pass-phrases, speaker+pass-phrase combinations, and monophones whose labels and boundaries are generated by three different automatic speech recognition (ASR) systems. Experimental results show that the proposed TCL-BN outperforms cepstral features and speaker+pass-phrase discriminant BN features, and performs on par with ASR-derived BN features. Moreover, the clustering method improves the TD-SV performance of both TCL-BN and ASR-derived BN features over their standalone counterparts. We further study the TD-SV performance of fusing cepstral and BN features.
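As a concrete illustration of the labelling scheme described in the abstract, the following minimal Python sketch (not taken from the paper) assigns position-based TCL labels by uniformly partitioning each utterance's frame sequence into a fixed number of segments, and then re-assigns labels at the segment level via clustering. The abstract does not specify the clustering criterion; clustering segment-mean feature vectors with scikit-learn's KMeans is an assumption made here for illustration, and the number of classes (10) and the feature dimensionality (20) are likewise illustrative.

import numpy as np
from sklearn.cluster import KMeans

def tcl_labels(num_frames, num_classes=10):
    # Uniformly partition the utterance into num_classes contiguous
    # multi-frame segments; every frame in segment i receives class
    # label i. Labels are position-based and shared across utterances,
    # so no manual annotation is required.
    segments = np.array_split(np.arange(num_frames), num_classes)
    labels = np.empty(num_frames, dtype=np.int64)
    for cls, seg in enumerate(segments):
        labels[seg] = cls
    return labels

def reassign_segment_labels(utterances, num_classes=10, seed=0):
    # Hypothetical segment-based re-labelling: summarize each uniform
    # segment by its mean feature vector and cluster all segments from
    # all utterances with k-means; the cluster index replaces the
    # original position-based label. (The paper's exact clustering
    # algorithm may differ; this is an illustrative assumption.)
    seg_means = []
    for feats in utterances:  # feats: array of shape (frames, dim)
        for seg in np.array_split(feats, num_classes, axis=0):
            seg_means.append(seg.mean(axis=0))
    seg_means = np.stack(seg_means)
    return KMeans(n_clusters=num_classes, n_init=10,
                  random_state=seed).fit_predict(seg_means)

# Example: two random MFCC-like utterances with 20-dimensional frames.
rng = np.random.default_rng(0)
utts = [rng.normal(size=(120, 20)), rng.normal(size=(95, 20))]
frame_labels = tcl_labels(len(utts[0]))          # per-frame TCL targets
segment_labels = reassign_segment_labels(utts)   # re-assigned per segment

In the full system, frame-level labels such as these would serve as DNN training targets, and the BN features used for TD-SV would be read from a bottleneck hidden layer of that DNN.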
Pages: 1267-1279
Page count: 13
Related Papers
50 in total (10 shown)
  • [1] Tandem Deep Features for Text-Dependent Speaker Verification
    Fu, Tianfan
    Qian, Yanmin
    Liu, Yuan
    Yu, Kai
    [J]. 15TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2014), VOLS 1-4, 2014, : 1327 - 1331
  • [2] Deep Embedding Learning for Text-Dependent Speaker Verification
    Zhang, Peng
    Hu, Peng
    Zhang, Xueliang
    [J]. INTERSPEECH 2020, 2020, : 3461 - 3465
  • [3] Deep feature for text-dependent speaker verification
    Liu, Yuan
    Qian, Yanmin
    Chen, Nanxin
    Fu, Tianfan
    Zhang, Ya
    Yu, Kai
    [J]. SPEECH COMMUNICATION, 2015, 73 : 1 - 13
  • [4] Covariance Based Deep Feature for Text-Dependent Speaker Verification
    Wang, Shuai
    Dinkel, Heinrich
    Qian, Yanmin
    Yu, Kai
    [J]. INTELLIGENCE SCIENCE AND BIG DATA ENGINEERING, 2018, 11266 : 231 - 242
  • [5] EXPLORING SEQUENTIAL CHARACTERISTICS IN SPEAKER BOTTLENECK FEATURE FOR TEXT-DEPENDENT SPEAKER VERIFICATION
    Chen, Liping
    Zhao, Yong
    Zhang, Shi-Xiong
    Li, Jie
    Ye, Guoli
    Soong, Frank
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5364 - 5368
  • [6] DEEP NEURAL NETWORK BASED POSTERIORS FOR TEXT-DEPENDENT SPEAKER VERIFICATION
    Dey, Subhadeep
    Madikeri, Srikanth
    Ferras, Marc
Motlicek, Petr
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 5050 - 5054
  • [7] Tandem Features for Text-dependent Speaker Verification on the RedDots Corpus
    Alam, Md Jahangir
    Kenny, Patrick
    Gupta, Vishwa
    [J]. 17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 420 - 424
  • [8] Text-dependent speaker verification system
    Qin, Bing
    Chen, Huipeng
    Li, Guangqi
    Liu, Songbo
[J]. Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2000, 32 (04): 16 - 18
  • [9] On Training Targets and Activation Functions for Deep Representation Learning in Text-Dependent Speaker Verification
    Sarkar, Achintya Kumar
    Tan, Zheng-Hua
[J]. ACOUSTICS, 2023, 5 (03): 693 - 713
  • [10] Improved Deep Speaker Feature Learning for Text-Dependent Speaker Recognition
    Li, Lantian
    Lin, Yiye
    Zhang, Zhiyong
    Wang, Dong
    [J]. 2015 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA), 2015, : 426 - 429