DISCRIMINATIVE FEATURE TRANSFORMS USING DIFFERENCED MAXIMUM MUTUAL INFORMATION

Cited by: 0
Authors
Delcroix, Marc [1 ]
Ogawa, Atsunori [1 ]
Watanabe, Shinji [1 ]
Nakatani, Tomohiro [1 ]
Nakamura, Atsushi [1 ]
Affiliations
[1] NTT Corp, NTT Commun Sci Labs, Keihanna Sci City, Kyoto 6190237, Japan
Keywords
Speech recognition; discriminative training; discriminative feature transforms; differenced MMI;
DOI
Not available
CLC number
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Recently, feature compensation techniques that train feature transforms using a discriminative criterion have attracted much interest in the speech recognition community. Typically, the acoustic feature space is modeled by a Gaussian mixture model (GMM), and a feature transform is assigned to each Gaussian of the GMM. Feature compensation is then performed by transforming features using the transformation associated with each Gaussian, and then summing up the transformed features weighted by the posterior probability of each Gaussian. Several discriminative criteria have been investigated for estimating the feature transformation parameters, including maximum mutual information (MMI) and minimum phone error (MPE). Recently, the differenced MMI (dMMI) criterion, which generalizes MMI and MPE, has been shown to provide competitive performance for acoustic model training. In this paper, we investigate the use of the dMMI criterion for discriminative feature transforms and demonstrate in a noisy speech recognition experiment that dMMI achieves recognition performance superior to that of MMI or MPE.
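The compensation scheme the abstract describes (one affine transform per front-end Gaussian, combined by posterior weighting) can be sketched as follows. This is a minimal illustrative implementation, not code from the paper: the function names, the diagonal-covariance GMM, and the toy parameters are all assumptions, and the discriminative training of the transform parameters (the paper's actual contribution) is not shown.

```python
import math

def gaussian_posteriors(x, means, variances, weights):
    """Posterior p(k | x) under a diagonal-covariance GMM (illustrative)."""
    likes = []
    for m, v, w in zip(means, variances, weights):
        # Log-likelihood of x under Gaussian k with diagonal covariance.
        log_like = sum(
            -0.5 * (math.log(2 * math.pi * vi) + (xi - mi) ** 2 / vi)
            for xi, mi, vi in zip(x, m, v)
        )
        likes.append(w * math.exp(log_like))
    total = sum(likes)
    return [l / total for l in likes]

def compensate(x, means, variances, weights, transforms):
    """Feature compensation: y = sum_k p(k|x) * (A_k x + b_k),
    where each GMM component k owns one affine transform (A_k, b_k)."""
    post = gaussian_posteriors(x, means, variances, weights)
    dim = len(x)
    y = [0.0] * dim
    for gamma, (A, b) in zip(post, transforms):
        for i in range(dim):
            y[i] += gamma * (sum(A[i][j] * x[j] for j in range(dim)) + b[i])
    return y

# Toy 2-D example with two Gaussians. With identity transforms and zero
# biases, the posterior-weighted sum reproduces x, since posteriors sum to 1.
means = [[0.0, 0.0], [3.0, 3.0]]
variances = [[1.0, 1.0], [1.0, 1.0]]
weights = [0.5, 0.5]
identity = ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
transforms = [identity, identity]
print(compensate([1.0, 1.0], means, variances, weights, transforms))
```

In an actual system the transforms would be estimated by optimizing a discriminative criterion (MMI, MPE, or, as this paper proposes, dMMI) rather than fixed by hand.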
Pages: 4753 - 4756
Page count: 4