Fusion Feature Extraction Based on Auditory and Energy for Noise-Robust Speech Recognition

Cited by: 8
Authors
Shi, Yanyan [1 ]
Bai, Jing [1 ]
Xue, Peiyun [1 ]
Shi, Dianxi [2 ,3 ]
Affiliations
[1] Taiyuan Univ Technol, Coll Informat & Comp, Taiyuan 030024, Shanxi, Peoples R China
[2] NIIDT, AIRC, Beijing 100071, Peoples R China
[3] TAIIC, Tianjin 300457, Peoples R China
Source
IEEE ACCESS | 2019, Vol. 7
Keywords
Cochlear filter cepstral coefficients; Teager energy operator cepstral coefficients; principal component analysis; speech recognition
DOI
10.1109/ACCESS.2019.2918147
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Environmental noise can threaten the stable operation of current speech recognition systems. It is therefore essential to develop a front-end feature set that can identify speech under low signal-to-noise ratios. In this paper, a robust fusion feature that fully characterizes speech information is proposed. First, a novel feature, the cochlear filter cepstral coefficients (CFCC), is extracted using a power-law nonlinear function, which simulates the auditory characteristics of the human ear. Speech enhancement technology is then introduced at the front end of feature extraction, and the extracted features and their first-order differences are combined into new mixed features. An energy feature, the Teager energy operator cepstral coefficient (TEOCC), is also extracted and combined with the above mixed features to form the fusion feature set. Principal component analysis (PCA) is then applied for feature selection and optimization of the feature set, and the final feature set is used in a speaker-independent, isolated-word, small-vocabulary speech recognition system. Finally, a comparative speech recognition experiment using a support vector machine (SVM) is designed to verify the advantages of the proposed feature set. The experimental results show that the proposed feature set not only achieves a high recognition rate and excellent anti-noise performance in speech recognition, but also fully characterizes the auditory and energy information in the speech signals.
Pages: 81911-81922
Page count: 12
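
To make the described pipeline concrete, the sketch below (Python with numpy, scipy, and scikit-learn) illustrates one plausible way to compute a TEOCC-style energy feature with the Teager energy operator, fuse it with CFCC frames and their first-order differences, and feed the PCA-reduced result to an SVM. This is a minimal illustration of the fusion idea described in the abstract, not the authors' implementation: the frame length, hop size, number of cepstral coefficients, the placeholder CFCC matrices, and the synthetic utterances are all assumptions made for the example.

```python
# Illustrative sketch of a CFCC + delta-CFCC + TEOCC fusion pipeline with PCA and SVM.
# All parameter values and the synthetic data below are assumptions, not taken from the paper.
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def teager_energy(frame):
    """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    psi = frame[1:-1] ** 2 - frame[:-2] * frame[2:]
    return np.maximum(psi, 1e-12)          # floor so the log below stays defined

def teocc(signal, frame_len=400, hop=160, n_ceps=12):
    """TEOCC-style feature per frame: TEO -> log magnitude spectrum -> DCT (illustrative)."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        psi = teager_energy(frame)
        log_spec = np.log(np.abs(np.fft.rfft(psi)) + 1e-12)
        feats.append(dct(log_spec, type=2, norm='ortho')[:n_ceps])
    return np.asarray(feats)

def fuse_features(cfcc, teocc_feat):
    """Concatenate CFCC, a simple first-order difference of CFCC, and TEOCC per frame."""
    d_cfcc = np.diff(cfcc, axis=0, prepend=cfcc[:1])
    n = min(len(cfcc), len(teocc_feat))
    return np.hstack([cfcc[:n], d_cfcc[:n], teocc_feat[:n]])

# Toy usage on synthetic data: fake utterances and placeholder CFCC stand in for real features.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        sig = rng.standard_normal(16000) * (1.0 + label)   # fake 1-second utterance at 16 kHz
        cfcc = rng.standard_normal((98, 12))               # placeholder CFCC frames
        fused = fuse_features(cfcc, teocc(sig))
        X.append(fused.mean(axis=0))                       # one utterance-level vector
        y.append(label)

X = PCA(n_components=10).fit_transform(np.asarray(X))      # feature selection/optimization step
clf = SVC(kernel='rbf').fit(X, y)                          # SVM classifier as in the paper's setup
print("training accuracy:", clf.score(X, y))
```

In a real system the placeholder CFCC matrices would be replaced by cochlear-filter features extracted from enhanced speech, and the utterance-level averaging and PCA dimensionality shown here are only one of several reasonable pooling and reduction choices.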