An Electroglottograph Auxiliary Neural Network for Target Speaker Extraction

Cited by: 3
Authors
Chen, Lijiang [1 ]
Mo, Zhendong [1 ]
Ren, Jie [1 ]
Cui, Chunfeng [1 ]
Zhao, Qi [1 ]
Affiliations
[1] Beihang Univ, Sch Elect & Informat Engn, 37 Xueyuan Rd, Beijing 100191, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 01
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
speech extraction; SpeakerBeam; electroglottograph; pre-processing; SPEECH;
DOI
10.3390/app13010469
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
The extraction of a target speaker from mixtures of different speakers has attracted extensive attention and research. Previous studies have proposed several methods, such as SpeakerBeam, that tackle this speech extraction problem by using clean speech from the target speaker as enrollment information. However, clean speech cannot be obtained immediately in most cases. In this study, we addressed this problem by extracting features from the electroglottographs (EGGs) of target speakers. An EGG is a laryngeal function detection technology that measures the impedance and condition of the vocal folds. Because of their collection method, EGGs have excellent anti-noise performance and can be obtained even in rather noisy environments. To extract the clean speech of a target speaker from mixtures of different speakers, we utilized deep learning methods with EGG signals as the auxiliary information, so that no clean speech from the target speaker is needed. Based on the characteristics of EGG signals, we developed an EGG_auxiliary network that trains a speaker extraction model under the assumption that EGG signals carry information about the corresponding speech signals. Additionally, we took the correlations between EGGs and speech signals in silent and unvoiced segments into consideration and developed a new network involving EGG preprocessing. We achieved improvements in the scale-invariant signal-to-distortion ratio improvement (SISDRi) of 0.89 dB on the Chinese Dual-Mode Emotional Speech Database (CDESD) and 1.41 dB on the EMO-DB dataset. In addition, our methods alleviated the poor performance with target speakers of the same gender, narrowed the gap between same-gender and different-gender situations, and reduced the loss of precision under low-SNR conditions.
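For context, the SISDRi metric reported in the abstract is the improvement in scale-invariant signal-to-distortion ratio from the raw mixture to the extracted signal, both measured against the clean reference. A minimal sketch of the standard SI-SDR computation (NumPy; the function name `si_sdr` is illustrative, not from the paper):

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Scale-invariant signal-to-distortion ratio (SI-SDR) in dB.

    Both signals are made zero-mean, and the reference is scaled by its
    optimal projection onto the estimate, so overall gain differences
    do not affect the score.
    """
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Optimal scaling factor for the reference (least-squares projection)
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference          # scaled target component
    noise = estimate - target           # residual distortion + interference
    return 10.0 * np.log10(np.sum(target ** 2) / np.sum(noise ** 2))

# SISDRi is then the difference between the extracted signal's score
# and the unprocessed mixture's score:
#   sisdri = si_sdr(extracted, clean) - si_sdr(mixture, clean)
```

A higher SI-SDR means less residual distortion; the 0.89 dB and 1.41 dB figures above are gains in this quantity over the baseline extraction.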
Pages: 19
Related Papers
(50 records)
  • [41] TIME-DOMAIN SPEAKER EXTRACTION NETWORK
    Xu, Chenglin
    Rao, Wei
    Chng, Eng Siong
    Li, Haizhou
    2019 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU 2019), 2019, : 327 - 334
  • [42] Capture inter-speaker information with a neural network for speaker identification
    Wang, L
    Chen, K
    Chi, HH
    IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL V, 2000, : 247 - 252
  • [43] Streaming Target-Speaker ASR with Neural Transducer
    Moriya, Takafumi
    Sato, Hiroshi
    Ochiai, Tsubasa
    Delcroix, Marc
    Shinozaki, Takahiro
    INTERSPEECH 2022, 2022, : 2673 - 2677
  • [44] Target Speech Extraction: Independent Vector Extraction Guided by Supervised Speaker Identification
    Malek, Jiri
    Jansky, Jakub
    Koldovsky, Zbynek
    Kounovsky, Tomas
    Cmejla, Jaroslav
    Zdansky, Jindrich
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 2295 - 2309
  • [45] ARTIFICIAL NEURAL NETWORK FEATURES FOR SPEAKER DIARIZATION
    Yella, Harsha
    Stolcke, Andreas
    Slaney, Malcolm
    2014 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY SLT 2014, 2014, : 402 - 406
  • [46] A Deep Neural Network Model for Speaker Identification
    Ye, Feng
    Yang, Jun
    APPLIED SCIENCES-BASEL, 2021, 11 (08):
  • [47] Convolutional neural network vectors for speaker recognition
    Hourri, Soufiane
    Nikolov, Nikola S.
    Kharroubi, Jamal
    INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2021, 24 (02) : 389 - 400
  • [49] SEQUENCE SUMMARIZING NEURAL NETWORK FOR SPEAKER ADAPTATION
    Vesely, Karel
    Watanabe, Shinji
    Zmolikova, Katerina
    Karafiat, Martin
    Burget, Lukas
    Cernocky, Jan Honza
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 5315 - 5319
  • [50] Speaker Recognition Based on Quantum Neural Network
    Wang, Geng
    Wang, Jin Ming
    Sun, Jian
    2ND INTERNATIONAL SYMPOSIUM ON COMPUTER NETWORK AND MULTIMEDIA TECHNOLOGY (CNMT 2010), VOLS 1 AND 2, 2010, : 238 - 241