LCMV BEAMFORMING WITH SUBSPACE PROJECTION FOR MULTI-SPEAKER SPEECH ENHANCEMENT

Cited: 0
Authors:
Hassani, Amin [1]
Bertrand, Alexander [1]
Moonen, Marc [1]
Affiliations:
[1] Katholieke Univ Leuven, Dept Elect Engn ESAT, Stadius Ctr Dynam Syst, Signal Proc & Data Analyt, Kasteelpk Arenberg 10, B-3001 Leuven, Belgium
Keywords:
LCMV beamforming; generalized eigenvalue decomposition; subspace estimation; speech enhancement; noise reduction
DOI: N/A
Chinese Library Classification: O42 [Acoustics]
Discipline codes: 070206; 082403
Abstract:
The linearly constrained minimum variance (LCMV) beamformer has been widely employed to extract (a mixture of) multiple desired speech signals from a collection of microphone signals, which are also polluted by other interfering speech signals and noise components. In many practical applications, the LCMV beamformer requires that the subspace corresponding to the desired and interferer signals is either known, or estimated by means of a data-driven procedure, e.g., using a generalized eigenvalue decomposition (GEVD). In practice, however, it often occurs that insufficient relevant samples are available to accurately estimate these subspaces, leading to a beamformer with poor output performance. In this paper, we propose a subspace projection-based approach to improve the performance of the LCMV beamformer by exploiting the available data more efficiently. The improved performance achieved by this approach is demonstrated by means of simulation results.
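The abstract refers to the standard LCMV formulation: minimize the output power w^H R w subject to linear constraints C^H w = f, whose closed-form solution is w = R^{-1} C (C^H R^{-1} C)^{-1} f. The sketch below illustrates this textbook solution on simulated narrowband data with one desired and one interfering source; the steering vectors, signal model, and dimensions are all illustrative assumptions, not taken from the paper (which addresses the harder case where the constraint subspaces must themselves be estimated from limited data).

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6     # number of microphones (illustrative)
N = 2000  # number of snapshots

# Hypothetical steering vectors for one desired and one interfering speaker
a_des = rng.standard_normal(M) + 1j * rng.standard_normal(M)
a_int = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Simulated microphone snapshots: desired source + interferer + sensor noise
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
n = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Y = np.outer(a_des, s) + np.outer(a_int, v) + n

# Sample covariance matrix of the microphone signals
R = Y @ Y.conj().T / N

# Constraint matrix: unit gain toward the desired source, null on the interferer
C = np.column_stack([a_des, a_int])
f = np.array([1.0, 0.0])

# Closed-form LCMV solution: w = R^{-1} C (C^H R^{-1} C)^{-1} f
RinvC = np.linalg.solve(R, C)
w = RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)

# Verify the constraints: unit response to the desired steering vector,
# (near-)zero response to the interferer
print(abs(w.conj() @ a_des))  # ~ 1
print(abs(w.conj() @ a_int))  # ~ 0
```

Note that the quality of `R` (and, in the paper's setting, of the estimated signal subspaces replacing the known steering vectors in `C`) degrades when few relevant snapshots are available, which is exactly the regime the proposed subspace-projection approach targets.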
Pages: 91-95
Page count: 5
Related papers (50 in total):
  • [21] A unified network for multi-speaker speech recognition with multi-channel recordings
    Liu, Conggui
    Inoue, Nakamasa
    Shinoda, Koichi
    2017 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC 2017), 2017, : 1304 - 1307
  • [22] Lightweight, Multi-Speaker, Multi-Lingual Indic Text-to-Speech
    Singh, Abhayjeet
    Nagireddi, Amala
    Jayakumar, Anjali
    Deekshitha, G.
    Bandekar, Jesuraja
    Roopa, R.
    Badiger, Sandhya
    Udupa, Sathvik
    Kumar, Saurabh
    Ghosh, Prasanta Kumar
    Murthy, Hema A.
    Zen, Heiga
    Kumar, Pranaw
    Kant, Kamal
    Bole, Amol
    Singh, Bira Chandra
    Tokuda, Keiichi
    Hasegawa-Johnson, Mark
    Olbrich, Philipp
    IEEE OPEN JOURNAL OF SIGNAL PROCESSING, 2024, 5 : 790 - 798
  • [23] A Multi-channel/Multi-speaker Articulatory Database in Mandarin for Speech Visualization
    Zhang, Dan
    Liu, Xianqian
    Yan, Nan
    Wang, Lan
    Zhu, Yun
    Chen, Hui
    2014 9TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2014, : 299 - +
  • [24] Deep Gaussian process based multi-speaker speech synthesis with latent speaker representation
    Mitsui, Kentaro
    Koriyama, Tomoki
    Saruwatari, Hiroshi
    SPEECH COMMUNICATION, 2021, 132 : 132 - 145
  • [25] DNN based multi-speaker speech synthesis with temporal auxiliary speaker ID embedding
    Lee, Junmo
    Song, Kwangsub
    Noh, Kyoungjin
    Park, Tae-Jun
    Chang, Joon-Hyuk
    2019 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2019, : 61 - 64
  • [26] Training Multi-Speaker Neural Text-to-Speech Systems using Speaker-Imbalanced Speech Corpora
    Luong, Hieu-Thi
    Wang, Xin
    Yamagishi, Junichi
    Nishizawa, Nobuyuki
    INTERSPEECH 2019, 2019, : 1303 - 1307
  • [27] Phoneme Duration Modeling Using Speech Rhythm-Based Speaker Embeddings for Multi-Speaker Speech Synthesis
    Fujita, Kenichi
    Ando, Atsushi
    Ijima, Yusuke
    INTERSPEECH 2021, 2021, : 3141 - 3145
  • [28] Signal Subspace Speech Enhancement with Oblique Projection and Normalization
    Surendran, Sudeep
    Kumar, T. Kishore
    RADIOENGINEERING, 2017, 26 (04) : 1161 - 1168
  • [29] CLeLfPC: a Large Open Multi-Speaker Corpus of French Cued Speech
    Bigi, Brigitte
    Zimmermann, Maryvonne
    Andre, Carine
    LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 987 - 994
  • [30] Lip2Speech: Lightweight Multi-Speaker Speech Reconstruction with Gabor Features
    Dong, Zhongping
    Xu, Yan
    Abel, Andrew
    Wang, Dong
    APPLIED SCIENCES-BASEL, 2024, 14 (02):