Phase Processing for Single-Channel Speech Enhancement

Cited by: 176
Authors
Gerkmann, Timo [1 ,2 ,3 ]
Krawczyk-Becker, Martin [1 ]
Le Roux, Jonathan [4 ,5 ]
Affiliations
[1] Siemens Corp Res, Princeton, NJ USA
[2] Royal Inst Technol, Stockholm, Sweden
[3] Carl von Ossietzky Univ Oldenburg, D-26111 Oldenburg, Germany
[4] Mitsubishi Elect Res Labs, Cambridge, MA USA
[5] Nippon Telegraph & Tel Commun Sci Labs, Kyoto, Japan
Keywords
SPECTRAL MAGNITUDE ESTIMATION; TIME FOURIER-TRANSFORM; SIGNAL ESTIMATION; VOCODER; AUDIO;
DOI
10.1109/MSP.2014.2369251
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
With the advancement of technology, both assisted listening devices and speech communication devices are becoming more portable and also more frequently used. As a consequence, users of devices such as hearing aids, cochlear implants, and mobile telephones expect their devices to work robustly anywhere and at any time. This holds in particular for challenging noisy environments like a cafeteria, a restaurant, a subway, a factory, or in traffic. One way to make assisted listening devices robust to noise is to apply speech enhancement algorithms. To improve the corrupted speech, spatial diversity can be exploited through a constructive combination of microphone signals (so-called beamforming), and the different spectro-temporal properties of speech and noise can be exploited as well. Here, we focus on single-channel speech enhancement algorithms that rely on spectro-temporal properties. On the one hand, these algorithms can be employed when the miniaturization of devices only allows for using a single microphone. On the other hand, when multiple microphones are available, single-channel algorithms can be employed as a postprocessor at the output of a beamformer. To exploit the short-term stationary properties of natural sounds, many of these approaches process the signal in a time-frequency representation, most frequently the short-time discrete Fourier transform (STFT) domain. In this domain, the coefficients of the signal are complex-valued and can therefore be represented by their absolute value (referred to in the literature both as STFT magnitude and STFT amplitude) and their phase. While the modeling and processing of the STFT magnitude has been the center of interest over the past three decades, the phase has been largely ignored. In this article, we review the role of phase processing for speech enhancement in the context of assisted listening and speech communication devices. We explain why most of the research conducted in this field used to focus on estimating spectral magnitudes in the STFT domain, and why phase processing has recently been attracting increasing interest in the speech enhancement community. Furthermore, we review both early and recent methods for phase processing in speech enhancement. We aim to show that phase processing is an exciting field of research with the potential to make assisted listening and speech communication devices more robust in acoustically challenging environments.
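The abstract contrasts magnitude-domain processing with the STFT phase, which is conventionally left untouched. The following Python sketch illustrates that convention only; it is not the authors' method. It decomposes a noisy signal into STFT magnitude and phase, applies a simple spectral-subtraction gain to the magnitude, and resynthesizes the signal with the unmodified noisy phase. The noise estimate from the leading frames, the frame length, and the spectral floor are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import stft, istft


def magnitude_only_enhancement(noisy, fs, n_noise_frames=10, floor=0.1):
    """Illustrative magnitude-domain enhancement that leaves the STFT phase untouched.

    The noise magnitude is estimated from the first `n_noise_frames` frames,
    an assumption made for this sketch rather than a method from the article.
    """
    # Analysis: complex STFT coefficients of the noisy signal
    _, _, Y = stft(noisy, fs=fs, nperseg=512)

    # Polar decomposition into magnitude and phase
    mag = np.abs(Y)
    phase = np.angle(Y)

    # Crude noise magnitude estimate from the assumed noise-only leading frames
    noise_mag = mag[:, :n_noise_frames].mean(axis=1, keepdims=True)

    # Magnitude-domain spectral subtraction with a spectral floor
    enhanced_mag = np.maximum(mag - noise_mag, floor * mag)

    # Resynthesis: enhanced magnitude combined with the *noisy* phase
    _, enhanced = istft(enhanced_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced


if __name__ == "__main__":
    # Example usage with synthetic data (white noise standing in for a noisy recording)
    fs = 16000
    rng = np.random.default_rng(0)
    noisy = rng.standard_normal(fs)  # 1 s placeholder signal
    enhanced = magnitude_only_enhancement(noisy, fs)
```

Keeping the noisy phase at resynthesis is exactly the convention the article revisits; phase-aware approaches would replace `phase` above with an estimate of the clean-speech phase.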
Pages: 55-66
Number of pages: 12
Related Papers (50 in total)
  • [31] Adaptive recurrent nonnegative matrix factorization with phase compensation for single-channel speech enhancement
    Tank, Vanita Raj; Mahajan, Shrinivas Padmakar
    Multimedia Tools and Applications, 2022, 81: 28249-28294
  • [32] Single-channel Speech Enhancement Using Graph Fourier Transform
    Zhang, Chenhui; Pan, Xiang
    INTERSPEECH 2022, 2022: 946-950
  • [33] Hybrid quality measures for single-channel speech enhancement algorithms
    Dreiseitel, P.
    European Transactions on Telecommunications, 2002, 13(2): 159-165
  • [34] Single-channel multiple regression for in-car speech enhancement
    Li, W. F.; Itou, K.; Takeda, K.; Itakura, F.
    IEICE Transactions on Information and Systems, 2006, E89-D(3): 1032-1039
  • [35] A two-stage method for single-channel speech enhancement
    Hamid, M. E.; Fukabayashi, T.
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2006, E89-A(4): 1058-1068
  • [36] Combine Waveform and Spectral Methods for Single-channel Speech Enhancement
    Li, Miao; Zhang, Hui; Zhang, Xueliang
    Proceedings of the 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2022: 47-52
  • [37] Deep Learning Models for Single-Channel Speech Enhancement on Drones
    Mukhutdinov, Dmitrii; Alex, Ashish; Cavallaro, Andrea; Wang, Lin
    IEEE Access, 2023, 11: 22993-23007
  • [38] Single-channel speech enhancement using learnable loss mixup
    Chang, Oscar; Tran, Dung N.; Koishida, Kazuhito
    INTERSPEECH 2021, 2021: 2696-2700
  • [39] Single-channel speech enhancement based on frequency domain ALE
    Nakanishi, Isao; Nagata, Yuudai; Itoh, Yoshio; Fukui, Yutaka
    2006 IEEE International Symposium on Circuits and Systems, Vols 1-11, Proceedings, 2006: 2541-2544
  • [40] Deep Neural Network for Supervised Single-Channel Speech Enhancement
    Saleem, Nasir; Irfan Khattak, Muhammad; Ali, Muhammad Yousaf; Shafi, Muhammad
    Archives of Acoustics, 2019, 44(1): 3-12