EEG-based Auditory Attention Decoding: Impact of Reverberation, Noise and Interference Reduction

Citations: 0
Authors
Aroudi, Ali [1 ]
Doclo, Simon [1 ]
Affiliations
[1] Carl von Ossietzky Univ Oldenburg, Dept Med Phys & Acoust, Oldenburg, Germany
Source
2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) | 2017
Keywords
auditory attention decoding; noisy and reverberant signal; speech envelope; noise reduction; dereverberation; EEG signal; brain computer interface; ENHANCEMENT;
DOI
Not available
CLC Classification Code
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
To identify the attended speaker from single-trial EEG recordings in an acoustic scenario with two competing speakers, an auditory attention decoding (AAD) method has recently been proposed. This AAD method requires the clean speech signals of both the attended and the unattended speaker as reference signals for decoding. In practice, however, only the binaural signals are available, which contain several undesired acoustic components (reverberation, background noise and interference) and are influenced by anechoic head-related transfer functions (HRTFs). To generate appropriate reference signals for decoding from the binaural signals, it is important to understand the impact of these acoustic components on the AAD performance. In this paper, we investigate this impact for several acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy), using simulated speech signals in which different acoustic components have been reduced. The experimental results show that jointly suppressing reverberation, background noise and interference as undesired acoustic components is of great importance for obtaining a good decoding performance.
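As a rough illustration of the decoding scheme the abstract refers to, the following Python sketch shows a common correlation-based AAD pipeline: a linear stimulus-reconstruction decoder maps the EEG to an estimated speech envelope, and the speaker whose reference envelope correlates best with this estimate is taken as attended. This is a minimal sketch under standard assumptions (ridge-regularized least-squares decoder, Hilbert envelopes, arbitrary lag range and regularization constant); the function names and parameter values are illustrative and not taken from the paper.

import numpy as np
from scipy.signal import hilbert


def build_lagged_eeg(eeg, n_lags):
    """Stack time-lagged copies of the EEG channels; eeg has shape (time, channels)."""
    n_time, n_chan = eeg.shape
    lagged = np.zeros((n_time, n_chan * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * n_chan:(lag + 1) * n_chan] = eeg[:n_time - lag, :]
    return lagged


def train_decoder(eeg, envelope, n_lags=32, reg=1e3):
    """Ridge-regularized least-squares stimulus-reconstruction decoder (EEG -> envelope)."""
    X = build_lagged_eeg(eeg, n_lags)
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ envelope)
    return w


def speech_envelope(speech, fs, fs_eeg=64):
    """Hilbert envelope of a speech signal, downsampled to the EEG rate (illustrative choice)."""
    env = np.abs(hilbert(speech))
    step = int(fs / fs_eeg)
    return env[::step]


def decode_attention(eeg, env_speaker1, env_speaker2, w, n_lags=32):
    """Reconstruct the envelope from EEG and pick the speaker whose reference
    envelope yields the higher Pearson correlation with the reconstruction."""
    X = build_lagged_eeg(eeg, n_lags)
    env_hat = X @ w
    r1 = np.corrcoef(env_hat, env_speaker1)[0, 1]
    r2 = np.corrcoef(env_hat, env_speaker2)[0, 1]
    return 1 if r1 >= r2 else 2

In the setting studied in the paper, the reference envelopes (env_speaker1, env_speaker2 above) would be derived either from the clean speech signals or from the noisy, reverberant binaural signals, possibly after noise reduction, interference reduction and dereverberation; this is exactly where the investigated acoustic components enter the reference signals and hence affect decoding performance.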
Pages: 3042 - 3047
Page count: 6