Relating EEG to continuous speech using deep neural networks: a review

Cited by: 20
Authors:
Puffay, Corentin [1 ,2 ]
Accou, Bernd [1 ,2 ]
Bollens, Lies [1 ,2 ]
Monesi, Mohammad Jalilpour [1 ,2 ]
Vanthornhout, Jonas [1 ]
Van Hamme, Hugo [2 ]
Francart, Tom [1 ]
Affiliations:
[1] Katholieke Univ Leuven, Dept Neurosci, ExpORL, Leuven, Belgium
[2] Katholieke Univ Leuven, Dept Elect Engn ESAT, PSI, Leuven, Belgium
Keywords:
EEG; deep learning; speech; auditory neuroscience; entrainment; frequency; brain; level
DOI:
10.1088/1741-2552/ace73f
Chinese Library Classification: R318 (biomedical engineering)
Discipline code: 0831
Abstract:
Objective. When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are currently used to relate the EEG recording to the corresponding speech signal, and their ability to find a mapping between the two signals serves as a measure of neural tracking of speech. Such models are limited in that they assume a linear EEG-speech relationship, ignoring the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech. Approach. This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speaker paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark for model analysis. Main results. We gathered 29 studies. The main methodological issues we found are biased cross-validation, data leakage leading to over-fitted models, and data sizes disproportionate to model complexity. In addition, we address the requirements for a standard benchmark for model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task. Significance. We present a review summarizing the main deep-learning-based studies that relate EEG to speech, while addressing methodological pitfalls and important considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning in EEG-speech decoding.
Pages: 28