Relating EEG to continuous speech using deep neural networks: a review

Cited by: 20
Authors
Puffay, Corentin [1 ,2 ]
Accou, Bernd [1 ,2 ]
Bollens, Lies [1 ,2 ]
Monesi, Mohammad Jalilpour [1 ,2 ]
Vanthornhout, Jonas [1 ]
Van Hamme, Hugo [2 ]
Francart, Tom [1 ]
Affiliations
[1] Katholieke Univ Leuven, Dept Neurosci, ExpORL, Leuven, Belgium
[2] Katholieke Univ Leuven, Dept Elect Engn ESAT, PSI, Leuven, Belgium
Keywords
EEG; deep learning; speech; auditory neuroscience; entrainment; frequency; brain; level
DOI
10.1088/1741-2552/ace73f
Chinese Library Classification
R318 [Biomedical Engineering]
Subject Classification Code
0831
Abstract
Objective. When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal, and their ability to map one signal onto the other is used as a measure of neural tracking of speech. Such models are limited because they assume a linear EEG-speech relationship, which ignores the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech. Approach. This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speaker paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark for model analysis. Main results. We gathered 29 studies. The main methodological issues we found are biased cross-validation, data leakage leading to over-fitted models, and datasets that are too small for the model's complexity. In addition, we address the requirements for a standard benchmark for model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task. Significance. We summarize the main deep-learning-based studies that relate EEG to speech while addressing methodological pitfalls and important considerations for this newly expanding field. Our review is particularly relevant given the growing application of deep learning in EEG-speech decoding.
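For context, the sketch below illustrates the linear baseline the abstract refers to: a backward (stimulus-reconstruction) model that maps EEG onto the speech envelope, evaluated on a held-out segment so that no samples leak between training and testing. This is a minimal illustration, not code from the reviewed paper: the data are synthetic placeholders, and the sampling rate, lag window, and ridge parameter are arbitrary assumptions rather than values from any reviewed study.

```python
# Minimal sketch (not from the paper): a linear backward model that
# reconstructs the speech envelope from EEG, the baseline against which
# the reviewed deep models are compared. All data below are synthetic
# placeholders; sampling rate, lag window, and regularization are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

fs = 64                              # sampling rate in Hz (assumed)
n_channels = 64                      # number of EEG channels (assumed)
n_samples = fs * 300                 # 5 minutes of synthetic data

envelope = rng.standard_normal(n_samples)            # speech envelope (placeholder)
eeg = rng.standard_normal((n_samples, n_channels))   # EEG (placeholder)
eeg[:, 0] += 0.1 * envelope                          # inject weak "neural tracking"

def lag_matrix(x, max_lag):
    """Stack EEG at lags 0..max_lag samples, so the envelope at time t is
    reconstructed from EEG at t..t+max_lag (the response follows the stimulus)."""
    shifted = [np.roll(x, -lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(shifted, axis=1)
    X[-max_lag:] = 0.0               # discard samples wrapped around by np.roll
    return X

X = lag_matrix(eeg, max_lag=int(0.25 * fs))   # integration window of ~250 ms

# Chronological split: the held-out segment never enters training,
# the kind of leakage-free evaluation the review argues for.
split = int(0.8 * n_samples)
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = envelope[:split], envelope[split:]

# Ridge regression in closed form: w = (X'X + lambda*I)^(-1) X'y
lam = 1e2
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X.shape[1]), X_tr.T @ y_tr)

reconstruction = X_te @ w
r = np.corrcoef(reconstruction, y_te)[0, 1]   # neural-tracking measure (Pearson r)
print(f"Held-out reconstruction correlation: r = {r:.3f}")
```

In the deep-learning studies reviewed here, the linear map is replaced by a neural network and evaluation often relies on the match-mismatch task mentioned in the abstract, but the leakage-free split illustrated above applies in either case.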
Pages: 28
Related papers
50 items in total
  • [31] Relating Information Complexity and Training in Deep Neural Networks
    Gain, Alex
    Siegelmann, Hava
    MICRO- AND NANOTECHNOLOGY SENSORS, SYSTEMS, AND APPLICATIONS XI, 2019, 10982
  • [32] Neural networks for statistical recognition of continuous speech
    Morgan, N.
    Bourlard, H. A.
    PROCEEDINGS OF THE IEEE, 1995, 83 (05) : 742 - 770
  • [33] Continuous speech recognition by convolutional neural networks
    Zhang, Qing-Qing
    Liu, Yong
    Pan, Jie-Lin
    Yan, Yong-Hong
    Gongcheng Kexue Xuebao/Chinese Journal of Engineering, 2015, 37 (09) : 1212 - 1217
  • [34] Binaural Classification for Reverberant Speech Segregation Using Deep Neural Networks
    Jiang, Yi
    Wang, DeLiang
    Liu, RunSheng
    Feng, ZhenMing
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2014, 22 (12) : 2112 - 2121
  • [35] Combining Speech Features for Aggression Detection Using Deep Neural Networks
    Jaafar, Noussaiba
    Lachiri, Zied
    2020 5TH INTERNATIONAL CONFERENCE ON ADVANCED TECHNOLOGIES FOR SIGNAL AND IMAGE PROCESSING (ATSIP'2020), 2020,
  • [36] Audio Visual Speech Recognition Using Deep Recurrent Neural Networks
    Thanda, Abhinav
    Venkatesan, Shankar M.
    MULTIMODAL PATTERN RECOGNITION OF SOCIAL SIGNALS IN HUMAN-COMPUTER-INTERACTION, MPRSS 2016, 2017, 10183 : 98 - 109
  • [37] Large Vocabulary Speech Recognition Using Deep Tensor Neural Networks
    Yu, Dong
    Deng, Li
    Seide, Frank
    13TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2012 (INTERSPEECH 2012), VOLS 1-3, 2012, : 6 - 9
  • [38] Audio-Visual Speech Enhancement using Deep Neural Networks
    Hou, Jen-Cheng
    Wang, Syu-Siang
    Lai, Ying-Hui
    Lin, Jen-Chun
    Tsao, Yu
    Chang, Hsiu-Wen
    Wang, Hsin-Min
    2016 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA), 2016,
  • [39] Speech Enhancement for Speaker Recognition Using Deep Recurrent Neural Networks
    Tkachenko, Maxim
    Yamshinin, Alexander
    Lyubimov, Nikolay
    Kotov, Mikhail
    Nastasenko, Marina
    SPEECH AND COMPUTER, SPECOM 2017, 2017, 10458 : 690 - 699
  • [40] Isolated Word Speech Recognition System Using Deep Neural Networks
    Dhanashri, Dhavale
    Dhonde, S. B.
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON DATA ENGINEERING AND COMMUNICATION TECHNOLOGY, ICDECT 2016, VOL 1, 2017, 468 : 9 - 17