Human EEG and Recurrent Neural Networks Exhibit Common Temporal Dynamics During Speech Recognition

Cited: 4
|
Authors
Hashemnia, Saeedeh [1 ]
Grasse, Lukas [1 ]
Soni, Shweta [1 ]
Tata, Matthew S. [1 ]
Affiliations
[1] Univ Lethbridge, Dept Neurosci, Canadian Ctr Behav Neurosci, Lethbridge, AB, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
EEG; artificial neural network; speech tracking; auditory; theta; recurrent; RNN; PHASE PATTERNS; OSCILLATIONS; COMPREHENSION; ENTRAINMENT; RESPONSES; DEPENDS; DELTA; THETA;
DOI
10.3389/fnsys.2021.617605
Chinese Library Classification
Q189 [Neuroscience];
Subject Classification Code
071006;
Abstract
Recent deep-learning artificial neural networks have shown remarkable success in recognizing natural human speech; however, the reasons for their success are not entirely understood. Their success might arise because state-of-the-art networks use recurrent layers or dilated convolutional layers that enable the network to use a time-dependent feature space. The importance of time-dependent features in human cortical mechanisms of speech perception, measured by electroencephalography (EEG) and magnetoencephalography (MEG), has also been of particular recent interest. It is possible that recurrent neural networks (RNNs) achieve their success by emulating aspects of cortical dynamics, albeit through very different computational mechanisms. In that case, we should observe commonalities between the temporal dynamics of deep-learning models, particularly in recurrent layers, and brain electrical activity (EEG) during speech perception. We explored this prediction by presenting the same sentences to both human listeners and the Deep Speech RNN, and compared the temporal dynamics of the EEG and of the RNN units for identical sentences. We tested whether the recently discovered phenomenon of envelope phase tracking in the human EEG is also evident in RNN hidden layers. We furthermore predicted that the clustering of dissimilarity between model representations of pairs of stimuli would be similar in both RNN and EEG dynamics. We found that the dynamics of both the recurrent layer of the network and human EEG signals exhibit envelope phase tracking with similar time lags. We also computed the representational distance matrices (RDMs) of brain and network responses to speech stimuli. The model RDMs became more similar to the brain RDM when going from early network layers to later ones, peaking at the recurrent layer. These results suggest that the Deep Speech RNN captures a representation of the temporal features of speech in a manner similar to the human brain.
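The RDM comparison described in the abstract can be sketched as follows. This is a minimal illustration of the general representational-similarity approach, not the authors' code: the choice of correlation distance between stimulus responses and of Spearman correlation between RDM upper triangles are common conventions assumed here, and the simulated response matrices are purely hypothetical stand-ins for EEG and network-layer activity.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: pairwise correlation
    distance (1 - Pearson r) between rows of `responses`, which has
    shape (n_stimuli, n_features)."""
    return squareform(pdist(responses, metric="correlation"))

def rdm_similarity(rdm_a, rdm_b):
    """Compare two RDMs via Spearman correlation of their
    off-diagonal upper triangles (the standard RSA comparison)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

# Toy example: simulated responses of two systems to 10 shared stimuli
rng = np.random.default_rng(0)
stim = rng.standard_normal((10, 64))
eeg_like = stim + 0.5 * rng.standard_normal((10, 64))    # stand-in for EEG responses
layer_like = stim + 0.5 * rng.standard_normal((10, 64))  # stand-in for an RNN layer
print(rdm_similarity(rdm(eeg_like), rdm(layer_like)))
```

In the study's layer-by-layer analysis, `layer_like` would be replaced by each network layer's responses in turn, and the resulting similarity to the brain RDM tracked across layers.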
Pages: 12