Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene

Cited: 75
Authors:
Vander Ghinst, Marc [1 ,2 ]
Bourguignon, Mathieu [1 ,3 ,4 ]
Op de Beeck, Marc [1 ]
Wens, Vincent [1 ]
Marty, Brice [1 ]
Hassid, Sergio [2 ]
Choufani, Georges [2 ]
Jousmäki, Veikko [3 ]
Hari, Riitta [3 ]
Van Bogaert, Patrick [1 ]
Goldman, Serge [1 ]
De Tiège, Xavier [1 ]
Affiliations:
[1] Univ Libre Bruxelles, Lab Cartographie Fonct Cerveau, UNI ULB Neurosci Inst, 808 Lennik St, B-1070 Brussels, Belgium
[2] Univ Libre Bruxelles, Serv ORL & Chirurg Cerv Faciale, ULB Hop Erasme, B-1070 Brussels, Belgium
[3] Aalto Univ, Sch Sci, Brain Res Unit, Dept Neurosci & Biomed Engn, FI-00076 Espoo, Finland
[4] Basque Ctr Cognit Brain & Language, BCBL, San Sebastian 20009, Spain
Source:
JOURNAL OF NEUROSCIENCE | 2016, Vol. 36, No. 5
Keywords:
coherence analysis; magnetoencephalography; speech in noise; BRAIN-WAVE RECOGNITION; CORTICAL REPRESENTATION; NEURONAL OSCILLATIONS; RESPONSES; NETWORK; CORTEX; PHASE; MEG; IDENTIFICATION; SEPARATION;
DOI:
10.1523/JNEUROSCI.1730-15.2016
Chinese Library Classification (CLC):
Q189 [Neuroscience]
Discipline Code:
071006
Abstract:
Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ~0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail-party conditions, with the multitalker background noise, the coupling was stronger at both frequencies for the attended speech stream than for the unattended Multitalker background, and the coupling strengths decreased as the Multitalker background level increased. During the cocktail-party conditions, the ~0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of the listener's left superior temporal gyri in extracting the slow ~0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene.
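The corticospeech coupling described in the abstract is quantified as spectral coherence between a speech temporal envelope and a neural signal. A minimal sketch of that computation on synthetic data (all signals, parameters, and the use of `scipy.signal.coherence` are illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np
from scipy.signal import coherence

# Synthetic illustration (not the study's data): a slow "speech envelope"
# modulated at 0.5 Hz, a simulated cortical signal that partially tracks
# it, and an independent noise signal as an uncoupled control.
fs = 100.0                       # sampling rate, Hz (hypothetical)
t = np.arange(0, 300, 1 / fs)    # 300 s of data
rng = np.random.default_rng(0)

envelope = np.sin(2 * np.pi * 0.5 * t)                   # attended-stream envelope
cortical = 0.8 * envelope + rng.standard_normal(t.size)  # envelope-tracking signal
unrelated = rng.standard_normal(t.size)                  # uncoupled control

# Welch-style magnitude-squared coherence (~0.1 Hz frequency resolution).
f, coh_attended = coherence(envelope, cortical, fs=fs, nperseg=1024)
_, coh_control = coherence(envelope, unrelated, fs=fs, nperseg=1024)

idx = np.argmin(np.abs(f - 0.5))  # frequency bin nearest 0.5 Hz
print(coh_attended[idx], coh_control[idx])
```

With these settings the coherence at ~0.5 Hz is high for the envelope-tracking signal and near zero for the uncoupled control, mirroring the study's logic of comparing coupling for attended versus unattended streams at a frequency of interest.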
Pages: 1596-1606 (11 pages)