Rhythmicity and cross-modal temporal cues facilitate detection

Cited by: 60
Authors
ten Oever, Sanne [1 ]
Schroeder, Charles E. [2 ,3 ,4 ]
Poeppel, David [5 ]
van Atteveldt, Nienke [1 ,6 ,7 ]
Zion-Golumbic, Elana [2 ,3 ,4 ,8 ]
Affiliations
[1] Maastricht Univ, Fac Psychol & Neurosci, NL-6200 MD Maastricht, Netherlands
[2] Columbia Univ, Med Ctr, Dept Psychiat, New York, NY 10032 USA
[3] Columbia Univ, Med Ctr, Dept Neurol, New York, NY 10032 USA
[4] Nathan S Kline Inst Psychiat Res, Orangeburg, NY 10962 USA
[5] NYU, Dept Psychol, New York, NY 10003 USA
[6] Vrije Univ Amsterdam, Fac Psychol & Educ, Dept Educ Neurosci, Amsterdam, Netherlands
[7] Vrije Univ Amsterdam, Inst Learn, Amsterdam, Netherlands
[8] Bar Ilan Univ, Gonda Brain Res Ctr, Ramat Gan, Israel
Keywords
Temporal prediction; Audiovisual integration; Rhythmicity; Detection; LOW-FREQUENCY OSCILLATIONS; NEURONAL OSCILLATIONS; MULTISENSORY INTEGRATION; AUDITORY DETECTION; VISUAL SPEECH; ATTENTION; DYNAMICS; RECALIBRATION; ENTRAINMENT; COMBINATION;
DOI
10.1016/j.neuropsychologia.2014.08.008
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology];
Discipline codes
03; 0303; 030303; 04; 0402;
Abstract
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent of response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system, which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally. (C) 2014 Elsevier Ltd. All rights reserved.
Pages: 43-50
Page count: 8
Related papers
50 items in total
  • [41] Chemosensory cross-modal stroop effects: Congruent odors facilitate taste identification
    White, Theresa L.
    Prescott, John
    CHEMICAL SENSES, 2007, 32 (04) : 337 - 341
  • [42] Multi-Modal Sarcasm Detection with Interactive In-Modal and Cross-Modal Graphs
    Liang, Bin
    Lou, Chenwei
    Li, Xiang
    Gui, Lin
    Yang, Min
    Xu, Ruifeng
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4707 - 4715
  • [43] Are odors the best cues to memory? A cross-modal comparison of associative memory stimuli
    Herz, RS
    OLFACTION AND TASTE XII: AN INTERNATIONAL SYMPOSIUM, 1998, 855 : 670 - 674
  • [44] Cross-Modal Association between Hand-Feel Touch and Taste Cues
    Seo, Han-Seok
    Pramudya, Ragita
    CHEMICAL SENSES, 2019, 44 (07) : E11 - E11
  • [45] Cross-modal plasticity
    [Anonymous]
    TRENDS IN COGNITIVE SCIENCES, 1997, 1 (07) : 251 - 251
  • [46] Cross-modal perception
    Zydlewska, Agnieszka
    Grabowska, Anna
    NEUROPSYCHIATRIA I NEUROPSYCHOLOGIA, 2011, 6 (02): : 60 - 70
  • [47] A cross-modal crowd counting method combining CNN and cross-modal transformer
    Zhang, Shihui
    Wang, Wei
    Zhao, Weibo
    Wang, Lei
    Li, Qunpeng
    IMAGE AND VISION COMPUTING, 2023, 129
  • [48] Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information
    Yang, Pengcheng
    Zhang, Zhihan
    Luo, Fuli
    Li, Lei
    Huang, Chengyang
    Sun, Xu
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 2680 - 2686
  • [49] A semi-supervised cross-modal memory bank for cross-modal retrieval
    Huang, Yingying
    Hu, Bingliang
    Zhang, Yipeng
    Gao, Chi
    Wang, Quan
    NEUROCOMPUTING, 2024, 579
  • [50] Cross-Modal Center Loss for 3D Cross-Modal Retrieval
    Jing, Longlong
    Vahdani, Elahe
    Tan, Jiaxing
    Tian, Yingli
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3141 - 3150