Cross-modal links in spatial attention

Cited by: 212
Authors
Driver, J
Spence, C
Affiliations
[1] UCL, Dept Psychol, Inst Cognit Neurosci, London WC1E 6BT, England
[2] Univ Oxford, Dept Expt Psychol, Oxford OX1 3LD, England
Funding
UK Wellcome Trust;
Keywords
attention; cross-modal; touch; audition; proprioception; vision;
DOI
10.1098/rstb.1998.0286
CLC classification
Q [Biological Sciences];
Discipline codes
07; 0710; 09;
Abstract
A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.
Pages: 1319 - 1331
Page count: 13
Related papers
50 items in total
  • [31] Mismatch negativity of ERP in cross-modal attention
    Yuejia Luo
    Jinghan Wei
    Science in China Series C: Life Sciences, 1997, 40 : 604 - 612
  • [32] Deep medical cross-modal attention hashing
    Zhang, Yong
    Ou, Weihua
    Shi, Yufeng
    Deng, Jiaxin
    You, Xinge
    Wang, Anzhi
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2022, 25 (04): : 1519 - 1536
  • [33] ERP evidence for cross-modal audiovisual effects of endogenous spatial attention within hemifields
    Eimer, M
    van Velzen, J
    Driver, J
    JOURNAL OF COGNITIVE NEUROSCIENCE, 2004, 16 (02) : 272 - 288
  • [34] Lightweight Cross-Modal Multispectral Pedestrian Detection Based on Spatial Reweighted Attention Mechanism
    Deng, Lujuan
    Fu, Ruochong
    Li, Zuhe
    Liu, Boyi
    Xue, Mengze
    Cui, Yuhao
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 78 (03): : 4071 - 4089
  • [35] Cross-modal links in endogenous spatial attention are mediated by common external locations: evidence from event-related brain potentials
    Eimer, M
    Cockburn, D
    Smedley, B
    Driver, J
    EXPERIMENTAL BRAIN RESEARCH, 2001, 139 (04) : 398 - 411
  • [37] Cross-modal recipe retrieval with stacked attention model
    Jing-Jing Chen
    Lei Pang
    Chong-Wah Ngo
    Multimedia Tools and Applications, 2018, 77 : 29457 - 29473
  • [38] Cross-Modal Attention and Sensory Discrimination Thresholds in Autism
    Haigh, Sarah
    Heeger, David
    Heller, Laurie
    Gupta, Akshat
    Dinstein, Ilan
    Minshew, Nancy
    Behrmann, Marlene
    PERCEPTION, 2016, 45 (06) : 693 - 693
  • [39] Cross-Modal Attention for MRI and Ultrasound Volume Registration
    Song, Xinrui
    Guo, Hengtao
    Xu, Xuanang
    Chao, Hanqing
    Xu, Sheng
    Turkbey, Baris
    Wood, Bradford J.
    Wang, Ge
    Yan, Pingkun
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT IV, 2021, 12904 : 66 - 75
  • [40] Cross-Modal Attention Network for Sign Language Translation
    Gao, Liqing
    Wan, Liang
    Feng, Wei
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 985 - 994