Audio-visual integration during overt visual attention

Cited by: 0
Authors
Quigley, Cliodhna [1 ]
Onat, Selim [1 ]
Harding, Sue [2 ]
Cooke, Martin [2 ]
Koenig, Peter [1 ]
Affiliations
[1] Univ Osnabruck, Inst Cognit Sci, D-49069 Osnabruck, Germany
[2] Univ Sheffield, Dept Comp Sci, Speech & Hearing Grp, Sheffield S10 2TN, S Yorkshire, England
Source
JOURNAL OF EYE MOVEMENT RESEARCH | 2007, Vol. 1, No. 2
Keywords
DOI
None available
CLC Number
R77 [Ophthalmology];
Discipline Code
100212;
Abstract
How do different sources of information arising from different modalities interact to control where we look? To answer this question under real-world operational conditions, we presented natural images and spatially localized sounds in (V)isual, Audiovisual (AV) and (A)uditory conditions and measured subjects' eye movements. Our results demonstrate that eye movements in the AV condition are spatially biased towards the part of the image corresponding to the sound source. Interestingly, this spatial bias depends on the probability that a given image region is fixated (its saliency) in the V condition. This indicates that fixation behaviour in the AV condition is the result of an integration process. Regression analysis shows that this integration is best accounted for by a linear combination of the unimodal saliencies.
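The regression described in the abstract can be illustrated with a minimal sketch: model the per-region AV saliency as a weighted sum of the unimodal (V and A) saliencies and recover the weights by least squares. This is a hypothetical illustration with synthetic data, not the authors' actual analysis pipeline; all variable names and the simulated weights (0.7, 0.3) are assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch: fit audiovisual (AV) saliency per image region as a
# linear combination of unimodal visual (V) and auditory (A) saliencies.
rng = np.random.default_rng(0)
n_regions = 100
sal_v = rng.random(n_regions)  # visual saliency per region (V condition)
sal_a = rng.random(n_regions)  # auditory saliency per region (A condition)

# Simulate AV saliency as a weighted sum of the unimodal maps plus noise.
sal_av = 0.7 * sal_v + 0.3 * sal_a + 0.01 * rng.standard_normal(n_regions)

# Design matrix with an intercept term, fitted by ordinary least squares.
X = np.column_stack([np.ones(n_regions), sal_v, sal_a])
coef, *_ = np.linalg.lstsq(X, sal_av, rcond=None)
intercept, w_v, w_a = coef
print(w_v, w_a)  # recovered weights should be close to 0.7 and 0.3
```

A fit like this quantifies how strongly each unimodal saliency map contributes to fixation behaviour in the bimodal condition.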
Pages: 17