Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory

Cited by: 21
Authors
Coco, Moreno I. [1 ]
Keller, Frank [2 ]
Malcolm, George L. [3 ]
Affiliations
[1] Univ Lisbon, Dept Psychol, Alameda Univ, P-1649013 Lisbon, Portugal
[2] Univ Edinburgh, Sch Informat, Edinburgh, Midlothian, Scotland
[3] George Washington Univ, Dept Psychol, Washington, DC 20052 USA
Keywords
Anticipation in language processing; Contextual guidance; Visual world; Blank screen paradigm; Eye-tracking; MEDIATED EYE-MOVEMENTS; LANGUAGE COMPREHENSION; TIME-COURSE; PREDICTION; INTEGRATION; GUIDANCE; INFORMATION; ACTIVATION; MECHANISMS; ATTENTION
DOI
10.1111/cogs.12313
Chinese Library Classification: B84 [Psychology]
Subject Classification Code: 04; 0402
Abstract
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices.
Pages: 1995-2024
Number of pages: 30