How Visual and Semantic Information Influence Learning in Familiar Contexts

Times Cited: 13
Authors
Goujon, Annabelle [1 ,2 ]
Brockmole, James R. [3 ]
Ehinger, Krista A. [4 ]
Affiliations
[1] LPC CNRS, F-13331 Marseille 3, France
[2] Univ Provence, Ctr St Charles, F-13331 Marseille 3, France
[3] Univ Notre Dame, Notre Dame, IN 46556 USA
[4] MIT, Cambridge, MA 02139 USA
Keywords
contextual cuing; semantic memory; visual complexity; eye movements; color; REAL-WORLD SCENES; GLOBAL FEATURES; INSTANCE THEORY; ATTENTION; IMPLICIT; SEARCH; MEMORY; RECOGNITION; GUIDANCE; CONFIGURATION;
DOI
10.1037/a0028126
CLC Classification
B84 [Psychology]
Discipline Codes
04; 0402
Abstract
Previous research using the contextual cuing paradigm has revealed both quantitative and qualitative differences in learning depending on whether repeated contexts are defined by letter arrays or real-world scenes. To clarify the relative contributions of visual features and semantic information likely to account for such differences, the typical contextual cuing procedure was adapted to use meaningless but nevertheless visually complex images. Reaction time and eye movement data show that, like scenes, such repeated contexts can trigger large, stable, and explicit cuing effects, and that those effects result from facilitated attentional guidance. Like simpler stimulus arrays, however, those effects were impaired by a sudden change of a repeated image's color scheme at the end of the learning phase (Experiment 1), or when the repeated images were presented in a different and unique color scheme on each presentation (Experiment 2). In both cases, search was driven by explicit memory. Collectively, these results suggest that semantic information is not required for conscious awareness of context-target covariation, but that it plays a primary role in overcoming variability in specific features within familiar displays.
Pages: 1315-1327
Page Count: 13
Related Papers (50 items)
  • [1] Unfamiliar Contexts Compared to Familiar Contexts Impair Learning in Humans
    Asfestani, Marjan Alizadeh
    Nagel, Juliane
    Beer, Sina
    Nikpourian, Ghazaleh
    Born, Jan
    Feld, Gordon B.
    [J]. COLLABRA-PSYCHOLOGY, 2023, 9 (01)
  • [2] Visual search and eye movements in novel and familiar contexts
    McDermott, Kyle
    Mulligan, Jeffrey B.
    Bebis, George
    Webster, Michael A.
    [J]. HUMAN VISION AND ELECTRONIC IMAGING XI, 2006, 6057
  • [3] How emotion is learned: Semantic learning of novel words in emotional contexts
    Snefjella, Bryor
    Lana, Nadia
    Kuperman, Victor
    [J]. JOURNAL OF MEMORY AND LANGUAGE, 2020, 115
  • [4] Visual Word Disambiguation by Semantic Contexts
    Su, Yu
    Jurie, Frederic
    [J]. 2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2011, : 311 - 318
  • [5] Shared neural codes for visual and semantic information about familiar faces in a common representational space
    Visconti di Oleggio Castello, Matteo
    Haxby, James V.
    Gobbini, M. Ida
    [J]. PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (45)
  • [6] Visual Statistical Learning Based on the Perceptual and Semantic Information of Objects
    Otsuka, Sachio
    Nishiyama, Megumi
    Nakahara, Fumitaka
    Kawaguchi, Jun
    [J]. JOURNAL OF EXPERIMENTAL PSYCHOLOGY-LEARNING MEMORY AND COGNITION, 2013, 39 (01) : 196 - 207
  • [7] Context counts: How learners' contexts influence learning in a MOOC
    Hood, Nina
    Littlejohn, Allison
    Milligan, Colin
    [J]. COMPUTERS & EDUCATION, 2015, 91 : 83 - 91
  • [8] How familiar characters influence children's judgments about information and products
    Danovitch, Judith H.
    Mills, Candice M.
    [J]. JOURNAL OF EXPERIMENTAL CHILD PSYCHOLOGY, 2014, 128 : 1 - 20
  • [9] Contexts for Concepts: Information Modeling for Semantic Interoperability
    Luttighuis, Paul Oude
    Stap, Roel
    Quartel, Dick
    [J]. ENTERPRISE INTEROPERABILITY, 2011, 76 : 146 - +
  • [10] Collaborative Drama-Based EFL Learning in Familiar Contexts
    Zhang, Hao
    Hwang, Wu-Yuin
    Tseng, Shih-Ying
    Chen, Holly S. L.
    [J]. JOURNAL OF EDUCATIONAL COMPUTING RESEARCH, 2019, 57 (03) : 697 - 722