Feature saliency and feedback information interactively impact visual category learning

Cited by: 11
Authors
Hammer, Rubi [1 ,2 ,3 ]
Sloutsky, Vladimir [4 ,5 ]
Grill-Spector, Kalanit [1 ,6 ]
Affiliations
[1] Stanford Univ, Dept Psychol, Stanford, CA 94305 USA
[2] Northwestern Univ, Dept Commun Sci & Disorders, Evanston, IL 60208 USA
[3] Northwestern Univ, Interdept Neurosci Program, Evanston, IL 60208 USA
[4] Ohio State Univ, Dept Psychol, Columbus, OH 43210 USA
[5] Ohio State Univ, Ctr Cognit Sci, Columbus, OH 43210 USA
[6] Stanford Univ, Stanford Neurosci Inst, Stanford, CA 94305 USA
Source
FRONTIERS IN PSYCHOLOGY, 2015, Vol. 6
Keywords
category learning; categorization; attentional learning; perceptual learning; visual perception; feedback processing; feature saliency; perceptual expertise; SELECTIVE ATTENTION; WORKING-MEMORY; NEURAL BASIS; TOP-DOWN; VENTRAL ATTENTION; SPATIAL ATTENTION; HUMAN BRAIN; CATEGORIZATION; SIMILARITY; MODEL;
DOI
10.3389/fpsyg.2015.00074
Chinese Library Classification
B84 [Psychology];
Discipline classification code
04 ; 0402 ;
Abstract
Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object's features most relevant for categorization while 'filtering out' irrelevant features. When features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information, non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were ultimately able to attain the same performance as in the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency, mid-information feedback task. We suggest that such low-saliency, mid-information learning scenarios are characterized by a 'cognitive loop paradox' in which two interdependent learning processes have to take place simultaneously.
Pages: 15
Related papers
50 records
  • [1] Impact of feature saliency on visual category learning
    Hammer, Rubi
    FRONTIERS IN PSYCHOLOGY, 2015, 6
  • [2] Semisupervised category learning: The impact of feedback in learning the information-integration task
    Vandist, Katleen
    De Schryver, Maarten
    Rosseel, Yves
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2009, 71 (02) : 328 - 341
  • [4] Retrieval of Exemplar and Feature Information in Category Learning
    Hurwitz, JB
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-LEARNING MEMORY AND COGNITION, 1994, 20 (04) : 887 - 903
  • [5] Unifying Visual Saliency with HOG Feature Learning for Traffic Sign Detection
    Xie, Yuan
    Liu, Li-Feng
    Li, Cui-Hua
    Qu, Yan-Yun
    2009 IEEE INTELLIGENT VEHICLES SYMPOSIUM, VOLS 1 AND 2, 2009, : 24 - 29
  • [6] Visual Saliency of Character Feature in an Image
    Nagashima, Taira
    Takano, Hironobu
    Nakamura, Kiyomi
    2015 4TH INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION ICIEV 15, 2015,
  • [7] Visual Saliency with Side Information
    Jiang, Wei
    Xie, Lexing
    Chang, Shih-Fu
    2009 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS 1- 8, PROCEEDINGS, 2009, : 1765 - +
  • [8] Separating Inference from Feature Learning in Deep Unsupervised Visual Saliency Estimation
    Taille, Bruno
    Ortiz, Michael Garcia
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 1195 - 1201
  • [9] Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost
    Zhao, Qi
    Koch, Christof
    JOURNAL OF VISION, 2012, 12 (06):
  • [10] Few-shot learning with saliency maps as additional visual information
    Abdelaziz, Mounir
    Zhang, Zuping
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (07) : 10491 - 10508