Feature saliency and feedback information interactively impact visual category learning

Cited by: 11
Authors
Hammer, Rubi [1 ,2 ,3 ]
Sloutsky, Vladimir [4 ,5 ]
Grill-Spector, Kalanit [1 ,6 ]
Affiliations
[1] Stanford Univ, Dept Psychol, Stanford, CA 94305 USA
[2] Northwestern Univ, Dept Commun Sci & Disorders, Evanston, IL 60208 USA
[3] Northwestern Univ, Interdept Neurosci Program, Evanston, IL 60208 USA
[4] Ohio State Univ, Dept Psychol, Columbus, OH 43210 USA
[5] Ohio State Univ, Ctr Cognit Sci, Columbus, OH 43210 USA
[6] Stanford Univ, Stanford Neurosci Inst, Stanford, CA 94305 USA
Source
FRONTIERS IN PSYCHOLOGY | 2015, Vol. 6
Keywords
category learning; categorization; attentional learning; perceptual learning; visual perception; feedback processing; feature saliency; perceptual expertise; SELECTIVE ATTENTION; WORKING-MEMORY; NEURAL BASIS; TOP-DOWN; VENTRAL ATTENTION; SPATIAL ATTENTION; HUMAN BRAIN; CATEGORIZATION; SIMILARITY; MODEL;
DOI
10.3389/fpsyg.2015.00074
Chinese Library Classification
B84 [Psychology]
Discipline Classification Code
04; 0402
Abstract
Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object's features most relevant for categorization while 'filtering out' irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information, non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a 'cognitive loop paradox' where two interdependent learning processes have to take place simultaneously.
Pages: 15