THE VISUAL ENCODING OF TOOL-OBJECT AFFORDANCES

Cited by: 23
Authors
Natraj, N. [1 ]
Pella, Y. M. [1 ]
Borghi, A. M. [2 ,3 ]
Wheaton, L. A. [1 ]
Affiliations
[1] Georgia Inst Technol, Sch Appl Physiol, Coll Sci, Cognit Motor Control Lab, Atlanta, GA 30332 USA
[2] Univ Bologna, Dept Psychol, I-40126 Bologna, Italy
[3] Italian Natl Res Council, Inst Cognit Sci & Technol, I-00185 Rome, Italy
Keywords
affordances; tool; action; eye movement; perception; pattern recognition; EYE-MOVEMENTS; CORTICAL CONTROL; CONSEQUENCES; ACTIVATION; COMPONENTS; ATTENTION; DYNAMICS; SACCADES; SYSTEM; HAND
DOI
10.1016/j.neuroscience.2015.09.060
Chinese Library Classification
Q189 [Neuroscience]
Discipline code
071006
Abstract
The perception of tool-object pairs involves understanding their action-relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool-object affordances. Eye-movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool-object images across three contexts: correct (e.g. hammer-nail), incorrect (e.g. hammer-paper), and spatial/ambiguous (e.g. hammer-wood), and three grasp-types: no hand, functional grasp-posture (grasp hammer-handle), and non-functional/manipulative grasp-posture (grasp hammer-head). There were three areas of interest (AOIs): the object (nail), the operant tool-end (hammer-head), and the graspable tool-end (hammer-handle). Participants passively evaluated whether tool-object pairs were functionally correct/incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially across the correct and spatial tool-object contexts and to a lesser extent within the incorrect tool-object context. The grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp-affordances even though the task required evaluating tool-object context. Participants also primarily focused on the object and the operant tool-end and sparsely attended to the graspable tool-end, even in images with functional grasp-postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action priming effect wherein the observer may be evaluating how the tool engages with the object. Unlike the functional grasp-posture, the manipulative grasp-posture caused the greatest disruption of the object-oriented priming effect, ostensibly because it does not afford tool-object action due to its non-functional interaction with the operant tool-end that actually engages with the object (e.g., hammer-head to nail).
The enhanced attention towards the manipulative grasp-posture may serve to encode grasp-intent. Results here shed new light on how an observer gathers action-information when evaluating static tool-object scenes and reveal how contextual and grasp-specific affordances directly modulate visuospatial attention. (C) 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
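The clustering analysis summarized above compares gaze scanpaths, i.e. sequences of AOI fixations, across conditions. The paper's actual pipeline is not reproduced here, but a common way to compare AOI-coded scanpaths is string edit distance. A minimal illustrative sketch, assuming hypothetical single-letter AOI codes (O = object, T = operant tool-end, G = graspable tool-end) and made-up example sequences:

```python
from itertools import combinations

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two AOI-coded scanpath strings."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # distances for the empty prefix of `a`
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # `prev` holds the diagonal cell
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                        # deletion
                        dp[j - 1] + 1,                    # insertion
                        prev + (a[i - 1] != b[j - 1]))    # substitution
            prev = cur
    return dp[n]

# Hypothetical scanpaths per grasp condition (codes and sequences are
# illustrative only, not data from the study)
scanpaths = {
    "no_hand":      "OOTOT",  # object-weighted gaze
    "functional":   "OTOTG",
    "manipulative": "TGTGO",  # attention drawn toward the grasp
}

# Pairwise length-normalized distances; a clustering step would group
# conditions with small mutual distances
for (c1, s1), (c2, s2) in combinations(scanpaths.items(), 2):
    d = edit_distance(s1, s2) / max(len(s1), len(s2))
    print(f"{c1} vs {c2}: {d:.2f}")
```

Length normalization keeps long scanpaths from dominating; a hierarchical clustering of the resulting distance matrix (e.g. via `scipy.cluster.hierarchy`) would yield condition groupings analogous to the grasp-specific clusters reported in the abstract.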
Pages: 512-527 (16 pages)
Related papers (50 total)
  • [1] Flexible constraint hierarchy during the visual encoding of tool-object interactions
    Bayani, Kristel Yu Tiamco
    Natraj, Nikhilesh
    Gale, Mary Kate
    Temples, Danielle
    Atawala, Neel
    Wheaton, Lewis A.
    [J]. EUROPEAN JOURNAL OF NEUROSCIENCE, 2021, 54 (07) : 6520 - 6532
  • [2] Visual object affordances: Object orientation
    Symes, Ed
    Ellis, Rob
    Tucker, Mike
    [J]. ACTA PSYCHOLOGICA, 2007, 124 (02) : 238 - 255
  • [3] Visual affordances and object selection
    Riddoch, MJ
    Humphreys, GW
    Edwards, MG
    [J]. CONTROL OF COGNITIVE PROCESSES: ATTENTION AND PERFORMANCE XVIII, 2000, : 603 - 625
  • [4] Context and hand posture modulate the neural dynamics of tool-object perception
    Natraj, Nikhilesh
    Poole, Victoria
    Mizelle, J. C.
    Flumini, Andrea
    Borghi, Anna M.
    Wheaton, Lewis A.
    [J]. NEUROPSYCHOLOGIA, 2013, 51 (03) : 506 - 519
  • [5] Mining Semantic Affordances of Visual Object Categories
    Chao, Yu-Wei
    Wang, Zhan
    Mihalcea, Rada
    Deng, Jia
    [J]. 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 4259 - 4267
  • [6] Integrating Object Affordances with Artificial Visual Attention
    Tuennermann, Jan
    Born, Christian
    Mertsching, Baerbel
    [J]. COMPUTER VISION - ECCV 2014 WORKSHOPS, PT II, 2015, 8926 : 427 - 437
  • [7] The role of action affordances in visual object recognition
    Helbig, H
    Graf, M
    Kiefer, M
    [J]. PERCEPTION, 2004, 33 : 75 - 76
  • [8] Neural activation for conceptual identification of correct versus incorrect tool-object pairs
    Mizelle, J. C.
    Wheaton, Lewis A.
    [J]. BRAIN RESEARCH, 2010, 1354 : 100 - 112
  • [9] Learning Intermediate Object Affordances: Towards the Development of a Tool Concept
    Goncalves, Afonso
    Abrantes, Joao
    Saponaro, Giovanni
    Jamone, Lorenzo
    Bernardino, Alexandre
    [J]. FOURTH JOINT IEEE INTERNATIONAL CONFERENCES ON DEVELOPMENT AND LEARNING AND EPIGENETIC ROBOTICS (IEEE ICDL-EPIROB 2014), 2014, : 482 - 488
  • [10] Learning Dexterous Grasping with Object-Centric Visual Affordances
    Mandikal, Priyanka
    Grauman, Kristen
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 6169 - 6176