Gaze strategies during visually-guided versus memory-guided grasping

Cited: 19
Authors
Prime, Steven L. [1 ]
Marotta, Jonathan J. [2 ]
Affiliations
[1] Victoria Univ Wellington, Sch Psychol, Wellington 6012, New Zealand
[2] Univ Manitoba, Dept Psychol, Winnipeg, MB R3T 2N2, Canada
Funding
Canadian Institutes of Health Research; Natural Sciences and Engineering Research Council of Canada;
Keywords
Visuomotor control; Delayed reaching; Memory; Sensorimotor; Gaze; EYE-HAND COORDINATION; VISUOMOTOR MEMORY; PARIETAL CORTEX; VISION; INFORMATION; FEEDBACK; DELAY; SIZE; PERCEPTION; PRECISION;
DOI
10.1007/s00221-012-3358-3
Chinese Library Classification
Q189 [Neuroscience];
Discipline code
071006 ;
Abstract
Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on memory to guide action; for example, remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show that subjects look towards the index-finger grasp position during visually-guided precision grasping. But where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or a memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting that gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the two-stream theory of vision: motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream.
Pages: 291 - 305
Page count: 15
Related Papers
50 records total
  • [21] Gaze allocation during visually guided manipulation
    Varela, J. I. Nunez
    Wyatt, J. L.
    PERCEPTION, 2012, 41 (10) : 1270 - 1270
  • [22] Visually and memory-guided grasping: Aperture shaping exhibits a time-dependent scaling to Weber's law
    Holmes, Scott A.
    Mulla, Ali
    Binsted, Gordon
    Heath, Matthew
    VISION RESEARCH, 2011, 51 (17) : 1941 - 1948
  • [23] A memory and visually-guided saccade paradigm with increased memory load using fMRI
    Fischer, V.
    Raabe, M.
    Bernhardt, D.
    Greenlee, M. W.
    PERCEPTION, 2008, 37 : 67 - 67
  • [24] RECOVERING HEADING FOR VISUALLY-GUIDED NAVIGATION
    HILDRETH, EC
    VISION RESEARCH, 1992, 32 (06) : 1177 - 1192
  • [25] Visually-Guided Adaptive Robot (ViGuAR)
    Livitz, Gennady
    Ames, Heather
    Chandler, Ben
    Gorchetchnikov, Anatoli
    Leveille, Jasmin
    Vasilkoski, Zlatko
    Versace, Massimiliano
    Mingolla, Ennio
    Snider, Greg
    Amerson, Rick
    Carter, Dick
    Abdalla, Hisham
    Qureshi, Muhammad Shakeel
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 2944 - 2951
  • [26] Consciousness and choking in visually-guided actions
    Johan M. Koedijker
    David L. Mann
    Phenomenology and the Cognitive Sciences, 2015, 14 : 333 - 348
  • [27] Kinematics of Visually-Guided Eye Movements
    Hess, Bernhard J. M.
    Thomassen, Jakob S.
    PLOS ONE, 2014, 9 (04)
  • [28] A cross-sectional developmental examination of the SNARC effect in a visually-guided grasping task
    Mills, Kelly J.
    Rousseau, Ben R.
    Gonzalez, Claudia L. R.
    NEUROPSYCHOLOGIA, 2014, 58 (01) : 99 - 106
  • [29] Visually guided object grasping
    Horaud, R
    Dornaika, F
    Espiau, B
    IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, 1998, 14 (04): 525 - 532
  • [30] Prediction error in implicit adaptation during visually- and memory-guided reaching tasks
    Numasawa, Kosuke
    Miyamoto, Takeshi
    Kizuka, Tomohiro
    Ono, Seiji
    SCIENTIFIC REPORTS, 2024, 14 (01)