Learning visuomotor transformations for gaze-control and grasping

Cited by: 0
Authors
Heiko Hoffmann
Wolfram Schenck
Ralf Möller
Institutions
[1] Cognitive Robotics, Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences
[2] Computer Engineering Group, Faculty of Technology, Bielefeld University
Source
Biological Cybernetics | 2005, Vol. 93
Keywords
Target Position; Combine Model; Density Model; Unsupervised Learning; Motor Command;
DOI
Not available
Abstract
To reach for and grasp an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target’s position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
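The abstract's description of the arm controller, which completes a partially given sensorimotor pattern under a learned density model, can be illustrated with a small sketch. The code below fits a Gaussian mixture to joint sensor-motor samples and conditions it on the sensory part to recover a motor posture; the mixture model, the 3-D/6-D dimensions, and the function complete_motor are illustrative assumptions, not the authors' actual density model.

```python
# A minimal sketch of sensorimotor pattern completion with a density model.
# Assumptions: a Gaussian mixture over joint (sensor, motor) vectors stands in
# for the paper's density model; dimensions and names are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

def complete_motor(gmm, s, sensor_dim):
    """Condition the fitted mixture on the sensory part s and return E[motor | s]."""
    K = gmm.n_components
    resp = np.empty(K)
    cond_means = np.empty((K, gmm.means_.shape[1] - sensor_dim))
    for k in range(K):
        mu_s = gmm.means_[k, :sensor_dim]
        mu_m = gmm.means_[k, sensor_dim:]
        S_ss = gmm.covariances_[k][:sensor_dim, :sensor_dim]
        S_ms = gmm.covariances_[k][sensor_dim:, :sensor_dim]
        diff = s - mu_s
        # Gaussian conditional mean of the motor block given the sensor block
        cond_means[k] = mu_m + S_ms @ np.linalg.solve(S_ss, diff)
        # Unnormalised log-responsibility of component k for the sensory input
        resp[k] = (np.log(gmm.weights_[k])
                   - 0.5 * (diff @ np.linalg.solve(S_ss, diff)
                            + np.linalg.slogdet(S_ss)[1]))
    resp = np.exp(resp - resp.max())
    resp /= resp.sum()
    return resp @ cond_means

# Toy usage: fit the density on joint samples gathered during exploratory
# movements (here 3-D target information and 6-D arm posture), then complete
# the motor part for a given sensory input.
data = np.random.rand(500, 3 + 6)
gmm = GaussianMixture(n_components=8, covariance_type='full').fit(data)
posture = complete_motor(gmm, data[0, :3], sensor_dim=3)
```

One caveat of this sketch: averaging the conditional means over all components can blur between redundant arm postures for the same target; picking only the most responsible component is a simple way to keep a single consistent solution, which is the kind of redundancy ambiguity the abstract says the controller must handle.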
Pages: 119-130
Page count: 11
Related papers
50 items in total
• [31] Carey DP, Servos P. Is attention necessary for visuomotor transformations? Behavioral and Brain Sciences, 1992, 15(4): 723-723.
• [32] Buneo CA, Jarvis MR, Batista AP, Andersen RA. Direct visuomotor transformations for reaching. Nature, 2002, 416: 632-636.
• [33] Zheng Y, Park S, Zhang X, De Mello S, Hilliges O. Self-Learning Transformations for Improving Gaze and Head Redirection. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, 33.
• [34] de Brouwer AJ, Gallivan JP, Flanagan JR. Visuomotor feedback gains are modulated by gaze position. Journal of Neurophysiology, 2018, 120(5): 2522-2531.
• [35] Huston SJ, Krapp HG. Visuomotor transformation in the fly gaze stabilization system. PLoS Biology, 2008, 6(7): 1468-1478.
• [36] Meek BP, Shelton P, Marotta JJ. Posterior cortical atrophy: visuomotor deficits in reaching and grasping. Frontiers in Human Neuroscience, 2013, 7.
• [37] Murata A, Fadiga L, Fogassi L, Gallese V, Rizzolatti G. Visuomotor properties of grasping neurons of inferior area 6. Pflügers Archiv - European Journal of Physiology, 1997, 434(3): 73-73.
• [38] Kerzel M, Spisak J, Strahl E, Wermter S. Neuro-Genetic Visuomotor Architecture for Robotic Grasping. Artificial Neural Networks and Machine Learning (ICANN 2020), Part II, 2020, 12397: 533-545.
• [39] Witchawanitchanun P, Yucel Z, Monden A, Leelaprute P. Effect of Grasping Uniformity on Estimation of Grasping Region from Gaze Data. Proceedings of the 7th International Conference on Human-Agent Interaction (HAI'19), 2019: 265-267.
• [40] Kerzel M, Abawi F, Eppe M, Wermter S. Enhancing a Neurocognitive Shared Visuomotor Model for Object Identification, Localization, and Grasping With Learning From Auxiliary Tasks. IEEE Transactions on Cognitive and Developmental Systems, 2022, 14(4): 1331-1343.