Learning visuomotor transformations for gaze-control and grasping

Cited by: 0
Authors
Heiko Hoffmann
Wolfram Schenck
Ralf Möller
Affiliations
[1] Cognitive Robotics, Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences
[2] Computer Engineering Group, Faculty of Technology, Bielefeld University
Source
Biological Cybernetics | 2005, Vol. 93
Keywords
Target Position; Combine Model; Density Model; Unsupervised Learning; Motor Command
DOI: N/A
Abstract
To reach for and grasp an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which likely coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For training the saccade controller, we suggest a novel staged learning method that does not require a teacher to provide the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity arising from the set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
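
The arm controller's mapping works by pattern completion on a learned sensorimotor density: the sensor part of a pattern is clamped, and the motor part is inferred from the density model. The paper builds this density from a mixture of local principal-component units trained without a teacher; the sketch below substitutes an off-the-shelf Gaussian mixture and a toy planar two-link arm to illustrate the same completion step. All names, dimensions, and the kinematics here are illustrative assumptions, not the authors' implementation.

# Sketch: sensorimotor pattern completion with a density model (Python).
# The paper uses a mixture of local-PCA units; a Gaussian mixture stands
# in for it here. The toy kinematics and all names are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def forward_kinematics(q):
    # Planar two-link arm with unit link lengths: joint angles -> endpoint.
    x = np.cos(q[:, 0]) + np.cos(q[:, 0] + q[:, 1])
    y = np.sin(q[:, 0]) + np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Motor babbling: random postures and the sensor readings they produce.
q = rng.uniform([0.0, 0.2], [np.pi, np.pi - 0.2], size=(5000, 2))
s = forward_kinematics(q)
data = np.hstack([s, q])  # joint (sensor, motor) patterns

# Unsupervised learning: fit a density model over the combined space.
gmm = GaussianMixture(n_components=40, covariance_type="full",
                      random_state=0).fit(data)

def complete(sensor, ds=2):
    # Complete a partial pattern: clamp the sensor part, return the
    # conditional mean of the motor part under the mixture density.
    mu_s, mu_m = gmm.means_[:, :ds], gmm.means_[:, ds:]
    S_ss = gmm.covariances_[:, :ds, :ds]
    S_ms = gmm.covariances_[:, ds:, :ds]
    post = np.empty(len(gmm.weights_))
    cond = np.empty_like(mu_m)
    for k in range(len(post)):
        diff = sensor - mu_s[k]
        inv = np.linalg.inv(S_ss[k])
        # Responsibility of component k for the observed sensor part.
        post[k] = gmm.weights_[k] * np.exp(-0.5 * diff @ inv @ diff) / \
                  np.sqrt(np.linalg.det(2.0 * np.pi * S_ss[k]))
        cond[k] = mu_m[k] + S_ms[k] @ inv @ diff
    post /= post.sum()
    return post @ cond  # expected motor command

target = np.array([1.2, 0.8])  # desired endpoint position
print("inferred joint angles:", complete(target))

Note that taking the responsibility-weighted mean blends redundant inverse solutions; picking only the most responsible component (argmax over post) avoids averaging incompatible postures, which is the kind of ambiguity the paper's controller is designed to cope with.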
Pages: 119 - 130
Number of pages: 12