Visually-guided grasping while walking on a humanoid robot

Cited by: 40
Authors
Mansard, Nicolas [1 ]
Stasse, Olivier [2 ]
Chaumette, Francois [1 ]
Yokoi, Kazuhito [2 ]
Affiliations
[1] IRISA INRIA Rennes, Rennes, France
[2] CNRS, JRL, ISRI AIST, Tsukuba, Ibaraki, Japan
Keywords
DOI
10.1109/ROBOT.2007.363934
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In this paper, we apply a general framework for building complex whole-body control for highly redundant robots, and we propose to implement it for visually-guided grasping while walking on a humanoid robot. The key idea is to divide the control into several sensor-based control tasks that are executed simultaneously by a general structure called the stack of tasks. This structure provides very simple access to task sequencing and can be used for task-level control. The framework was applied to a visual servoing task: the robot walks along a planned path, keeping the specified object in the middle of its field of view, and finally, when it is close enough, grasps the object while walking.
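The stack of tasks described in the abstract resolves several prioritized, sensor-based tasks by letting each lower-priority task act only in the null space of the tasks above it. The sketch below is a minimal NumPy illustration of that classical prioritization scheme, assuming a velocity-resolved kinematic controller; the function name, gain, and placeholder Jacobians are hypothetical and are not taken from the paper.

```python
import numpy as np

def stack_of_tasks(jacobians, task_errors, gain=1.0):
    """Prioritized velocity resolution: each task in the stack is corrected
    only inside the null space left over by the higher-priority tasks.
    Illustrative sketch only; names and gains are hypothetical."""
    n_dof = jacobians[0].shape[1]      # number of robot degrees of freedom
    qdot = np.zeros(n_dof)             # accumulated joint-velocity command
    P = np.eye(n_dof)                  # projector onto the remaining null space
    for J, e in zip(jacobians, task_errors):
        JP = J @ P
        JP_pinv = np.linalg.pinv(JP, rcond=1e-4)
        # drive this task's error to zero without disturbing tasks already served
        qdot = qdot + JP_pinv @ (-gain * e - J @ qdot)
        # shrink the null space available to the next (lower-priority) task
        P = P - JP_pinv @ JP
    return qdot

# Example with two stacked tasks on a 30-DoF model (all values are placeholders):
J_gaze, e_gaze = np.random.randn(2, 30), np.array([0.05, -0.02])   # keep the object centred in the image
J_hand, e_hand = np.random.randn(6, 30), 0.1 * np.random.randn(6)  # bring the hand to the object
qdot = stack_of_tasks([J_gaze, J_hand], [e_gaze, e_hand])
```

Under this scheme, task sequencing amounts to pushing a new task onto the stack (for example, adding the grasping task once the object is close enough) or removing one, without rewriting the rest of the controller.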
Pages: 3041+
Page count: 3
Related papers (50 items)
  • [1] Visually-Guided Adaptive Robot (ViGuAR)
    Livitz, Gennady
    Ames, Heather
    Chandler, Ben
    Gorchetchnikov, Anatoli
    Leveille, Jasmin
    Vasilkoski, Zlatko
    Versace, Massimiliano
    Mingolla, Ennio
    Snider, Greg
    Amerson, Rick
    Carter, Dick
    Abdalla, Hisham
    Qureshi, Muhammad Shakeel
    [J]. 2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 2944 - 2951
  • [2] Gaze strategies during visually-guided versus memory-guided grasping
    Prime, Steven L.
    Marotta, Jonathan J.
    [J]. EXPERIMENTAL BRAIN RESEARCH, 2013, 225 (02) : 291 - 305
  • [3] Gaze strategies during visually-guided versus memory-guided grasping
    Steven L. Prime
    Jonathan J. Marotta
    [J]. Experimental Brain Research, 2013, 225 : 291 - 305
  • [4] PLAYBOT - A visually-guided robot for physically disabled children
    Tsotsos, JK
    Verghese, G
    Dickinson, S
    Jenkin, M
    Jepson, A
    Milios, E
    Nuflo, F
    Stevenson, S
    Black, M
    Metaxas, D
    Culhane, S
    Ye, Y
    Mann, R
    [J]. IMAGE AND VISION COMPUTING, 1998, 16 (04) : 275 - 292
  • [5] Natural landmark detection for visually-guided robot navigation
    Celaya, Enric
    Albarral, Jose-Luis
    Jimenez, Pablo
    Torras, Carme
[J]. AI*IA 2007: ARTIFICIAL INTELLIGENCE AND HUMAN-ORIENTED COMPUTING, 2007, 4733 : 555 - 566
  • [6] Spatial Attention Biases in Visually-Guided Grasping Amongst Healthy Adults
    de Bruin, Natalie
    Gonzalez, Claudia
[J]. CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY-REVUE CANADIENNE DE PSYCHOLOGIE EXPERIMENTALE, 2013, 67 (04) : 305 - 305
  • [7] Erratum to: Gaze strategies during visually-guided versus memory-guided grasping
    Steven L. Prime
    Jonathan J. Marotta
    [J]. Experimental Brain Research, 2013, 225 (2) : 307 - 307
  • [8] Visually-guided robot navigation:: From artificial to natural landmarks
    Celaya, Enric
    Albarral, Jose-Luis
    Jimenez, Pablo
    Torras, Carme
    [J]. FIELD AND SERVICE ROBOTICS: RESULTS OF THE 6TH INTERNATIONAL CONFERENCE, 2008, 42 : 287 - 296
  • [9] Hand-Eye Calibration in Visually-Guided Robot Grinding
    Li, Wen-Long
    Xie, He
    Zhang, Gang
    Yan, Si-Jie
    Yin, Zhou-Ping
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2016, 46 (11) : 2634 - 2642
  • [10] A cross-sectional developmental examination of the SNARC effect in a visually-guided grasping task
    Mills, Kelly J.
    Rousseau, Ben R.
    Gonzalez, Claudia L. R.
    [J]. NEUROPSYCHOLOGIA, 2014, 58 (01) : 99 - 106