Situated robot learning for multi-modal instruction and imitation of grasping

Cited by: 45
Authors
Steil, M [1 ]
Röthling, F [1 ]
Haschke, R [1 ]
Ritter, H [1 ]
Affiliations
[1] Univ Bielefeld, Fac Technol, Neuroinformat Grp, D-33501 Bielefeld, Germany
Keywords
interactive demonstration; imitation; learning; architecture; grasping
DOI
10.1016/j.robot.2004.03.007
CLC classification number
TP [Automation Technology; Computer Technology]
Subject classification code
0812
Abstract
A key prerequisite for making user instruction of work tasks by interactive demonstration effective and convenient is situated multi-modal interaction that enhances robot learning beyond simple low-level skill acquisition. We report the status of the Bielefeld GRAVIS robot system, which combines visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation to allow multi-modal, task-oriented instruction. On this platform, we discuss the essential role of learning for the robust functioning of the robot and sketch the concept of an integrated architecture for situated learning at the system level. Its long-term goal is to demonstrate speech-supported imitation learning of robot actions. We describe the current state of its realization, which enables imitation of human hand postures for flexible grasping, and give quantitative results for grasping a broad range of everyday objects. (C) 2004 Elsevier B.V. All rights reserved.
Pages: 129–141 (13 pages)
Related papers
(50 total)
  • [21] A robot grasping detection network based on flexible selection of multi-modal feature fusion structure
    Wang, Yuhan
    Guo, Zhibo
    Chen, Yu
    Guo, Chaiqi
    Xia, Meizhen
    Qi, Tingyue
    [J]. APPLIED INTELLIGENCE, 2024, 54 (06) : 5044 - 5061
  • [22] Interactive multi-modal robot programming
    Iba, S
    Paredis, CJJ
    Khosla, PK
    [J]. 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, 2002, : 161 - 168
  • [23] Multi-modal Controls of A Smart Robot
    Mishra, Anurag
    Makula, Pooja
    Kumar, Akshay
    Karan, Krit
    Mittal, V. K.
    [J]. 2015 ANNUAL IEEE INDIA CONFERENCE (INDICON), 2015,
  • [24] Interactive multi-modal robot programming
    Iba, S
    Paredis, CJJ
    Khosla, PK
    [J]. EXPERIMENTAL ROBOTICS IX, 2006, 21 : 503 - +
  • [25] Learning Probabilistic Multi-Modal Actor Models for Vision-Based Robotic Grasping
    Yan, Mengyuan
    Li, Adrian
    Kalakrishnan, Mrinal
    Pastor, Peter
    [J]. 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 4804 - 4810
  • [26] Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
    Hausman, Karol
    Chebotar, Yevgen
    Schaal, Stefan
    Sukhatme, Gaurav
    Lim, Joseph J.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [27] Triple-GAIL: A Multi-Modal Imitation Learning Framework with Generative Adversarial Nets
    Fei, Cong
    Wang, Bin
    Zhuang, Yuzheng
    Zhang, Zongzhang
    Hao, Jianye
    Zhang, Hongbo
    Ji, Xuewu
    Liu, Wulong
    [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2929 - 2935
  • [28] Unsupervised Multi-modal Learning
    Iqbal, Mohammed Shameer
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE (AI 2015), 2015, 9091 : 343 - 346
  • [29] Learning Multi-modal Similarity
    McFee, Brian
    Lanckriet, Gert
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 491 - 523
  • [30] A bioinspired multi-modal flying and walking robot
    Daler, Ludovic
    Mintchev, Stefano
    Stefanini, Cesare
    Floreano, Dario
    [J]. BIOINSPIRATION & BIOMIMETICS, 2015, 10 (01)