Situated robot learning for multi-modal instruction and imitation of grasping

Cited by: 44
Authors
Steil, JJ [1]
Röthling, F [1]
Haschke, R [1]
Ritter, H [1]
Affiliation
[1] Univ Bielefeld, Fac Technol, Neuroinformat Grp, D-33501 Bielefeld, Germany
Keywords
interactive demonstration; imitation; learning; architecture; grasping;
DOI
10.1016/j.robot.2004.03.007
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline classification code
0812
Abstract
A key prerequisite to make user instruction of work tasks by interactive demonstration effective and convenient is situated multi-modal interaction aiming at an enhancement of robot learning beyond simple low-level skill acquisition. We report the status of the Bielefeld GRAVIS-robot system that combines visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation to allow multi-modal task-oriented instructions. With respect to this platform, we discuss the essential role of learning for robust functioning of the robot and sketch the concept of an integrated architecture for situated learning on the system level. It has the long-term goal to demonstrate speech-supported imitation learning of robot actions. We describe the current state of its realization to enable imitation of human hand postures for flexible grasping and give quantitative results for grasping a broad range of everyday objects. (C) 2004 Elsevier B.V. All rights reserved.
Pages: 129-141
Page count: 13
Related Papers
50 records
  • [1] Multi-Modal Geometric Learning for Grasping and Manipulation
    Watkins-Valls, David
    Varley, Jacob
    Allen, Peter
    [J]. 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 7339 - 7345
  • [2] Burn-In Demonstrations for Multi-Modal Imitation Learning
    Kuefler, Alex
    Kochenderfer, Mykel J.
    [J]. PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS' 18), 2018, : 1071 - 1078
  • [3] Multi-Modal Imitation Learning Method with Cosine Similarity
    Hao, Shaopu
    Liu, Quan
    Xu, Ping'an
    Zhang, Lihua
    Huang, Zhigang
    [J]. Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2023, 60 (06): 1358 - 1372
  • [4] Multi-Modal Transfer Learning for Grasping Transparent and Specular Objects
    Weng, Thomas
    Pallankize, Amith
    Tang, Yimin
    Kroemer, Oliver
    Held, David
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (03) : 3796 - 3803
  • [5] BAGAIL: Multi-modal imitation learning from imbalanced demonstrations
    Gu, Sijia
    Zhu, Fei
    [J]. NEURAL NETWORKS, 2024, 174
  • [6] Multi-modal human-machine communication for instructing robot grasping tasks
    McGuire, P
    Fritsch, J
    Steil, JJ
    Röthling, F
    Fink, GA
    Wachsmuth, S
    Sagerer, G
    Ritter, H
    [J]. 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS, 2002, : 1082 - 1088
  • [7] Instruction-ViT: Multi-modal prompts for instruction learning in vision transformer
    Xiao, Zhenxiang
    Chen, Yuzhong
    Yao, Junjie
    Zhang, Lu
    Liu, Zhengliang
    Wu, Zihao
    Yu, Xiaowei
    Pan, Yi
    Zhao, Lin
    Ma, Chong
    Liu, Xinyu
    Liu, Wei
    Li, Xiang
    Yuan, Yixuan
    Shen, Dinggang
    Zhu, Dajiang
    Yao, Dezhong
    Liu, Tianming
    Jiang, Xi
    [J]. INFORMATION FUSION, 2024, 104
  • [8] Learning to Detect Multi-Modal Grasps for Dexterous Grasping in Dense Clutter
    Corsaro, Matt
    Tellex, Stefanie
    Konidaris, George
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 4647 - 4653
  • [9] Multi-modal Robot Apprenticeship: Imitation Learning using Linearly Decayed DMP plus in a Human-Robot Dialogue System
    Wu, Yan
    Wang, Ruohan
    D'Haro, Luis F.
    Banchs, Rafael E.
    Tee, Keng Peng
    [J]. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 8582 - 8588
  • [10] Online Multi-modal Imitation Learning via Lifelong Intention Encoding
    Piao, Songhao
    Huang, Yue
    Liu, Huaping
    [J]. 2019 IEEE 4TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2019), 2019, : 786 - 792