Training Robots Without Robots: Deep Imitation Learning for Master-to-Robot Policy Transfer

Cited by: 7
Authors
Kim, Heecheol [1 ]
Ohmura, Yoshiyuki [1 ]
Nagakubo, Akihiko [2 ]
Kuniyoshi, Yasuo [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Informat Sci & Technol, Lab Intelligent Syst & Informat, Bunkyo ku, Tokyo 1130023, Japan
[2] Natl Inst Adv Ind Sci & Technol, Artificial Intelligence Res Ctr, Tsukuba, Ibaraki 3058568, Japan
Keywords
Imitation learning; deep learning in grasping and manipulation; dual arm manipulation; force and tactile sensing; MOVEMENTS; TASK; EYE;
DOI
10.1109/LRA.2023.3262423
Chinese Library Classification
TP24 [Robotics];
Subject Classification Codes
080202; 1405;
Abstract
Deep imitation learning is promising for robot manipulation because it requires only demonstration samples. In this study, deep imitation learning is applied to tasks that require force feedback. However, existing demonstration methods have deficiencies: bilateral teleoperation requires a complex control scheme and is expensive, and kinesthetic teaching suffers from visual distraction caused by human intervention. This research proposes a new master-to-robot (M2R) policy transfer system that does not require a robot for teaching force-feedback-based manipulation tasks. The human demonstrates a task directly using a controller that shares the kinematic parameters of the robot arm and uses the same end-effector, equipped with force/torque (F/T) sensors to measure force feedback. With this controller, the operator can feel force feedback without a bilateral system. The proposed method overcomes the domain gap between the master and the robot using gaze-based imitation learning and a simple calibration method. Furthermore, a Transformer is applied to infer the policy from F/T sensory input. The proposed system was evaluated on a bottle-cap-opening task that requires force feedback.
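The abstract describes the policy architecture only at a high level. The following minimal PyTorch sketch (not the authors' implementation) illustrates one plausible reading: a Transformer encoder that fuses a gaze-centered image feature with a short history of six-axis F/T readings and regresses an end-effector action. All module names, dimensionalities, and the action parameterization are assumptions made for illustration.

# Illustrative sketch only: a behavior-cloning policy that fuses a gaze-crop image
# feature with a history of force/torque (F/T) readings via a Transformer encoder.
import torch
import torch.nn as nn

class M2RPolicySketch(nn.Module):
    def __init__(self, ft_dim=6, img_feat_dim=128, d_model=64, n_heads=4,
                 n_layers=2, action_dim=7, history_len=10):
        super().__init__()
        # Project each modality into a shared token dimension.
        self.ft_proj = nn.Linear(ft_dim, d_model)         # six-axis F/T reading per step
        self.img_proj = nn.Linear(img_feat_dim, d_model)  # feature of the gaze-centered crop
        self.pos_emb = nn.Parameter(torch.zeros(1, history_len + 1, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Assumed action parameterization: end-effector delta pose plus gripper command.
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, ft_history, img_feat):
        # ft_history: (B, history_len, 6), img_feat: (B, img_feat_dim)
        tokens = torch.cat([self.img_proj(img_feat).unsqueeze(1),
                            self.ft_proj(ft_history)], dim=1) + self.pos_emb
        encoded = self.encoder(tokens)
        # Predict the next action from the image token's encoding.
        return self.action_head(encoded[:, 0])

if __name__ == "__main__":
    policy = M2RPolicySketch()
    ft = torch.randn(2, 10, 6)    # batch of F/T histories
    img = torch.randn(2, 128)     # batch of gaze-crop features
    print(policy(ft, img).shape)  # torch.Size([2, 7])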
Pages: 2906 - 2913
Page count: 8
Related Papers
50 items in total
  • [1] Training of construction robots using imitation learning and environmental rewards
    Duan, Kangkang
    Zou, Zhengbo
    Yang, T. Y.
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2024,
  • [2] Is imitation learning the route to humanoid robots?
    Schaal, S
    TRENDS IN COGNITIVE SCIENCES, 1999, 3 (06) : 233 - 242
  • [3] Imitation for Motor Learning on Humanoid Robots
    Aguirre, Andres
    Tejera, Gonzalo
    Baliosian, Javier
    2017 LATIN AMERICAN ROBOTICS SYMPOSIUM (LARS) AND 2017 BRAZILIAN SYMPOSIUM ON ROBOTICS (SBR), 2017,
  • [4] A developmental roadmap for learning by imitation in robots
    Lopes, Manuel
    Santos-Victor, Jose
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2007, 37 (02): : 308 - 321
  • [5] Towards an imitation system for learning robots
    Maistros, G
    Hayes, G
    METHODS AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2004, 3025 : 246 - 255
  • [7] Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning
    Hua, Jiang
    Zeng, Liangcai
    Li, Gongfa
    Ju, Zhaojie
    SENSORS, 2021, 21 (04) : 1 - 21
  • [9] Robots, Pancakes, and Computer Games: Designing Serious Games for Robot Imitation Learning
    Walther-Franks, Benjamin
    Smeddinck, Jan
    Szmidt, Peter
    Haidu, Andrei
    Beetz, Michael
    Malaka, Rainer
    CHI 2015: PROCEEDINGS OF THE 33RD ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2015, : 3623 - 3632
  • [10] On Training Flexible Robots using Deep Reinforcement Learning
    Dwiel, Zach
    Candadai, Madhavun
    Phielipp, Mariano
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 4666 - 4671