Learning Robotic Manipulation through Visual Planning and Acting

Cited by: 0
Authors
Wang, Angelina [1 ]
Kurutach, Thanard [1 ]
Liu, Kara [1 ]
Abbeel, Pieter [1 ]
Tamar, Aviv [2 ]
Affiliations
[1] Univ Calif Berkeley, EECS Dept, Berkeley, CA 94720 USA
[2] Technion, Dept Elect Engn, Haifa, Israel
Keywords
DOI: not available
Chinese Library Classification: TP24 [Robotics]
Discipline codes: 080202; 1405
Abstract
Planning for robotic manipulation requires reasoning about the changes a robot can effect on objects. When such interactions can be modelled analytically, as in domains with rigid objects, efficient planning algorithms exist. However, in both domestic and industrial domains, the objects of interest can be soft, or deformable, and hard to model analytically. For such cases, we posit that a data-driven modelling approach is more suitable. In recent years, progress in deep generative models has produced methods that learn to 'imagine' plausible images from data. Building on the recent Causal InfoGAN generative model, in this work we learn to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object. After learning, given a goal observation of the system, our model can generate an imagined plan - a sequence of images that transitions the object into the desired goal. To execute the plan, we use it as a reference trajectory for a visual servoing controller, which we also learn from the data as an inverse dynamics model. In a simulated manipulation task, we show that separating the problem into visual planning and visual tracking control is more sample-efficient and more interpretable than alternative data-driven approaches. We further demonstrate our approach on learning to imagine and execute in three environments, the final of which is deformable rope manipulation on a PR2 robot.
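The plan-then-track decomposition the abstract describes can be sketched as follows. This is a minimal illustrative stub, not the paper's method: the paper's planner is a trained Causal InfoGAN that samples plausible image sequences, and its inverse dynamics model is a learned network; here both are replaced by hypothetical placeholders (`imagine_plan` interpolates images, `inverse_dynamics` and `step_env` use toy arithmetic) purely to show the control flow.

```python
import numpy as np

def imagine_plan(start_obs, goal_obs, n_steps):
    """Stub 'visual planner': linearly interpolate between the start and
    goal observations. In the paper this is a learned generative model
    (Causal InfoGAN) that imagines a sequence of plausible images."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - a) * start_obs + a * goal_obs for a in alphas]

def inverse_dynamics(current_obs, next_obs):
    """Stub inverse dynamics model: returns a placeholder 'action' that
    drives the system from current_obs toward next_obs. In the paper this
    model is learned from self-supervised interaction data."""
    return next_obs - current_obs

def step_env(obs, action):
    """Stub environment dynamics: apply the action directly."""
    return obs + action

def plan_and_act(start_obs, goal_obs, n_steps=5):
    """Generate an imagined plan, then track it waypoint by waypoint
    with the inverse-dynamics (visual servoing) controller."""
    plan = imagine_plan(start_obs, goal_obs, n_steps)
    obs = start_obs
    for waypoint in plan[1:]:
        action = inverse_dynamics(obs, waypoint)  # one servoing step
        obs = step_env(obs, action)
    return obs

start = np.zeros((4, 4))  # toy 'image' observations
goal = np.ones((4, 4))
final = plan_and_act(start, goal)
print(np.allclose(final, goal))  # tracking the imagined plan reaches the goal
```

The point of the separation is visible even in this toy: the planner only has to produce a sequence of intermediate observations, while the controller only has to solve the much shorter-horizon problem of moving between consecutive waypoints.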
Pages: 10