Learning Robotic Manipulation through Visual Planning and Acting

Cited by: 0
Authors:
Wang, Angelina [1 ]
Kurutach, Thanard [1 ]
Liu, Kara [1 ]
Abbeel, Pieter [1 ]
Tamar, Aviv [2 ]
Affiliations:
[1] Univ Calif Berkeley, EECS Dept, Berkeley, CA 94720 USA
[2] Technion, Dept Elect Engn, Haifa, Israel
Keywords:
DOI: Not available
CLC classification: TP24 [Robotics]
Discipline codes: 080202; 1405
Abstract:
Planning for robotic manipulation requires reasoning about the changes a robot can effect on objects. When such interactions can be modelled analytically, as in domains with rigid objects, efficient planning algorithms exist. However, in both domestic and industrial domains, the objects of interest can be soft, or deformable, and hard to model analytically. For such cases, we posit that a data-driven modelling approach is more suitable. In recent years, progress in deep generative models has produced methods that learn to 'imagine' plausible images from data. Building on the recent Causal InfoGAN generative model, in this work we learn to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object. After learning, given a goal observation of the system, our model can generate an imagined plan - a sequence of images that transitions the object into the desired goal. To execute the plan, we use it as a reference trajectory to track with a visual servoing controller, which we also learn from the data as an inverse dynamics model. In a simulated manipulation task, we show that separating the problem into visual planning and visual tracking control is more sample-efficient and more interpretable than alternative data-driven approaches. We further demonstrate our approach on learning to imagine and execute in three environments, the final of which is deformable rope manipulation on a PR2 robot.
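As a rough illustration of the pipeline the abstract describes, the sketch below separates visual planning (imagining a sequence of waypoint images from the current observation to the goal) from visual tracking (an inverse dynamics model used as a visual servoing controller). This is a minimal sketch, not the authors' code: the function names (imagine_plan, inverse_dynamics, execute_plan, env_step) are hypothetical placeholders, and the linear pixel interpolation merely stands in for planning in the Causal InfoGAN latent space and decoding with its generator.

```python
# Minimal sketch of the plan-then-track loop described in the abstract.
# All names are hypothetical placeholders, not the paper's implementation.
import numpy as np

def imagine_plan(start_image, goal_image, num_steps=8):
    """Stand-in for the Causal InfoGAN planner: the paper encodes start and
    goal observations to latent states, plans a path in latent space, and
    decodes it into an imagined image sequence. Here we simply interpolate
    pixels to produce waypoint images of the same shape."""
    alphas = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - a) * start_image + a * goal_image for a in alphas]

def inverse_dynamics(current_image, next_image):
    """Stand-in for the learned inverse dynamics model: predicts the action
    that moves the system from the current observation to the next waypoint.
    A real model would be a neural network trained on self-supervised
    interaction data; here we return a dummy 2-D action."""
    return np.zeros(2)

def execute_plan(env_step, current_image, goal_image):
    """Track the imagined image plan with the inverse model as a visual
    servoing controller: at each step, aim for the next waypoint image."""
    plan = imagine_plan(current_image, goal_image)
    for waypoint in plan[1:]:
        action = inverse_dynamics(current_image, waypoint)
        current_image = env_step(action)  # apply action, observe new image
    return current_image
```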
Pages: 10
Related papers (50 in total):
  • [21] Bio-inspired Reflex System for Learning Visual Information for Resilient Robotic Manipulation
    Junge, Kai
    Qiu, Kevin
    Hughes, Josie
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 56 - 62
  • [22] Probabilistic Visual Verification for Robotic Assembly Manipulation
    Choi, Changhyun
    Rus, Daniela
    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2016, : 5656 - 5663
  • [23] Enabling Visual Action Planning for Object Manipulation Through Latent Space Roadmap
    Lippi, Martina
    Poklukar, Petra
    Welle, Michael C.
    Varava, Anastasia
    Yin, Hang
    Marino, Alessandro
    Kragic, Danica
    IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (01) : 57 - 75
  • [25] Survey of imitation learning for robotic manipulation
    Fang, Bin
    Jia, Shidong
    Guo, Di
    Xu, Muhua
    Wen, Shuhuan
    Sun, Fuchun
    INTERNATIONAL JOURNAL OF INTELLIGENT ROBOTICS AND APPLICATIONS, 2019, 3 (04) : 362 - 369
  • [26] Learning the Dynamics of Doors for Robotic Manipulation
    Endres, Felix
    Trinkle, Jeff
    Burgard, Wolfram
    2013 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2013, : 3543 - 3549
  • [27] Robot learning of manipulation activities with overall planning through precedence graph
    Ye, Xin
    Lin, Zhe
    Yang, Yezhou
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 116 : 126 - 135
  • [28] Prediction learning in robotic pushing manipulation
    Kopicki, Marek
    Wyatt, Jeremy
    Stolkin, Rustam
    ICAR: 2009 14TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS, VOLS 1 AND 2, 2009, : 740 - 745
  • [29] Geometric Reinforcement Learning for Robotic Manipulation
    Alhousani, Naseem
    Saveriano, Matteo
    Sevinc, Ibrahim
    Abdulkuddus, Talha
    Kose, Hatice
    Abu-Dakka, Fares J.
    IEEE ACCESS, 2023, 11 : 111492 - 111505
  • [30] Experimental manipulation of muscularity preferences through visual diet and associative learning
    Jacques, Katy
    Evans, Elizabeth
    Boothroyd, Lynda
    PLOS ONE, 2021, 16 (08):