Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning

Cited by: 0
Authors
Zeng, Andy [1 ,2 ]
Song, Shuran [1 ,2 ]
Welker, Stefan [2 ]
Lee, Johnny [2 ]
Rodriguez, Alberto [3 ]
Funkhouser, Thomas [1 ,2 ]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] Google, Mountain View, CA 94043 USA
[3] MIT, Cambridge, MA 02139 USA
Keywords:
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial intelligence theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. In this work, we demonstrate that it is possible to discover and learn these synergies from scratch through model-free deep reinforcement learning. Our method involves training two fully convolutional networks that map from visual observations to actions: one infers the utility of pushes for a dense pixel-wise sampling of end-effector orientations and locations, while the other does the same for grasping. Both networks are trained jointly in a Q-learning framework and are entirely self-supervised by trial and error, where rewards are provided from successful grasps. In this way, our policy learns pushing motions that enable future grasps, while learning grasps that can leverage past pushes. During picking experiments in both simulation and real-world scenarios, we find that our system quickly learns complex behaviors even amid challenging cases of tightly packed clutter, and achieves better grasping success rates and picking efficiencies than baseline alternatives after a few hours of training. We further demonstrate that our method is capable of generalizing to novel objects. Qualitative results (videos), code, pre-trained models, and simulation environments are available at http://vpg.cs.princeton.edu
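The abstract describes a pixel-wise Q-learning formulation: two fully convolutional networks map a visual observation to dense maps of push and grasp Q-values, the primitive and pixel with the highest predicted value are executed, and both networks are updated from grasp-success rewards. Below is a minimal PyTorch sketch of that idea, not the authors' released code (see the project URL above); the network depth, the RGB-D input channels, the single end-effector orientation (the paper samples a dense set of orientations), the discount factor, and the reward value are all illustrative assumptions.

# Minimal sketch (assumed details, not the authors' implementation) of pixel-wise
# Q-learning with separate push and grasp fully convolutional networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelWiseQNet(nn.Module):
    """Fully convolutional net: (B, C, H, W) observation -> (B, H, W) Q-value map."""
    def __init__(self, in_channels: int = 4):  # e.g. an RGB-D heightmap (assumed input)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # one Q-value per pixel (end-effector location)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(1)

push_net, grasp_net = PixelWiseQNet(), PixelWiseQNet()
optimizer = torch.optim.SGD(
    list(push_net.parameters()) + list(grasp_net.parameters()),
    lr=1e-4, momentum=0.9)  # placeholder hyperparameters
gamma = 0.5                 # future-reward discount (assumed value)

def select_action(obs: torch.Tensor):
    """Greedy action: the primitive (push or grasp) and pixel with the highest Q.
    Assumes a batch of size 1."""
    with torch.no_grad():
        q = torch.stack([push_net(obs), grasp_net(obs)], dim=1)  # (1, 2, H, W)
    h, w = obs.size(2), obs.size(3)
    flat = int(q.view(-1).argmax())
    primitive, pixel = flat // (h * w), flat % (h * w)           # 0 = push, 1 = grasp
    return primitive, pixel // w, pixel % w

def q_learning_step(obs, primitive, py, px, grasp_succeeded, next_obs):
    """One self-supervised update: reward is given only for a successful grasp."""
    reward = 1.0 if (primitive == 1 and grasp_succeeded) else 0.0
    with torch.no_grad():  # bootstrap the target from the best Q-value in the next state
        next_q = torch.maximum(push_net(next_obs), grasp_net(next_obs)).amax(dim=(1, 2))
    target = reward + gamma * next_q
    q_map = grasp_net(obs) if primitive == 1 else push_net(obs)
    q_pred = q_map[:, py, px]  # only the executed pixel receives a gradient
    loss = F.smooth_l1_loss(q_pred, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return float(loss)

In a training loop, select_action would pick the primitive and pixel to execute on the robot, and q_learning_step would then be called with the observed grasp outcome; the push network is rewarded only indirectly, through the future grasps its pushes enable, which is how the pushing-grasping synergy emerges.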
Pages: 4238-4245
Number of pages: 8
Related Papers
50 results in total
  • [1] Deep Reinforcement Learning Based Pushing and Grasping Model with Frequency Domain Mapping and Supervised Learning
    Cao, Weiliang
    Cao, Zhenwei
    Song, Yong
    [J]. 2023 IEEE 2ND INDUSTRIAL ELECTRONICS SOCIETY ANNUAL ON-LINE CONFERENCE, ONCON, 2023,
  • [2] Intrinsically Motivated Self-Supervised Deep Sensorimotor Learning for Grasping
    Takahashi, Takeshi
    Lanighan, Michael W.
    Grupen, Roderic A.
    [J]. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 3496 - 3502
  • [3] Variational Dynamic for Self-Supervised Exploration in Deep Reinforcement Learning
    Bai, Chenjia
    Liu, Peng
    Liu, Kaiyu
    Wang, Lingxiao
    Zhao, Yingnan
    Han, Lei
    Wang, Zhaoran
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (08) : 4776 - 4790
  • [4] Intrinsically Motivated Self-supervised Learning in Reinforcement Learning
    Zhao, Yue
    Du, Chenzhuang
    Zhao, Hang
    Li, Tiejun
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 3605 - 3615
  • [5] Self-Supervised Reinforcement Learning for Recommender Systems
    Xin, Xin
    Karatzoglou, Alexandros
    Arapakis, Ioannis
    Jose, Joemon M.
    [J]. PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 931 - 940
  • [6] Comparison between Supervised and Self-supervised Deep Learning for SEM Image Denoising
    Okud, Tomoyuki
    Chen, Jun
    Motoyoshi, Takahiro
    Yumiba, Ryou
    Ishikawa, Masayoshi
    Toyoda, Yasutaka
    [J]. METROLOGY, INSPECTION, AND PROCESS CONTROL XXXVII, 2023, 12496
  • [7] Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation
    Kahn, Gregory
    Villaflor, Adam
    Ding, Bosen
    Abbeel, Pieter
    Levine, Sergey
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 5129 - 5136
  • [8] Improving Data Efficiency of Self-supervised Learning for Robotic Grasping
    Berscheid, Lars
    Ruehr, Thomas
    Kroeger, Torsten
    [J]. 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 2125 - 2131
  • [9] Active Pushing for Better Grasping in Dense Clutter with Deep Reinforcement Learning
    Lu, Ning
    Lu, Tao
    Cai, Yinghao
    Wang, Shuo
    [J]. 2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 1657 - 1663
  • [10] Deep active sampling with self-supervised learning
    Shi, Haochen
    Zhou, Hui
    [J]. FRONTIERS OF COMPUTER SCIENCE, 2023, 17 (04)