Reinforcement Learning Based Pushing and Grasping Objects from Ungraspable Poses

Cited by: 3
Authors
Zhang, Hao [1 ,2 ,3 ]
Liang, Hongzhuo [3 ]
Cong, Lin [3 ]
Lyu, Jianzhi [3 ]
Zeng, Long [1 ]
Feng, Pingfa [1 ]
Zhang, Jianwei [3 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Div Adv Mfg, Shenzhen, Peoples R China
[2] Rhein Westfal TH Aachen, Prod Syst Engn, Aachen, Germany
[3] Univ Hamburg, Grp TAMS, Dept Informat, Hamburg, Germany
Funding
National Natural Science Foundation of China; U.S. National Science Foundation; EU Horizon 2020;
DOI
10.1109/ICRA48891.2023.10160491
CLC Classification
TP [automation technology; computer technology];
Discipline Code
0812;
Abstract
Grasping an object in an ungraspable pose, such as a book or other large flat object lying horizontally on a table, is a challenging task. Inspired by human manipulation, we address this problem by pushing the object to the edge of the table and then grasping it from the overhanging part. In this paper, we develop a model-free Deep Reinforcement Learning framework to synergize pushing and grasping actions. We first pre-train a Variational Autoencoder to extract high-dimensional features from input scene images. A single Proximal Policy Optimization algorithm with a common reward and shared Actor-Critic layers is employed to learn both pushing and grasping actions with high data efficiency. Experiments show that our single-network policy converges 2.5 times faster than a policy using two parallel networks. Moreover, experiments on unseen objects show that our policy generalizes to the challenging cases of objects with curved surfaces and off-center, irregularly shaped objects. Lastly, our policy can be transferred to a real robot without fine-tuning by using CycleGAN for domain adaptation, and it outperforms the push-to-wall baseline.
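The abstract's single-network design, one trunk shared by the actor and critic heads rather than two parallel networks, can be sketched as follows. This is a minimal illustrative numpy forward pass, not the authors' implementation: the latent size, hidden width, class names, and the two-way push/grasp action space are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # One randomly initialized dense layer: (weights, bias).
    return rng.standard_normal((in_dim, out_dim)) * np.sqrt(2.0 / in_dim), np.zeros(out_dim)

class SharedActorCritic:
    """Actor and critic heads on one shared trunk, illustrating the
    shared-layers idea from the abstract (all dimensions hypothetical)."""
    def __init__(self, latent_dim=64, hidden=128, n_actions=2):
        self.w1, self.b1 = dense(latent_dim, hidden)  # shared layer over VAE features
        self.wa, self.ba = dense(hidden, n_actions)   # actor head: push/grasp logits
        self.wc, self.bc = dense(hidden, 1)           # critic head: state value

    def forward(self, z):
        h = np.maximum(z @ self.w1 + self.b1, 0.0)    # shared ReLU features
        logits = h @ self.wa + self.ba
        probs = np.exp(logits - logits.max())         # stable softmax over actions
        probs /= probs.sum()
        value = (h @ self.wc + self.bc).item()        # scalar value estimate
        return probs, value

net = SharedActorCritic()
z = rng.standard_normal(64)                           # stand-in for a VAE latent code
probs, value = net.forward(z)
```

Because both heads read the same features, one PPO update improves the representation for pushing and grasping jointly, which is the data-efficiency argument behind the reported 2.5x faster convergence.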
Pages: 3860-3866
Page count: 7
Related Papers
50 records in total
  • [41] Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning
    Garcia-Hernando, Guillermo
    Johns, Edward
    Kim, Tae-Kyun
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020: 9561-9568
  • [42] Stiffness Control for a Soft Robotic Finger based on Reinforcement Learning for Robust Grasping
    Dai, Junyue
    Zhu, Mingzhu
    Feng, Yu
    2021 27TH INTERNATIONAL CONFERENCE ON MECHATRONICS AND MACHINE VISION IN PRACTICE (M2VIP), 2021
  • [43] A Visual Grasping Strategy for Improving Assembly Efficiency Based on Deep Reinforcement Learning
    Wang, Yongzhi
    Zhu, Sicheng
    Zhang, Qian
    Zhou, Ran
    Dou, Rutong
    Sun, Haonan
    Yao, Qingfeng
    Xu, Mingwei
    Zhang, Yu
    JOURNAL OF SENSORS, 2021, 2021
  • [44] Learning Continuous Control Actions for Robotic Grasping with Reinforcement Learning
    Shahid, Asad Ali
    Roveda, Loris
    Piga, Dario
    Braghin, Francesco
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020: 4066-4072
  • [45] T-TD3: A Reinforcement Learning Framework for Stable Grasping of Deformable Objects Using Tactile Prior
    Zhou, Yanmin
    Jin, Yiyang
    Lu, Ping
    Jiang, Shuo
    Wang, Zhipeng
    He, Bin
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2025, 22: 6208-6222
  • [46] An experimental approach to robotic grasping using reinforcement learning and generic grasping functions
    Moussa, MA
    Kamel, MS
    1996 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, PROCEEDINGS, VOLS 1-4, 1996: 2767-2773
  • [47] From grasping objects to cognitive abilities
    Fogassi, L.
    FOLIA PRIMATOLOGICA, 2004, 75: 82-83
  • [48] Learning Contact Locations for Pushing and Orienting Unknown Objects
    Hermans, Tucker
    Li, Fuxin
    Rehg, James M.
    Bobick, Aaron F.
    2013 13TH IEEE-RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2013: 435-442
  • [49] Robot Learning of Shifting Objects for Grasping in Cluttered Environments
    Berscheid, Lars
    Meissner, Pascal
    Kroger, Torsten
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019: 612-618
  • [50] A Self-supervised Learning Method of Target Pushing-Grasping Skills Based on Affordance Map
    Wu, Peiliang
    Liu, Ruijun
    Mao, Bingyi
    Shi, Haoyang
    Chen, Wenbai
    Gao, Guowei
    Jiqiren/Robot, 2022, 44 (04): 385-398