Collaborative Pushing and Grasping of Tightly Stacked Objects via Deep Reinforcement Learning

Cited by: 1
Authors
Yuxiang Yang [1 ,2 ]
Zhihao Ni [1 ,2 ]
Mingyu Gao [1 ,2 ]
Jing Zhang [3 ,4 ]
Dacheng Tao [3 ,5 ]
Affiliations
[1] the School of Electronics and Information, Hangzhou Dianzi University
[2] Zhejiang Provincial Key Laboratory of Equipment Electronics
[3] IEEE
[4] the School of Computer Science, Faculty of Engineering, University of Sydney
[5] JD Explore Academy, JD.com
Funding
National Natural Science Foundation of China;
Keywords
DOI: N/A
CLC classification
TP242 [Robotics]; TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1111 ; 1405 ;
Abstract
Directly grasping tightly stacked objects may cause collisions and result in failures, degrading the functionality of robotic arms. Inspired by the observation that first pushing objects to a state of mutual separation and then grasping them individually can effectively increase the success rate, we devise a novel deep Q-learning framework to achieve collaborative pushing and grasping. Specifically, an efficient non-maximum suppression policy (Policy NMS) is proposed to dynamically evaluate pushing and grasping actions by enforcing a suppression constraint on unreasonable actions. Moreover, a novel data-driven pushing reward network called PR-Net is designed to effectively assess the degree of separation or aggregation between objects. To benchmark the proposed method, we establish a common household items dataset (CHID) in both simulation and real scenarios. Although trained using simulation data only, experimental results validate that our method generalizes well to real scenarios and achieves a 97% grasp success rate at fast speed for object separation in the real-world environment.
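The abstract describes evaluating pushing and grasping actions jointly while suppressing unreasonable candidates before selection. The snippet below is a minimal conceptual sketch of that idea only, not the authors' implementation: it assumes pixel-wise Q-value maps for the two primitives (a common formulation in pushing-and-grasping work) and hypothetical boolean validity masks standing in for the suppression constraint; all names and shapes are assumptions.

```python
import numpy as np

def select_action(q_push, q_grasp, valid_push, valid_grasp):
    """Pick the best primitive and location after suppressing invalid actions.

    q_push, q_grasp: (H, W) arrays of per-pixel Q-values for each primitive.
    valid_push, valid_grasp: boolean (H, W) masks of reasonable actions
    (hypothetical stand-in for the paper's suppression constraint).
    Returns (primitive_name, (row, col), q_value).
    """
    # Suppress unreasonable actions by assigning them -inf so that
    # argmax can never select them.
    masked_push = np.where(valid_push, q_push, -np.inf)
    masked_grasp = np.where(valid_grasp, q_grasp, -np.inf)

    # Best location for each primitive.
    best_push = np.unravel_index(np.argmax(masked_push), masked_push.shape)
    best_grasp = np.unravel_index(np.argmax(masked_grasp), masked_grasp.shape)

    # Compare the two primitives' best surviving Q-values.
    if masked_grasp[best_grasp] >= masked_push[best_push]:
        return "grasp", best_grasp, float(masked_grasp[best_grasp])
    return "push", best_push, float(masked_push[best_push])
```

In this toy formulation, a high push Q-value in a valid region wins when objects are still clustered, while grasping takes over once separation makes a grasp the higher-valued action.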
Pages: 135-145
Page count: 11
Related papers
50 items in total
  • [21] A novel robotic grasping method for moving objects based on multi-agent deep reinforcement learning
    Huang, Yu
    Liu, Daxin
    Liu, Zhenyu
    Wang, Ke
    Wang, Qide
    Tan, Jianrong
    [J]. ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2024, 86
  • [22] Deep reinforcement learning based moving object grasping
    Chen, Pengzhan
    Lu, Weiqing
    [J]. INFORMATION SCIENCES, 2021, 565 : 62 - 76
  • [23] Learning Pushing Skills Using Object Detection and Deep Reinforcement Learning
    Guo, Wei
    Dong, Guantao
    Chen, Chen
    Li, Mantian
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (ICMA), 2019, : 469 - 474
  • [24] Grasping Living Objects With Adversarial Behaviors Using Inverse Reinforcement Learning
    Hu, Zhe
    Zheng, Yu
    Pan, Jia
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (02) : 1151 - 1163
  • [25] Collaborative Data Scheduling for Vehicular Edge Computing via Deep Reinforcement Learning
    Luo, Quyuan
    Li, Changle
    Luan, Tom H.
    Shi, Weisong
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (10): 9637 - 9650
  • [26] Vision-Based Robotic Arm Control Algorithm Using Deep Reinforcement Learning for Autonomous Objects Grasping
    Sekkat, Hiba
    Tigani, Smail
    Saadane, Rachid
    Chehri, Abdellah
    [J]. APPLIED SCIENCES-BASEL, 2021, 11 (17):
  • [27] Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment
    Deng, Yuhong
    Guo, Xiaofeng
    Wei, Yixuan
    Lu, Kai
    Fang, Bin
    Guo, Di
    Liu, Huaping
    Sun, Fuchun
    [J]. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 619 - 626
  • [28] COLLABORATIVE DEEP REINFORCEMENT LEARNING FOR IMAGE CROPPING
    Li, Zhuopeng
    Zhang, Xiaoyan
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 254 - 259
  • [29] Robot autonomous grasping and assembly skill learning based on deep reinforcement learning
    Chen, Chengjun
    Zhang, Hao
    Pan, Yong
    Li, Dongnian
    [J]. INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2024, 130 (11-12): 5233 - 5249