Living Object Grasping Using Two-Stage Graph Reinforcement Learning

Cited: 6
Authors
Hu, Zhe [1 ,2 ]
Zheng, Yu [2 ]
Pan, Jia [3 ]
Affiliations
[1] City Univ Hong Kong, Dept Biomed Engn, Kowloon Tong, Hong Kong 999077, Peoples R China
[2] Tencent Robot X, Shenzhen 518057, Guangdong, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Pokfulam, Hong Kong, Peoples R China
Source
IEEE ROBOTICS AND AUTOMATION LETTERS
Keywords
Deep learning in grasping and manipulation; dexterous manipulation; grasping; in-hand manipulation; reinforcement learning
DOI
10.1109/LRA.2021.3060636
CLC Classification Code
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Living objects are hard to grasp because they can actively dodge and struggle by writhing or deforming, before or during contact, and because modeling or predicting their responses to grasping is extremely difficult. This letter presents a reinforcement learning (RL) algorithm to tackle this challenging problem. Given the complexity of living object grasping, we divide the task into a pre-grasp stage and an in-hand stage and let the algorithm switch between them automatically. The pre-grasp stage aims to find a good pose from which the robot hand can approach the living object and perform a grasp; dense reward functions based on the poses of both the hand and the object are proposed to facilitate learning the right hand actions. Since an object held in the hand may struggle to escape, the robot hand must adjust its configuration and respond correctly to the object's movement. The goal of the in-hand stage is therefore to determine an appropriate adjustment of the finger configuration so that the robot hand keeps holding the object. At this stage, we treat the robot hand as a graph and use a graph convolutional network (GCN) to determine the hand action. We evaluate our algorithm in both simulation and real-world experiments, which show its good performance in living object grasping. More results are available on our website: https://sites.google.com/view/graph-rl
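To make the in-hand stage concrete, below is a minimal sketch (not the authors' code) of the idea described in the abstract: the robot hand is treated as a graph whose nodes are finger joints, and a graph convolutional network maps per-joint observations to per-joint configuration adjustments. All class names (GCNLayer, HandGCNPolicy), feature dimensions, the toy adjacency matrix, and the choice of per-joint features are illustrative assumptions; the paper's actual network architecture, observation design, reward functions, and RL training loop are not reproduced here, and in the full method a separate pre-grasp policy would run first, with an automatic switch to this in-hand policy.

```python
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # h: (num_joints, in_dim) node features; a_hat: normalized adjacency with self-loops.
        return torch.relu(a_hat @ self.linear(h))


class HandGCNPolicy(nn.Module):
    """Maps per-joint features to per-joint configuration adjustments (in-hand stage sketch)."""

    def __init__(self, adjacency: torch.Tensor, feat_dim: int, hidden: int = 64):
        super().__init__()
        # Symmetrically normalized adjacency with self-loops: A_hat = D^-1/2 (A + I) D^-1/2.
        a = adjacency + torch.eye(adjacency.shape[0])
        d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
        self.register_buffer("a_hat", d_inv_sqrt @ a @ d_inv_sqrt)
        self.gc1 = GCNLayer(feat_dim, hidden)
        self.gc2 = GCNLayer(hidden, hidden)
        self.head = nn.Linear(hidden, 1)  # one joint-angle adjustment per node

    def forward(self, joint_feats: torch.Tensor) -> torch.Tensor:
        h = self.gc1(joint_feats, self.a_hat)
        h = self.gc2(h, self.a_hat)
        return self.head(h).squeeze(-1)  # (num_joints,) configuration deltas


# Hypothetical usage with a 4-joint toy finger chain (path-graph adjacency from the kinematic tree).
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
policy = HandGCNPolicy(adj, feat_dim=8)
joint_obs = torch.randn(4, 8)   # assumed per-joint observation (angle, contact flag, object pose error, ...)
delta_q = policy(joint_obs)     # per-joint adjustment intended to keep holding the object
```

Sharing one GCN over the hand's kinematic graph lets each joint's action depend on its neighbors' states, which is one plausible way to realize the per-joint responsiveness to object movement that the abstract describes.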
Pages: 1950-1957
Page count: 8
Related Papers
50 records in total
  • [1] Autonomous Two-stage Object Retrieval Using Supervised and Reinforcement Learning
    Rouillard, Thibault
    Howard, Ian
    Cui, Lei
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (ICMA), 2019, : 780 - 786
  • [2] Spectrum Access In Cognitive Radio Using a Two-Stage Reinforcement Learning Approach
    Raj, Vishnu
    Dias, Irene
    Tholeti, Thulasi
    Kalyani, Sheetal
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2018, 12 (01) : 20 - 34
  • [3] Grasping Living Objects With Adversarial Behaviors Using Inverse Reinforcement Learning
    Hu, Zhe
    Zheng, Yu
    Pan, Jia
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (02) : 1151 - 1163
  • [4] Two-stage fuzzy object grasping controller for a humanoid robot with proximal policy optimization
    Kuo, Ping-Huan
    Chen, Kuan-Lin
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 125
  • [5] Deep reinforcement learning based moving object grasping
    Chen, Pengzhan
    Lu, Weiqing
    [J]. INFORMATION SCIENCES, 2021, 565 : 62 - 76
  • [6] Two-Stage Evolutionary Reinforcement Learning for Enhancing Exploration and Exploitation
    Zhu, Qingling
    Wu, Xiaoqiang
    Lin, Qiuzhen
    Chen, Wei-Neng
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 18, 2024, : 20892 - 20900
  • [7] Two-stage reinforcement learning task predicts psychological traits
    Trevino, Mario
    Castiello, Santiago
    De la Torre-Valdovinos, Braniff
    Carrasco, Paulina Osuna
    Leon, Ricardo Medina-Coss
    Arias-Carrion, Oscar
    [J]. PSYCH JOURNAL, 2023, 12 (03) : 355 - 367
  • [8] Two-Stage Hybrid Network Clustering Using Multi-Agent Reinforcement Learning
    Kim, Joohyun
    Ryu, Dongkwan
    Kim, Juyeon
    Kim, Jae-Hoon
    [J]. ELECTRONICS, 2021, 10 (03) : 1 - 16
  • [9] Deep Model Compression via Two-Stage Deep Reinforcement Learning
    Zhan, Huixin
    Lin, Wei-Ming
    Cao, Yongcan
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, 2021, 12975 : 238 - 254
  • [10] Two-Stage Reinforcement Learning Algorithm for Quick Cooperation in Repeated Games
    Fujita, Wataru
    Moriyama, Koichi
    Fukui, Ken-ichi
    Numao, Masayuki
    [J]. TRANSACTIONS ON COMPUTATIONAL COLLECTIVE INTELLIGENCE XXVIII, 2018, 10780 : 48 - 65