On robot grasp learning using equivariant models

Cited by: 1
Authors
Zhu, Xupeng [1 ]
Wang, Dian [1 ]
Su, Guanang [1 ]
Biza, Ondrej [1 ]
Walters, Robin [1 ]
Platt, Robert [1 ]
Affiliations
[1] Northeastern Univ, Khoury Coll Comp Sci, Huntington Ave, Boston, MA 02115 USA
Keywords
Grasping; Equivariant models; On-robot learning; Sample efficiency; Reinforcement learning; Transparent object grasping; Dataset
DOI
10.1007/s10514-023-10112-w
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Real-world grasp detection is challenging due to the stochasticity of grasp dynamics and the noise in hardware. Ideally, the system would adapt to the real world by training directly on physical systems. However, this is generally difficult due to the large amount of training data required by most grasp learning models. In this paper, we note that the planar grasp function is SE(2)-equivariant and demonstrate that this structure can be used to constrain the neural network used during learning. This creates an inductive bias that can significantly improve the sample efficiency of grasp learning and enable end-to-end training from scratch on a physical robot with as few as 600 grasp attempts. We call this method Symmetric Grasp learning (SymGrasp) and show that it can learn to grasp "from scratch" in less than 1.5 h of physical robot time. This paper represents an expanded and revised version of the conference paper Zhu et al. (2022).
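The equivariance constraint described in the abstract can be illustrated with a small sketch (my own illustrative code, not the paper's implementation; the function names are assumptions). Averaging an arbitrary map over the four planar rotations projects it onto the space of C4-equivariant maps, a discrete analogue of the SE(2) rotational symmetry that SymGrasp builds into its grasp network: rotating the input image then rotates the predicted grasp-quality map by the same amount.

```python
import numpy as np

def symmetrize(f):
    """Project an arbitrary image-to-image map f onto the space of
    C4-equivariant maps by averaging over the four planar rotations:
        Phi(x) = (1/4) * sum_k rot90^{-k}( f( rot90^{k}(x) ) )
    """
    def phi(x):
        outs = [np.rot90(f(np.rot90(x, k)), -k) for k in range(4)]
        return np.mean(outs, axis=0)
    return phi

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))

def f(x):
    # An arbitrary nonlinear map with no built-in symmetry:
    # the fixed weight grid W deliberately breaks rotational invariance.
    return np.tanh(x * W)

phi = symmetrize(f)
x = rng.normal(size=(5, 5))

# Equivariance check: rotating the input rotates the output.
assert np.allclose(phi(np.rot90(x)), np.rot90(phi(x)))
```

In practice equivariant architectures enforce this symmetry by construction (weight sharing across the group) rather than by test-time averaging, but the symmetrization identity above is the property being enforced, and it is the source of the sample-efficiency gain: one labeled grasp attempt constrains the network's output at all rotations of that scene at once.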
Pages: 1175-1193 (19 pages)