Learning Generalizable Dexterous Manipulation from Human Grasp Affordance

Cited by: 0
Authors
Wu, Yueh-Hua [1 ]
Wang, Jiashun [1 ]
Wang, Xiaolong [1 ]
Affiliations
[1] Univ Calif San Diego, La Jolla, CA 92093 USA
Source
Keywords
Dexterous manipulation; Affordance modeling; Imitation learning; Hand manipulation
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics. While recent progress in imitation learning has substantially improved sample efficiency over reinforcement learning, policies learned from limited expert demonstrations hardly generalize to novel objects. In this paper, we propose to learn dexterous manipulation from large-scale demonstrations covering diverse 3D objects within a category, generated from a human grasp affordance model. This allows the policy to generalize to novel object instances in the same category. To train the policy, we propose a novel imitation learning objective optimized jointly with a geometric representation learning objective on our demonstrations. In experiments on relocating diverse objects in simulation, our approach outperforms the baselines by a large margin when manipulating novel objects. We also ablate the contribution of 3D object representation learning to manipulation performance. Videos and code are available on the project website: https://kristery.github.io/ILAD/.
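The abstract describes training the policy with an imitation learning objective jointly with a geometric representation learning objective over affordance-generated demonstrations. The following is a rough, non-authoritative PyTorch sketch of how such a joint objective can be wired up; it is not the authors' released implementation, and all module names, tensor shapes, the MSE stand-in for a chamfer-style loss, and the aux_weight parameter are assumptions made here for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    # Encode an object point cloud (B, N, 3) into a global feature (B, D),
    # PointNet-style: per-point MLP followed by max-pooling over points.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))

    def forward(self, pts):
        return self.mlp(pts).max(dim=1).values

class PointDecoder(nn.Module):
    # Reconstruct a fixed-size point cloud from the global feature; this
    # supplies the auxiliary geometric representation learning signal.
    def __init__(self, feat_dim=128, n_points=512):
        super().__init__()
        self.n_points = n_points
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_points * 3))

    def forward(self, feat):
        return self.net(feat).view(-1, self.n_points, 3)

class Policy(nn.Module):
    # Map robot proprioception plus the object feature to a hand action.
    def __init__(self, state_dim, feat_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + feat_dim, 256),
                                 nn.ReLU(), nn.Linear(256, act_dim))

    def forward(self, state, feat):
        return self.net(torch.cat([state, feat], dim=-1))

def joint_loss(policy, encoder, decoder, state, pts, expert_action,
               aux_weight=0.1):
    # Behavior cloning against the demonstration action, plus an auxiliary
    # reconstruction loss so the shared encoder learns object geometry.
    feat = encoder(pts)
    bc = F.mse_loss(policy(state, feat), expert_action)
    # MSE here assumes a consistent point ordering; a chamfer distance
    # would be the usual order-invariant choice for point clouds.
    geo = F.mse_loss(decoder(feat), pts)
    return bc + aux_weight * geo

Because the geometric loss shares the encoder with the policy, the object feature is shaped by category-level shape structure rather than by imitation alone, which is a plausible mechanism for the reported generalization to novel instances.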
Pages: 618-629
Page count: 12
Related Papers
50 records in total
  • [41] Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
    Rajeswaran, Aravind
    Kumar, Vikash
    Gupta, Abhishek
    Vezzani, Giulia
    Schulman, John
    Todorov, Emanuel
    Levine, Sergey
ROBOTICS: SCIENCE AND SYSTEMS XIV, 2018
  • [42] Learning Dexterous Manipulation from Exemplar Object Trajectories and Pre-Grasps
    Dasari, Sudeep
    Gupta, Abhinav
    Kumar, Vikash
2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2023: 3889-3896
  • [43] State-Only Imitation Learning for Dexterous Manipulation
    Radosavovic, Ilija
    Wang, Xiaolong
    Pinto, Lerrel
    Malik, Jitendra
2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021: 7865-7871
  • [44] Efficient Learning of Grasp Selection for Five-Finger Dexterous Hand
    Yuan, Hao
    Li, Dongxu
    Wu, Jun
2017 IEEE 7TH ANNUAL INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (CYBER), 2017: 1101-1106
  • [45] Learning to Grasp Familiar Objects Based on Experience and Objects' Shape Affordance
    Liu, Chunfang
    Fang, Bin
    Sun, Fuchun
    Li, Xiaoli
    Huang, Wenbing
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2019, 49(12): 2710-2723
  • [46] DexTOG: Learning Task-Oriented Dexterous Grasp With Language Condition
    Zhang, Jieyi
    Xu, Wenqiang
    Yu, Zhenjun
    Xie, Pengfei
    Tang, Tutian
    Lu, Cewu
IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10(2): 995-1002
  • [47] Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient Dexterous Manipulation
    Arunachalam, Sridhar Pandian
    Silwal, Sneha
    Evans, Ben
    Pinto, Lerrel
2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2023: 5954-5961
  • [48] Learning Deep Visuomotor Policies for Dexterous Hand Manipulation
    Jain, Divye
    Li, Andrew
    Singhal, Shivam
    Rajeswaran, Aravind
    Kumar, Vikash
    Todorov, Emanuel
2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019: 3636-3643
  • [49] FMB: A functional manipulation benchmark for generalizable robotic learning
    Luo, Jianlan
    Xu, Charles
    Liu, Fangchen
    Tan, Liam
    Lin, Zipeng
    Wu, Jeffrey
    Abbeel, Pieter
    Levine, Sergey
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2024
  • [50] A global approach for dexterous manipulation planning using paths in n-fingers grasp subspace
    Saut, Jean-Philippe
    Sahbani, Anis
    Perdereau, Veronique
2006 9TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION, VOLS 1-5, 2006: 1646+