Part-Guided 3D RL for Sim2Real Articulated Object Manipulation

Cited by: 1
Authors
Xie, Pengwei [1 ]
Chen, Rui [2 ]
Chen, Siang [1 ,3 ]
Qin, Yuzhe [4 ]
Xiang, Fanbo [4 ]
Sun, Tianyu [1 ]
Xu, Jing [2 ]
Wang, Guijin [1 ,3 ]
Su, Hao [4 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Mech Engn, Beijing 100084, Peoples R China
[3] Shanghai AI Lab, Shanghai 200232, Peoples R China
[4] Univ Calif San Diego, Dept Comp Sci & Engn, San Diego, CA 92037 USA
Keywords
Deep learning in grasping and manipulation; RGB-D perception; reinforcement learning
DOI
10.1109/LRA.2023.3313063
CLC classification
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Manipulating unseen articulated objects through visual feedback is a critical but challenging task for real robots. Existing learning-based solutions mainly rely on visual affordance learning or other pre-trained visual models to guide manipulation policies, but these struggle to generalize to novel instances in real-world scenarios. In this letter, we propose a novel part-guided 3D RL framework that learns to manipulate articulated objects without demonstrations. We combine the strengths of 2D segmentation and 3D RL to improve the efficiency of RL policy training. To improve the stability of the policy on real robots, we design a Frame-consistent Uncertainty-aware Sampling (FUS) strategy that yields a condensed and hierarchical 3D representation. In addition, a single versatile RL policy can be trained on multiple articulated object manipulation tasks simultaneously in simulation and generalizes well to novel categories and instances. Experimental results demonstrate the effectiveness of our framework in both simulation and real-world settings.
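The abstract does not detail how the FUS strategy condenses the point cloud; as a rough illustration of the general idea of uncertainty-aware sampling (not the authors' actual algorithm), one can downsample a segmented point cloud while drawing more points from regions where the 2D segmentation is least confident. The function name, weighting scheme, and all parameters below are assumptions for illustration only:

```python
import numpy as np

def uncertainty_aware_sample(points, uncertainty, n_samples, seed=None):
    """Hypothetical sketch: downsample a point cloud, biasing the draw
    toward high-uncertainty points (e.g. near predicted part boundaries).

    points      -- (N, 3) array of 3D coordinates
    uncertainty -- (N,) array of per-point segmentation uncertainty in [0, 1]
    n_samples   -- number of points to keep
    """
    rng = np.random.default_rng(seed)
    w = uncertainty + 1e-6                 # avoid zero-probability points
    p = w / w.sum()                        # normalize to a distribution
    idx = rng.choice(len(points), size=n_samples, replace=False, p=p)
    return points[idx]

# Toy usage: 1000 random points with random per-point uncertainties.
pts = np.random.default_rng(0).random((1000, 3))
unc = np.random.default_rng(1).random(1000)
sub = uncertainty_aware_sample(pts, unc, 128, seed=2)
```

The actual FUS strategy additionally enforces frame-to-frame consistency and a hierarchical structure, which this single-frame sketch does not capture.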
Pages: 7178-7185 (8 pages)
Related papers (50 in total; items [31]-[40] shown)
  • [31] Automated Planning Encodings for the Manipulation of Articulated Objects in 3D with Gravity
    Bertolucci, Riccardo
    Capitanelli, Alessio
    Maratea, Marco
    Mastrogiovanni, Fulvio
    Vallati, Mauro
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE, AI*IA 2019, 2019, 11946 : 135 - 150
  • [32] On Motor Performance in Virtual 3D Object Manipulation
    Kulik, Alexander
    Kunert, Andre
    Froehlich, Bernd
    [J]. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2020, 26 (05) : 2041 - 2050
  • [33] 3D Object Manipulation in a Single Photograph using Stock 3D Models
    Kholgade, Natasha
    Simon, Tomas
    Efros, Alexei
    Sheikh, Yaser
    [J]. ACM TRANSACTIONS ON GRAPHICS, 2014, 33 (04):
  • [34] Probabilistic 3D Multilabel Real-time Mapping for Multi-object Manipulation
    Wada, Kentaro
    Okada, Kei
    Inaba, Masayuki
    [J]. 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2017, : 5092 - 5099
  • [35] General-Purpose Sim2Real Protocol for Learning Contact-Rich Manipulation With Marker-Based Visuotactile Sensors
    Chen, Weihang
    Xu, Jing
    Xiang, Fanbo
    Yuan, Xiaodi
    Su, Hao
    Chen, Rui
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2024, 40 : 1509 - 1526
  • [36] A differential-geometric approach for 2D and 3D object grasping and manipulation
    Arimoto, Suguru
    [J]. ANNUAL REVIEWS IN CONTROL, 2007, 31 (02) : 189 - 209
  • [37] GOOD: A global orthographic object descriptor for 3D object recognition and manipulation
    Kasaei, S. Hamidreza
    Tome, Ana Maria
    Lopes, Luis Seabra
    Oliveira, Miguel
    [J]. PATTERN RECOGNITION LETTERS, 2016, 83 : 312 - 320
  • [38] Using digital twin to enhance Sim2real transfer for reinforcement learning in 3C assembly
    Mu, Weiwen
    Chen, Wenbai
    Zhou, Huaidong
    Liu, Naijun
    Shi, Haobin
    Li, Jingchen
    [J]. INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2024, 51 (01): : 125 - 133
  • [39] Robust shape estimation for 3D deformable object manipulation
    Han, Tao
    Zhao, Xuan
    Sun, Peigen
    Pan, Jia
    [J]. COMMUNICATIONS IN INFORMATION AND SYSTEMS, 2018, 18 (02) : 107 - 124
  • [40] Two-Handed 3D CAD Object Manipulation
    Fotouhi, Farshad
    [J]. IEEE MULTIMEDIA, 2013, 20 (04) : 96 - 95