Learning Neural Force Manifolds for Sim2Real Robotic Symmetrical Paper Folding

Cited by: 2
Authors
Choi, Andrew [1]
Tong, Dezhong [2]
Terzopoulos, Demetri [3]
Joo, Jungseock [4, 5]
Jawed, Mohammad Khalid [6]
Affiliations
[1] Horizon Robot, Cupertino, CA 95014 USA
[2] Univ Michigan, Ann Arbor, MI 48109 USA
[3] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90095 USA
[4] Univ Calif Los Angeles, Dept Commun, Los Angeles, CA 90095 USA
[5] NVIDIA Corp, Santa Clara, CA 95051 USA
[6] Univ Calif Los Angeles, Dept Mech & Aerosp Engn, Los Angeles, CA 90095 USA
Funding
U.S. National Science Foundation
Keywords
Deformable models; deformable object manipulation; sim2real paper folding; data-driven models; closed-loop model-predictive control; elastic rods; stability; objects
DOI
10.1109/TASE.2024.3366909
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Robotic manipulation of slender objects is challenging, especially when the induced deformations are large and nonlinear. Traditionally, learning-based control approaches, such as imitation learning, have been used to address deformable material manipulation. These approaches lack generality and often fail critically after a simple change of material, geometric, and/or environmental (e.g., friction) properties. This article tackles a fundamental but difficult deformable manipulation task: forming a predefined fold in paper with only a single manipulator. A sim2real framework combining physically accurate simulation and machine learning is used to train a deep neural network that predicts the external forces induced on the manipulated paper given a grasp position. We frame the problem using scaling analysis, resulting in a control framework that is robust against material and geometric changes. Path planning is then carried out over the generated "neural force manifold" to produce robot manipulation trajectories optimized to prevent sliding, with offline trajectory generation finishing 15x faster than previous physics-based folding methods. The inference speed of the trained model enables the incorporation of real-time visual feedback to achieve closed-loop model-predictive control. Real-world experiments demonstrate that our framework greatly improves robotic manipulation performance compared to state-of-the-art folding strategies, even when manipulating paper objects of various materials and shapes.

Note to Practitioners: This article is motivated by the need for efficient robotic folding strategies for stiff materials such as paper. Previous robot folding strategies have focused primarily on soft materials (e.g., cloth) with minimal bending resistance, or have relied on multiple complex manipulators and sensors, significantly increasing computational and monetary costs. In contrast, we formulate a robust, sim2real, physics-based method capable of folding papers of varying stiffness with a single manipulator. The proposed folding scheme is limited to papers of homogeneous material and to folds along symmetric centerlines. Future work will involve formulating efficient methods for folding along arbitrary geometries and preexisting creases.
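To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch (Python/NumPy, not the authors' code) of path planning over a learned force manifold: a stand-in function plays the role of the trained network that maps a grasp position to the predicted external force induced on the paper, and a greedy planner steps toward the fold goal while preferring low-force waypoints so that sliding is discouraged. The names force_net and plan_min_force_path, the analytic surrogate for the network, and the greedy candidate search are illustrative assumptions only; the paper's scaling analysis (non-dimensionalization of inputs) and closed-loop visual feedback are omitted here.

# Hypothetical sketch: plan a 2D gripper trajectory over a learned
# "force manifold" force_net(x, z) that predicts the magnitude of the
# force the grasp induces on the paper at each gripper position.
import numpy as np

def force_net(x: float, z: float) -> float:
    # Stand-in for the trained force-prediction network; a smooth analytic
    # surrogate is used purely so the example runs. In practice this would
    # be a neural network evaluated on (appropriately non-dimensionalized)
    # grasp coordinates.
    return float(np.hypot(x - 0.5, z - 0.3) + 0.1 * np.sin(8.0 * x))

def plan_min_force_path(start, goal, step=0.02, n_candidates=9, max_steps=200):
    # Greedy planner: at each step, fan candidate headings around the
    # straight-line direction to the goal and keep the waypoint with the
    # lowest predicted force (lower induced force -> less paper sliding).
    path = [np.asarray(start, dtype=float)]
    goal = np.asarray(goal, dtype=float)
    for _ in range(max_steps):
        cur = path[-1]
        to_goal = goal - cur
        dist = np.linalg.norm(to_goal)
        if dist < step:                       # close enough: snap to goal
            path.append(goal)
            break
        heading = to_goal / dist
        best, best_cost = None, np.inf
        for a in np.linspace(-0.6, 0.6, n_candidates):  # headings in radians
            rot = np.array([[np.cos(a), -np.sin(a)],
                            [np.sin(a),  np.cos(a)]])
            cand = cur + step * (rot @ heading)
            cost = force_net(cand[0], cand[1])
            if cost < best_cost:
                best, best_cost = cand, cost
        path.append(best)
    return np.array(path)

if __name__ == "__main__":
    traj = plan_min_force_path(start=(0.0, 0.0), goal=(1.0, 0.6))
    print(len(traj), "waypoints; final position", traj[-1])

In the actual framework, a trajectory of this kind would be generated offline and then tracked by the single manipulator, with real-time visual feedback used to re-plan online (closed-loop model-predictive control).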
Pages: 1483-1496
Number of pages: 14
Related Papers
50 items in total
  • [41] Learning Nonprehensile Dynamic Manipulation: Sim2real Vision-Based Policy With a Surgical Robot
    Gondokaryono, Radian
    Haiderbhai, Mustafa
    Suryadevara, Sai Aneesh
    Kahrs, Lueder A.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (10) : 6763 - 6770
  • [42] Sim2Real Rope Cutting With a Surgical Robot Using Vision-Based Reinforcement Learning
    Haiderbhai, Mustafa
    Gondokaryono, Radian
    Wu, Andrew
    Kahrs, Lueder A.
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, : 1 - 12
  • [43] Transition Control of a Double-Inverted Pendulum System Using Sim2Real Reinforcement Learning
    Lee, Taegun
    Ju, Doyoon
    Lee, Young Sam
    MACHINES, 2025, 13 (03)
  • [44] OBJECTFOLDER 2.0: A Multisensory Object Dataset for Sim2Real Transfer
    Gao, Ruohan
    Si, Zilin
    Chang, Yen-Yu
    Clarke, Samuel
    Bohg, Jeannette
    Li, Fei-Fei
    Yuan, Wenzhen
    Wu, Jiajun
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 10588 - 10598
  • [45] Exploring Generative AI for Sim2Real in Driving Data Synthesis
    Zhao, Haonan
    Wang, Yiting
    Bashford-Rogers, Thomas
    Donzella, Valentina
    Debattista, Kurt
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 3071 - 3077
  • [46] Sim2real transfer learning for 3D human pose estimation: motion to the rescue
    Doersch, Carl
    Zisserman, Andrew
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [48] Sim2Real Object-Centric Keypoint Detection and Description
    Zhong, Chengliang
    Yang, Chao
    Sun, Fuchun
    Qi, Jinshan
    Mu, Xiaodong
    Liu, Huaping
    Huang, Wenbing
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 5440 - 5449
  • [49] Learn to Differ: Sim2Real Small Defection Segmentation Network
    Chen, Zexi
    Huang, Zheyuan
    Yu, Hongxiang
    Zhou, Zhongxiang
    Wang, Yunkai
    Xu, Xuecheng
    Tan, Qimeng
    Wang, Yue
    Xiong, Rong
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 1070 - 1077
  • [50] Self-Supervised Tumor Segmentation With Sim2Real Adaptation
    Zhang, Xiaoman
    Xie, Weidi
    Huang, Chaoqin
    Zhang, Ya
    Chen, Xin
    Tian, Qi
    Wang, Yanfeng
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (09) : 4373 - 4384