DeRi-Bot: Learning to Collaboratively Manipulate Rigid Objects via Deformable Objects

Cited by: 0
Authors
Wang Z. [1 ]
Qureshi A.H. [1 ]
Affiliation
[1] Purdue University, Department of Computer Science, West Lafayette, IN 47907
Source
IEEE Robotics and Automation Letters | 2023 / Vol. 8 / No. 10
Keywords
deep learning; manipulation; soft-rigid body
DOI
10.1109/LRA.2023.3307003
Abstract
Recent research efforts have yielded significant advancements in manipulating objects under homogeneous settings, where the robot is required to manipulate either rigid or deformable (soft) objects. However, manipulation under heterogeneous setups that involve both rigid and one-dimensional (1D) deformable objects remains an unexplored area of research. Such setups are common in various scenarios that involve the transportation of heavy objects via ropes, e.g., on factory floors, at disaster sites, and in forestry. To address this challenge, we introduce DeRi-Bot, the first framework that enables the collaborative manipulation of rigid objects with deformable objects. Our framework comprises an Action Prediction Network (APN) and a Configuration Prediction Network (CPN) to model the complex patterns and stochasticity of soft-rigid body systems. We demonstrate the effectiveness of DeRi-Bot in moving rigid objects to a target position with ropes connected to robotic arms. Furthermore, DeRi-Bot is a distributive method that can accommodate an arbitrary number of robots or human partners without reconfiguration or retraining. We evaluate our framework in both simulated and real-world environments and show that it achieves promising results with strong generalization across different types of objects and multi-agent settings, including human-robot collaboration. © 2023 IEEE.
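The abstract names two learned components, an Action Prediction Network (APN) and a Configuration Prediction Network (CPN), but this record gives no architectural detail. The following is a minimal, purely illustrative PyTorch sketch of how such a two-network pipeline could be wired; all class names, layer sizes, and input/output dimensions are assumptions made for illustration and are not the authors' implementation.

```python
# Illustrative sketch only; NOT the DeRi-Bot implementation.
# Assumptions: observations are flat feature vectors, the APN maps an
# observation to an end-effector action, and the CPN predicts the resulting
# rigid-object configuration from the observation and a candidate action.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=128):
    """Small fully connected block (hidden size is an arbitrary choice)."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class ActionPredictionNetwork(nn.Module):
    """Hypothetical APN: observation -> robot action."""
    def __init__(self, obs_dim=64, act_dim=6):
        super().__init__()
        self.net = mlp(obs_dim, act_dim)

    def forward(self, obs):
        return self.net(obs)


class ConfigurationPredictionNetwork(nn.Module):
    """Hypothetical CPN: (observation, action) -> predicted object pose."""
    def __init__(self, obs_dim=64, act_dim=6, cfg_dim=7):
        super().__init__()
        self.net = mlp(obs_dim + act_dim, cfg_dim)

    def forward(self, obs, action):
        return self.net(torch.cat([obs, action], dim=-1))


if __name__ == "__main__":
    apn, cpn = ActionPredictionNetwork(), ConfigurationPredictionNetwork()
    obs = torch.randn(1, 64)           # placeholder observation
    action = apn(obs)                  # candidate action for one agent
    predicted_cfg = cpn(obs, action)   # predicted rigid-object configuration
    print(action.shape, predicted_cfg.shape)
```

In the multi-agent setting described above, each agent would presumably evaluate its own action with such a pair of networks, which would be consistent with the abstract's claim that the method accommodates an arbitrary number of robots or human partners without retraining.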
Pages: 6355 - 6362
Number of pages: 8
Related Papers
50 records in total
  • [11] Efficient interaction model between rigid and deformable objects
    Caby, C
    Crosnier, A
    SIMULATION IN INDUSTRY 2001, 2001: 145 - 149
  • [12] FuseBot: mechanical search of rigid and deformable objects via multi-modal perception
    Boroushaki, Tara
    Dodds, Laura
    Naeem, Nazish
    Adib, Fadel
    AUTONOMOUS ROBOTS, 2023, 47 (08): 1137 - 1154
  • [14] Robotic Assembly of Deformable Linear Objects via Curriculum Reinforcement Learning
    Wu, Kai
    Chen, Rongkang
    Chen, Qi
    Li, Weihua
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (05): 4770 - 4777
  • [15] Excavation Learning for Rigid Objects in Clutter
    Lu, Qingkai
    Zhang, Liangjun
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (04): 7373 - 7380
  • [16] Design of a flexible tactile sensor for classification of rigid and deformable objects
    Drimus, Alin
    Kootstra, Gert
    Bilberg, Arne
    Kragic, Danica
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2014, 62 (01): 3 - 15
  • [17] Modeling deformable objects using local rigid body simulation
    Chen, W.
    Zhu, L.
    Zhang, X.
    INTERNATIONAL JOURNAL OF COMPUTERS AND APPLICATIONS, 2020, 42 (05): 439 - 448
  • [18] Collision detection and modeling of rigid and deformable objects in laparoscopic simulator
    Dy, Mary-Clare
    Tagawa, Kazuyoshi
    Tanaka, Hiromi T.
    Komori, Masaru
    MEDICAL IMAGING 2015: IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING, 2015, 9415
  • [19] Grip Force Control During Virtual Interaction With Deformable and Rigid Objects Via a Haptic Gripper
    Milstein, Amit
    Alyagon, Lital
    Nisky, Ilana
    IEEE TRANSACTIONS ON HAPTICS, 2021, 14 (03): 564 - 576
  • [20] Autonomous Manipulation Learning for Similar Deformable Objects via Only One Demonstration
    Ren, Yu
    Chen, Ronghan
    Cong, Yang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 17069 - 17078