Topology Prediction of Branched Deformable Linear Objects Using Deep Learning

Cited: 0
Authors
Ouyang, Shengzhe [1 ]
Zuern, Manuel [1 ]
Zeh, Lukas [1 ]
Lechler, Armin [1 ]
Verl, Alexander [1 ]
Affiliations
[1] Univ Stuttgart, Inst Control Engn & Mfg Units, D-70174 Stuttgart, Germany
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Wire; Splines (mathematics); Predictive models; Image segmentation; Topology; Robots; Deep learning; Computational modeling; Annotations; Data models; Machine vision; artificial intelligence; deep learning; synthetic dataset; transfer learning; branched deformable linear objects; WIRE HARNESSES;
DOI
10.1109/ACCESS.2024.3518634
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Automated wire harness handling can improve production efficiency, increase quality, and reduce assembly costs. However, because wire harnesses deform, they admit an infinite number of possible configurations, which makes wire harness perception challenging. Deep learning is a popular approach to computer vision, but datasets, models, and experiments for wire harness perception are lacking. This paper therefore presents a novel deep learning model that predicts the configuration of a wire harness using artificially generated datasets mixed with real annotated data. The model predicts keypoints, which are interpolated as cubic splines to represent the wire harness configuration with reduced degrees of freedom. We benchmark our novel model against YOLOv8-Pose and experiment with different ways of predicting the wire harness. Our proposed approach achieves an mAP@50-95 of 89.8% and could further be integrated into robotic systems to improve the automation and precision of robotic wire harness handling.
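The abstract describes representing the wire harness by cubic splines interpolated through predicted keypoints. A minimal sketch of that interpolation step, assuming hypothetical keypoint coordinates for a single branch and SciPy's `CubicSpline` (this is not the authors' implementation, only an illustration of the representation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 2D keypoints along one wire branch,
# as a pose-style model might predict them.
keypoints = np.array([
    [0.0, 0.0],
    [1.0, 0.5],
    [2.0, 0.2],
    [3.0, 0.8],
])

# Parameterize the points by cumulative chord length, a common
# choice for fitting a spline through ordered keypoints.
deltas = np.linalg.norm(np.diff(keypoints, axis=0), axis=1)
t = np.concatenate([[0.0], np.cumsum(deltas)])

# One cubic spline per coordinate yields a smooth curve that passes
# exactly through every keypoint, with far fewer degrees of freedom
# than a dense per-pixel representation.
spline = CubicSpline(t, keypoints, axis=0)

# Sample densely to recover the continuous wire configuration.
dense = spline(np.linspace(t[0], t[-1], 100))
print(dense.shape)  # (100, 2)
```

Each branch of a branched harness would get its own spline; the keypoints themselves come from the learned model, so the spline is purely a compact geometric decoding of the network output.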
Pages: 194399-194411
Page count: 13
Related Papers
50 records in total
  • [42] DexDLO: Learning Goal-Conditioned Dexterous Policy for Dynamic Manipulation of Deformable Linear Objects
    Zhaole, Sun
    Zhu, Jihong
    Fisher, Robert B.
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2024), 2024, : 16009 - 16015
  • [43] An Interactive Simulator for Deformable Linear Objects Manipulation Planning
    Alvarez, Nahum
    Yamazaki, Kimitoshi
    2016 IEEE INTERNATIONAL CONFERENCE ON SIMULATION, MODELING, AND PROGRAMMING FOR AUTONOMOUS ROBOTS (SIMPAR), 2016, : 259 - 264
  • [44] FASTDLO: Fast Deformable Linear Objects Instance Segmentation
    Caporali, Alessio
    Galassi, Kevin
    Zanella, Riccardo
    Palli, Gianluca
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04): : 9075 - 9082
  • [45] Motion planning for robotic manipulation of deformable linear objects
    Saha, Mitul
    Isto, Pekka
    Latombe, Jean-Claude
    EXPERIMENTAL ROBOTICS, 2008, 39 : 23 - +
  • [46] Manipulation of Deformable Linear Objects in Benchmark Task Spaces
    Chang, Peng
    Luo, Rui
    Zolotas, Mark
    Padir, Taskin
    2022 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2022, : 1910 - 1916
  • [47] Dynamic Modeling and Simulation of Manipulating Deformable Linear Objects
    Huang, Jian
    Di, Pei
    Fukuda, Toshio
    Matsuno, Takayuki
    2008 INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION: (ICMA), VOLS 1 AND 2, 2008, : 857 - +
  • [48] Motion planning for robotic manipulation of deformable linear objects
    Saha, Mitul
    Isto, Pekka
    2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), VOLS 1-10, 2006, : 2478 - +
  • [49] Improving the topology prediction of α-helical transmembrane proteins with deep transfer learning
    Wang, Lei
    Zhong, Haolin
    Xue, Zhidong
    Wang, Yan
    COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2022, 20 : 1993 - 2000
  • [50] Computational Acceleration of Topology Optimization Using Deep Learning
    Rasulzade, Jalal
    Rustamov, Samir
    Akhmetov, Bakytzhan
    Maksum, Yelaman
    Nogaibayeva, Makpal
    APPLIED SCIENCES-BASEL, 2023, 13 (01):