Self-Generated Dataset for Category and Pose Estimation of Deformable Object

Cited: 0
Authors
Hou, Yew Cheong [1 ]
Sahari, Khairul Salleh Mohamed [1 ]
Affiliation
[1] Univ Tenaga Nas Selangor, Dept Mech Engn, Kajang, Malaysia
Keywords
Deformable object; robotic manipulation; computer vision; particle based model;
DOI
None
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work considers the problem of garment handling by a general household robot, focusing on the classification and pose estimation of a hanging garment during an unfolding procedure. Classification and pose estimation of deformable objects such as garments are challenging problems in autonomous robotic manipulation because these objects vary in size and can deform into many different poses when manipulated. Hence, we propose a self-generated synthetic dataset for classifying the category and estimating the pose of a garment handled by a single manipulator. Our approach first fits a garment mesh model, built with particle-based modeling, to a piece of garment roughly spread out on a flat platform; parameters such as landmarks and robotic grasping points can then be estimated from the mesh model. The spread-out garment is subsequently picked up by a single robotic manipulator, and the 2D garment mesh model is simulated in a 3D virtual environment. A dataset of hanging garments is generated by capturing depth images of the real garment at the robotic platform together with images of the garment mesh model rendered in offline simulation. The synthetic dataset collected from simulation shows that the approach performs well and generalizes across different garments of a similar type. The category and pose recognition of the garment can thus be developed further.
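The particle-based modeling mentioned in the abstract can be illustrated, under assumptions, as a minimal mass-spring sketch: a grid of particles connected by structural springs, with one particle pinned where the manipulator grasps the garment, integrated with damped Verlet steps plus distance-constraint relaxation. All parameter values and function names below are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical mass-spring cloth sketch (not the paper's implementation):
# a ROWS x COLS particle grid with structural springs, one pinned grasp
# point, damped Verlet integration, and iterative constraint relaxation.
ROWS, COLS = 8, 6          # particle grid resolution (assumed)
REST = 0.05                # spring rest length in metres (assumed)
GRAVITY = np.array([0.0, 0.0, -9.81])
DT = 0.005                 # integration time step (s)
DAMPING = 0.99             # velocity retained per step
STIFFNESS = 0.5            # fraction of constraint error corrected per pass

def make_grid():
    """Particles laid out flat, plus structural springs between neighbours."""
    pos = np.array([[c * REST, r * REST, 0.0]
                    for r in range(ROWS) for c in range(COLS)])
    springs = []
    for r in range(ROWS):
        for c in range(COLS):
            i = r * COLS + c
            if c + 1 < COLS:
                springs.append((i, i + 1))      # horizontal spring
            if r + 1 < ROWS:
                springs.append((i, i + COLS))   # vertical spring
    return pos, springs

def step(pos, prev, springs, pinned):
    """One damped Verlet step followed by spring-length relaxation."""
    new = pos + DAMPING * (pos - prev) + GRAVITY * DT * DT
    new[pinned] = pos[pinned]                   # grasp point held fixed
    for _ in range(10):                         # constraint iterations
        for i, j in springs:
            d = new[j] - new[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = STIFFNESS * (dist - REST) / dist * d
            if i == pinned:                     # never move the pinned particle
                new[j] -= corr
            elif j == pinned:
                new[i] += corr
            else:
                new[i] += 0.5 * corr
                new[j] -= 0.5 * corr
    return new, pos

pos, springs = make_grid()
prev = pos.copy()
pinned = 0                                      # corner particle is grasped
for _ in range(200):
    pos, prev = step(pos, prev, springs, pinned)

# The free particles hang below the fixed grasp point; landmark and
# grasp-point candidates would be read off such a settled mesh.
print(pos[pinned], pos[-1])
```

In this sketch the "2D garment mesh model simulated in a 3D virtual environment" corresponds to the flat initial grid evolving under gravity while one particle is held, which is the qualitative behaviour of a garment lifted by a single manipulator.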
Pages: 232 - 235
Page count: 4
Related Papers
(50 total)
  • [31] Open-Vocabulary Category-Level Object Pose and Size Estimation
    Cai, Junhao
    He, Yisheng
    Yuan, Weihao
    Zhu, Siyu
    Dong, Zilong
    Bo, Liefeng
    Chen, Qifeng
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (09): 7661 - 7668
  • [32] Weakly electric fish use self-generated motion to discriminate object shape
    Skeels, Sarah
    von der Emde, Gerhard
    de Perera, Theresa Burt
    ANIMAL BEHAVIOUR, 2023, 205 : 47 - 63
  • [33] Self-Supervised Category-Level 6D Object Pose Estimation With Optical Flow Consistency
    Zaccaria, Michela
    Manhardt, Fabian
    Di, Yan
    Tombari, Federico
    Aleotti, Jacopo
    Giorgini, Mikhail
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (05) : 2510 - 2517
  • [34] The autogenic (self-generated) massacre
    Mullen, PE
    BEHAVIORAL SCIENCES & THE LAW, 2004, 22 (03) : 311 - 323
  • [35] What are self-generated actions?
    Schueuer, Friederike
    Haggard, Patrick
    CONSCIOUSNESS AND COGNITION, 2011, 20 (04) : 1697 - 1704
  • [36] THERMOGRAVIMETRY IN SELF-GENERATED ATMOSPHERES
    GARN, PD
    KESSLER, JE
    ANALYTICAL CHEMISTRY, 1960, 32 (12) : 1563 - 1565
  • [37] Omni6D: Large-Vocabulary 3D Object Dataset for Category-Level 6D Object Pose Estimation
    Zhang, Mengchen
    Wu, Tong
    Wang, Tai
    Wang, Tengfei
    Liu, Ziwei
    Lin, Dahua
    COMPUTER VISION - ECCV 2024, PT XXV, 2025, 15083 : 216 - 232
  • [38] Multi-sensor 3D object dataset for object recognition with full pose estimation
    Alberto Garcia-Garcia
    Sergio Orts-Escolano
    Sergiu Oprea
    Jose Garcia-Rodriguez
    Jorge Azorin-Lopez
    Marcelo Saval-Calvo
    Miguel Cazorla
    Neural Computing and Applications, 2017, 28 : 941 - 952
  • [39] Multi-sensor 3D object dataset for object recognition with full pose estimation
    Garcia-Garcia, Alberto
    Orts-Escolano, Sergio
    Oprea, Sergiu
    Garcia-Rodriguez, Jose
    Azorin-Lopez, Jorge
    Saval-Calvo, Marcelo
    Cazorla, Miguel
    NEURAL COMPUTING & APPLICATIONS, 2017, 28 (05): 941 - 952
  • [40] RANSAC Optimization for Category-level 6D Object Pose Estimation
    Chen, Ying
    Kang, Guixia
    Wang, Yiping
    2020 5TH INTERNATIONAL CONFERENCE ON MECHANICAL, CONTROL AND COMPUTER ENGINEERING (ICMCCE 2020), 2020, : 50 - 56