6DoF Pose Estimation for Intricately-Shaped Object

Cited by: 0
Authors
Jiao, Tonghui [1 ]
Xia, Yanzhao [2 ]
Gao, Xiaosong [3 ]
Chen, Yongyu [1 ]
Zhao, Qunfei [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Software Engn, Xian 710049, Shaanxi, Peoples R China
[3] Shanghai CRRC Intelligent Syst Co Ltd, Shanghai, Peoples R China
Keywords
6DoF pose estimation; Intricately shaped objects; Robotic picking; CAD models; Template matching
DOI
Not available
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Quick and accurate 6-DoF pose estimation of a randomly arranged, intricately shaped object plays an important role in robotic picking applications. In this paper we propose a template-matching approach that uses an aligned RGB-D image together with prior knowledge to recover the 6-DoF pose of a randomly arranged object. First, the object's template database is generated from its CAD model using a defined virtual imaging model. Then, in the practical phase, we segment the RGB-D image to obtain a mask locating the object, and convert the masked data into a comparable, scale-invariant format. Finally, a similarity function with an adjustable attention weight between color and depth data is defined to find the Top-K matched templates. The selected templates are refined by ICP to produce the final pose. Experiments are conducted with an RGB-D camera and a robot arm that picks up given intricately shaped objects. The average recognition rate of the object in different poses is 97.826%. The method also works well with multiple randomly arranged objects, provided good masks representing their locations are available.
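The abstract's matching step (a similarity function with an adjustable attention weight between color and depth, followed by Top-K template selection) can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function names, the linear weighting form, and the normalization choices are all assumptions, and the ICP refinement stage is omitted.

```python
import numpy as np

def template_similarity(color_q, depth_q, color_t, depth_t, w_color=0.5):
    """Hypothetical similarity between a query observation and a template.

    Combines mean per-pixel color and depth discrepancies with an
    adjustable attention weight w_color in [0, 1]; the paper's exact
    similarity function is not specified in the abstract.
    """
    # Normalize discrepancies to roughly [0, 1] so they are comparable.
    d_color = np.mean(np.abs(color_q - color_t)) / 255.0
    d_depth = np.mean(np.abs(depth_q - depth_t)) / (np.max(depth_t) + 1e-9)
    return 1.0 - (w_color * d_color + (1.0 - w_color) * d_depth)

def top_k_templates(query, templates, k=5, w_color=0.5):
    """Rank (color, depth) templates by similarity to the query.

    Returns the k best (index, score) pairs; in the described pipeline,
    these candidates would then be refined by ICP.
    """
    color_q, depth_q = query
    scores = [template_similarity(color_q, depth_q, c_t, d_t, w_color)
              for c_t, d_t in templates]
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in order]
```

With this weighting, `w_color=1.0` attends only to color and `w_color=0.0` only to depth; a template identical to the query scores 1.0 and ranks first.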
Pages: 199-204
Page count: 6