Efficient Monocular Coarse-to-Fine Object Pose Estimation

Cited by: 0
Authors
Feng, Rong [1 ]
Zhang, Hong [1 ]
Affiliations
[1] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2S4, Canada
Keywords
DOI
None available
CLC Classification
TP [automation and computer technology];
Discipline Code
0812
Abstract
The vision and robotics communities have developed different methods for object pose estimation, each with its own advantages and disadvantages. A popular method stores model images of an object from many viewpoints, together with their 2D-to-3D correspondences, in a database offline. At run time, local feature matching is applied between the current view and the model images in the database, and for the top-matched image a PnP algorithm combined with RANSAC is used to estimate the object pose. Such a method has good accuracy but lacks efficiency, consuming O(MN²) time, where M is the number of models and N is the number of features per model. To tackle this problem, we propose a method that improves efficiency in two ways. First, we employ hierarchical clustering to find the proper number of model images to represent each object, decreasing M. Second, we propose a coarse-to-fine object pose estimation method that decreases the time needed to find the best-matching model image. Specifically, in the coarse step, given an image, the most similar model image is retrieved using a global image descriptor, which we compute with a pre-trained deep neural network. In the fine step, local descriptor matching finds corresponding keypoints between the current image and the model image found in the coarse step. Finally, with pre-registered 2D-to-3D correspondences for each model, an accurate object pose is computed using PnP with RANSAC. We evaluate the performance of our method on the Amazon Picking Challenge dataset.
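The abstract outlines a concrete three-stage pipeline, so a minimal sketch may help make it tangible. The Python code below is illustrative only, not the authors' implementation: it assumes ORB features stand in for the paper's local descriptors, that global descriptors (e.g., from a pre-trained CNN) are already available as NumPy vectors, and that each stored model keypoint carries a registered 3D point; all function names and thresholds are assumptions.

```python
import numpy as np
import cv2
from scipy.cluster.hierarchy import fcluster, linkage


def select_model_views(global_descs, dist_thresh=0.3):
    """Offline: hierarchically cluster the global descriptors of all
    candidate model views and keep one representative per cluster,
    shrinking the number of stored views M (threshold is a guess)."""
    Z = linkage(global_descs, method="average", metric="cosine")
    labels = fcluster(Z, t=dist_thresh, criterion="distance")
    return [int(np.where(labels == c)[0][0]) for c in np.unique(labels)]


def coarse_retrieve(query_desc, model_descs):
    """Coarse step: return the index of the model view whose global
    descriptor has the highest cosine similarity to the query's."""
    q = query_desc / np.linalg.norm(query_desc)
    m = model_descs / np.linalg.norm(model_descs, axis=1, keepdims=True)
    return int(np.argmax(m @ q))


def fine_pose(query_img, model_desc_local, model_pts3d, K):
    """Fine step: match the query's ORB descriptors against the
    retrieved view's pre-registered keypoints (each carrying a 3D
    point), then estimate pose with PnP inside a RANSAC loop."""
    orb = cv2.ORB_create(nfeatures=1000)
    kps_q, desc_q = orb.detectAndCompute(query_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(model_desc_local, desc_q, k=2):
        # Lowe-style ratio test keeps only distinctive matches.
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 6:  # too few correspondences for a stable PnP solve
        return None
    pts3d = np.float32([model_pts3d[m.queryIdx] for m in good])
    pts2d = np.float32([kps_q[m.trainIdx].pt for m in good])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        pts3d, pts2d, K, None, reprojectionError=4.0)
    return (rvec, tvec) if ok else None
```

Under these assumptions the efficiency gain is visible in the structure: fine matching runs against a single retrieved view instead of all M, so the per-query matching cost drops from O(MN²) to roughly O(M) for retrieval plus O(N²) for one match.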
Pages: 1617-1622
Page count: 6
Related Papers (50 total)
  • [1] Coarse-to-fine Animal Pose and Shape Estimation
    Li, Chen
    Lee, Gim Hee
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] DPDFormer: A Coarse-to-Fine Model for Monocular Depth Estimation
    Liu, Chunpu
    Yang, Guanglei
    Zuo, Wangmeng
    Zang, Tianyi
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (05)
  • [3] PoseDiffusion: A Coarse-to-Fine Framework for Unseen Object 6-DoF Pose Estimation
    Zhou, Jiaming
    Zhu, Qing
    Wang, Yaonan
    Feng, Mingtao
    Wu, Chengzhong
    Liu, Xuebing
    Huang, Jianan
    Mian, Ajmal
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (09) : 11127 - 11138
  • [4] Coarse-to-fine Planar Regularization for Dense Monocular Depth Estimation
    Liwicki, Stephan
    Zach, Christopher
    Miksik, Ondrej
    Torr, Philip H. S.
    [J]. COMPUTER VISION - ECCV 2016, PT II, 2016, 9906 : 458 - 474
  • [5] Coarse-to-Fine 3D Human Pose Estimation
    Guo, Yu
    Zhao, Lin
    Zhang, Shanshan
    Yang, Jian
    [J]. IMAGE AND GRAPHICS, ICIG 2019, PT III, 2019, 11903 : 579 - 592
  • [6] Chfnet: a coarse-to-fine hierarchical refinement model for monocular depth estimation
    Chen, Han
    Wang, Yongxiong
    [J]. MACHINE VISION AND APPLICATIONS, 2024, 35 (04)
  • [7] Coarse-to-Fine Hand-Object Pose Estimation with Interaction-Aware Graph Convolutional Network
    Zhang, Maomao
    Li, Ao
    Liu, Honglei
    Wang, Minghui
    [J]. SENSORS, 2021, 21 (23)
  • [8] A deep Coarse-to-Fine network for head pose estimation from synthetic data
    Wang, Yujia
    Liang, Wei
    Shen, Jianbing
    Jia, Yunde
    Yu, Lap-Fai
    [J]. PATTERN RECOGNITION, 2019, 94 : 196 - 206
  • [9] A Multiscale Coarse-to-Fine Human Pose Estimation Network With Hard Keypoint Mining
    Jiang, Xiaoyan
    Tao, Hangyu
    Hwang, Jenq-Neng
    Fang, Zhijun
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2024, 54 (03): 1730 - 1741
  • [10] Learning Coarse-to-Fine Sparselets for Efficient Object Detection and Scene Classification
    Cheng, Gong
    Han, Junwei
    Guo, Lei
    Liu, Tianming
    [J]. 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 1173 - 1181