Prior Geometry Guided Direct Regression Network for Monocular 6D Object Pose Estimation

Cited by: 0
Authors
Liu, Chongpei [1 ]
Sun, Wei [1 ,3 ]
Zhang, Keyi [2 ]
Liu, Jian [1 ]
Zhang, Xing [1 ]
Fan, Shimeng [1 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Hunan, Peoples R China
[2] Sichuan Univ Pittsburgh Inst, Chengdu 610207, Peoples R China
[3] Hunan Univ, Shenzhen Res Inst, Virtual Univ Pk, Shenzhen 518063, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Object pose estimation; Prior geometry; Direct regression;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Monocular 6D object pose estimation, which aims to estimate the 6-degree-of-freedom pose of known objects, has been gaining increasing attention. Correspondence-based methods are currently the mainstream approach: they analyze the geometric information in 2D RGB images and establish 2D-3D correspondences from which the 6D pose is computed. However, pose estimation accuracy suffers because 2D RGB images alone cannot provide sufficient geometric information. To address this problem, we propose a novel prior geometry guided direct regression network (PGDRN), which fully exploits the prior geometric knowledge contained in the given object models. Specifically, we extract a prior feature from the object model and concatenate it with the color feature extracted from the 2D image to construct a prior-color feature that aggregates prior and viewpoint-specific geometric information, improving our method's accuracy and robustness. Experiments on the two well-known LM-O and YCB-V datasets show that our method significantly outperforms state-of-the-art (SOTA) methods.
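As a rough illustration of the fusion described in the abstract, the following minimal PyTorch sketch concatenates a prior geometry feature extracted from the object model with a color feature extracted from the RGB crop and regresses the pose directly. All module names, feature sizes, and the 9-dimensional pose parameterization (a 6D rotation representation plus translation) are assumptions made here for illustration; they are not the paper's actual PGDRN design.

```python
import torch
import torch.nn as nn

class PriorColorFusion(nn.Module):
    """Illustrative prior-color feature fusion for direct pose regression.

    Hypothetical module names and sizes; the actual PGDRN architecture is
    not specified in this record.
    """

    def __init__(self, prior_dim=64, color_dim=128, pose_dim=9):
        super().__init__()
        # Shared MLP over model points (PointNet-style) -> global prior feature.
        self.prior_encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, prior_dim, 1), nn.ReLU(),
        )
        # Lightweight CNN over the RGB crop -> global color feature.
        self.color_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, color_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Direct regression head over the concatenated prior-color feature.
        # pose_dim=9 assumes a 6D rotation representation (6) plus translation (3).
        self.pose_head = nn.Sequential(
            nn.Linear(prior_dim + color_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),
        )

    def forward(self, model_points, rgb_crop):
        # model_points: (B, N, 3) points sampled from the known object model.
        # rgb_crop:     (B, 3, H, W) detected object region from the RGB image.
        prior = self.prior_encoder(model_points.transpose(1, 2)).max(dim=2).values
        color = self.color_encoder(rgb_crop).flatten(1)
        prior_color = torch.cat([prior, color], dim=1)  # prior-color feature
        return self.pose_head(prior_color)

# Example: batch of 2 objects, 1024 model points each, 128x128 RGB crops.
net = PriorColorFusion()
pose = net(torch.randn(2, 1024, 3), torch.randn(2, 3, 128, 128))
print(pose.shape)  # torch.Size([2, 9])
```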
Pages: 6241-6246
Number of pages: 6
Related papers
50 records in total
  • [1] Wang, Gu; Manhardt, Fabian; Tombari, Federico; Ji, Xiangyang. GDR-Net: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021, 2021: 16606-16616.
  • [2] Gan, Haiqing; Wang, Lihui; Su, Yuzuwei; Ruan, Wenjun; Jiao, Xize. Prior-information-guided corresponding point regression network for 6D pose estimation. Computers & Graphics-UK, 2024, 121.
  • [3] Jin, Lei; Wang, Xiaojuan; He, Mingshu; Wang, Jingyue. DRNet: A Depth-Based Regression Network for 6D Object Pose Estimation. Sensors, 2021, 21 (05): 1-15.
  • [4] Yuan, Honglin; Veltkamp, Remco C. 6D Object Pose Estimation With Color/Geometry Attention Fusion. 16th IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV 2020), 2020: 529-535.
  • [5] Yin, Pengshuai; Ye, Jiayong; Lin, Guoshen; Wu, Qingyao. Graph neural network for 6D object pose estimation. Knowledge-Based Systems, 2021, 218.
  • [6] Fu, Shouxu; Li, Xiaoning; Yu, Xiangdong; Cao, Lu; Li, Xingxing. Generalizable and Accurate 6D Object Pose Estimation Network. Pattern Recognition and Computer Vision, PRCV 2023, Pt III, 2024, 14427: 312-324.
  • [7] Thu-Uyen Nguyen; Van-Duc Vu; Van-Thiep Nguyen; Ngoc-Anh Hoang; Duy-Quang Vu; Duc-Thanh Tran; Khanh-Toan Phan; Anh-Truong Mai; Van-Hiep Duong; Cong-Trinh Chan; Ngoc-Trung Ho; Quang-Tri Duong; Phuc-Quan Ngo; Dinh-Cuong Hoang. Multi Task-Guided 6D Object Pose Estimation. Proceedings of the 2024 9th International Conference on Intelligent Information Technology, ICIIT 2024, 2024: 215-222.
  • [8] Trabelsi, Ameni; Chaabane, Mohamed; Blanchard, Nathaniel; Beveridge, Ross. A Pose Proposal and Refinement Network for Better 6D Object Pose Estimation. 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021, 2021: 2381-2390.
  • [9] Chen, Wei; Duan, Jinming; Basevi, Hector; Chang, Hyung Jin; Leonardis, Ales. PointPoseNet: Point Pose Network for Robust 6D Object Pose Estimation. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020: 2813-2822.
  • [10] Zhang, Zheng; Zhou, Xingru; Liu, Houde. SLPRNet: A 6D Object Pose Regression Network by Sample Learning. ICAART: Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Vol 2, 2021: 1233-1240.