Estimation of 6D Object Pose Using a 2D Bounding Box

Citations: 3
Authors
Hong, Yong [1]
Liu, Jin [1]
Jahangir, Zahid [1]
He, Sheng [1]
Zhang, Qing [2]
Affiliations
[1] Wuhan University, State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan 430079, China
[2] Harbin Engineering University, College of Intelligent Systems Science and Engineering, Harbin 150001, China
Funding
National Natural Science Foundation of China
Keywords
6D pose estimation; quaternion; Bounding Box Equation; LineMod
DOI
10.3390/s21092939
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
This paper provides an efficient way of estimating the 6-Dimensional (6D) pose of objects from a single RGB image. A quaternion is used to represent an object's three-dimensional rotation, but the poses represented by q and -q are equivalent, while the L2 loss between them is very large. We therefore define a new quaternion pose loss function to resolve this ambiguity. Based on this, we designed a new convolutional neural network, named Q-Net, to estimate an object's pose. Because a quaternion must be a unit vector, a normalization layer is added to Q-Net to keep the pose output on the four-dimensional unit sphere. We also propose a new algorithm, called the Bounding Box Equation, to recover the 3D translation quickly and effectively from a 2D bounding box. Together, these provide an entirely new way of estimating the 3D rotation (R) and the 3D translation (t) from only one RGB image, and the method can upgrade any traditional 2D-box prediction algorithm into a 3D prediction model. We evaluated our model on the LineMod dataset, and experiments show that our method is accurate and efficient in terms of L2 loss and computation time.
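
This record does not reproduce the paper's formulas, so the following is a minimal sketch, in PyTorch (the framework is an assumption), of the two ideas the abstract describes: a sign-invariant quaternion loss that treats q and -q as the same pose, preceded by the unit-sphere normalization the abstract attributes to Q-Net, and a generic pinhole back-projection of a 2D box to a 3D translation. The min-over-±q form of the loss and the back-projection are common stand-ins, not the paper's exact loss or its Bounding Box Equation; all function names here are illustrative.

    import torch
    import torch.nn.functional as F

    def quaternion_pose_loss(q_pred, q_gt):
        # Project the raw network output onto the 4D unit sphere, as the
        # abstract's normalization layer does for Q-Net.
        q_pred = F.normalize(q_pred, p=2, dim=-1)
        # q and -q encode the same rotation, so a plain L2 loss can be large
        # even for a correct prediction; take the smaller distance to +-q_gt.
        d_pos = torch.sum((q_pred - q_gt) ** 2, dim=-1)
        d_neg = torch.sum((q_pred + q_gt) ** 2, dim=-1)
        return torch.minimum(d_pos, d_neg).mean()

    def translation_from_bbox(bbox, obj_diameter, fx, fy, cx, cy):
        # Generic pinhole back-projection, NOT the paper's Bounding Box
        # Equation: depth from the ratio of the object's metric diameter
        # to its pixel extent, then x/y from the ray through the box center.
        x1, y1, x2, y2 = bbox
        tz = fx * obj_diameter / max(x2 - x1, y2 - y1)
        u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        return (u - cx) * tz / fx, (v - cy) * tz / fy, tz

Another common sign-invariant choice is 1 - |<q_pred, q_gt>|, which is likewise minimized when the prediction matches either q or -q.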
Pages: 18