A Lightweight Two-End Feature Fusion Network for Object 6D Pose Estimation

Cited by: 1
Authors
Zuo, Ligang [1 ]
Xie, Lun [1 ]
Pan, Hang [1 ]
Wang, Zhiliang [1 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing 100083, Peoples R China
Funding
Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
object pose estimation; two-end feature fusion; CNN; PointNet; PointNet++; depthwise separable convolution;
DOI
10.3390/machines10040254
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
Many existing object pose estimation methods rely on either images or point clouds alone, which prevents them from estimating object pose accurately under occlusion and poor illumination. In addition, these models have large numbers of parameters and cannot be deployed on mobile devices. We therefore propose a lightweight two-end feature fusion network that effectively exploits both images and point clouds for accurate object pose estimation. First, a PointNet network extracts point cloud features, which are fused with the image at the pixel level before image features are extracted by a CNN. The extracted image features are then fused with the point cloud point by point, and features are extracted from the fused result by an improved PointNet++ network. Finally, a set of center-point features is obtained, a pose is estimated from each feature, and the pose with the highest confidence is selected as the final result. Furthermore, we apply depthwise separable convolutions to reduce the number of model parameters. Experiments show that the proposed method performs well on the Linemod and Occlusion Linemod datasets, has a small parameter count, and is robust under occlusion and low-light conditions.
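The abstract names two concrete building blocks: fusing per-point geometric features with per-pixel image features, and replacing standard convolutions with depthwise separable convolutions to shrink the model. The snippet below is a minimal PyTorch sketch of those two ideas only, not the authors' released implementation; all class, function, and argument names are illustrative assumptions, and the fusion follows the generic per-point gather-and-concatenate pattern the abstract describes.

```python
# Minimal sketch (not the paper's code) of:
# (1) depthwise separable convolution, and
# (2) per-point fusion of image features with point cloud features.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


def fuse_pixel_point(img_feat, pt_feat, pix_idx):
    """Concatenate each point's geometric feature with the image feature of
    the pixel it projects to (dense per-point fusion).

    img_feat: (B, C_img, H, W) CNN feature map
    pt_feat:  (B, C_pt, N)     point-wise features (e.g., from PointNet)
    pix_idx:  (B, N)           flattened pixel index of each point, int64
    returns:  (B, C_img + C_pt, N)
    """
    b, c, h, w = img_feat.shape
    flat = img_feat.view(b, c, h * w)             # (B, C_img, H*W)
    idx = pix_idx.unsqueeze(1).expand(-1, c, -1)  # (B, C_img, N)
    img_per_point = torch.gather(flat, 2, idx)    # image feature of each point
    return torch.cat([img_per_point, pt_feat], dim=1)


if __name__ == "__main__":
    conv = DepthwiseSeparableConv(64, 128)
    print(conv(torch.randn(1, 64, 120, 160)).shape)   # torch.Size([1, 128, 120, 160])
    fused = fuse_pixel_point(torch.randn(2, 32, 120, 160),
                             torch.randn(2, 64, 500),
                             torch.randint(0, 120 * 160, (2, 500)))
    print(fused.shape)                                 # torch.Size([2, 96, 500])
```

The parameter saving comes from factorizing a k x k convolution into a per-channel spatial filter plus a 1x1 channel mixer, which is the standard motivation for depthwise separable convolutions; the fusion function simply looks up, for every 3D point, the image feature at its projected pixel and concatenates the two feature vectors.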
Pages: 18
Related Papers
50 records in total
  • [21] Single Shot 6D Object Pose Estimation
    Kleeberger, Kilian
    Huber, Marco F.
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 6239 - 6245
  • [22] BOP: Benchmark for 6D Object Pose Estimation
    Hodan, Tomas
    Michel, Frank
    Brachmann, Eric
    Kehl, Wadim
    Buch, Anders Glent
    Kraft, Dirk
    Drost, Bertram
    Vidal, Joel
    Ihrke, Stephan
    Zabulis, Xenophon
    Sahin, Caner
    Manhardt, Fabian
    Tombari, Federico
    Kim, Tae-Kyun
    Matas, Jiri
    Rother, Carsten
    COMPUTER VISION - ECCV 2018, PT X, 2018, 11214 : 19 - 35
  • [23] Survey on 6D Pose Estimation of Rigid Object
    Chen, Jiale
    Zhang, Lijun
    Liu, Yi
    Xu, Chi
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 7440 - 7445
  • [24] An efficient network for category-level 6D object pose estimation
    Sun, Shantong
    Liu, Rongke
    Sun, Shuqiao
    Yang, Xinxin
    Lu, Guangshan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 : 1643 - 1651
  • [25] An efficient network for category-level 6D object pose estimation
    Sun, Shantong
    Liu, Rongke
    Sun, Shuqiao
    Yang, Xinxin
    Lu, Guangshan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (07) : 1643 - 1651
  • [26] NeRF-Feat: 6D Object Pose Estimation using Feature Rendering
    Vutukur, Shishir Reddy
    Brock, Heike
    Busam, Benjamin
    Birdal, Tolga
    Hutter, Andreas
    Ilic, Slobodan
    2024 INTERNATIONAL CONFERENCE IN 3D VISION, 3DV 2024, 2024, : 1146 - 1155
  • [27] 6D Object Pose Estimation with Attention Aware Bi-gated Fusion
    Wang, Laichao
    Lu, Weiding
    Tian, Yuan
    Guan, Yong
    Shao, Zhenzhou
    Shi, Zhiping
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT II, 2024, 14448 : 573 - 585
  • [28] A modal fusion network with dual attention mechanism for 6D pose estimation
    Wei, Liangrui
    Xie, Feifei
    Sun, Lin
    Chen, Jinpeng
    Zhang, Zhipeng
    VISUAL COMPUTER, 2024, 40 (10) : 7411 - 7425
  • [29] HFT6D: Multimodal 6D object pose estimation based on hierarchical feature transformer
    An, Yunnan
    Yang, Dedong
    Song, Mengyuan
    MEASUREMENT, 2024, 224
  • [30] BDR6D: Bidirectional Deep Residual Fusion Network for 6D Pose Estimation
    Liu, Penglei
    Zhang, Qieshi
    Cheng, Jun
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (02) : 1793 - 1804