A Lightweight Two-End Feature Fusion Network for Object 6D Pose Estimation

Cited by: 1
Authors
Zuo, Ligang [1 ]
Xie, Lun [1 ]
Pan, Hang [1 ]
Wang, Zhiliang [1 ]
Affiliations
[1] University of Science and Technology Beijing, School of Computer and Communication Engineering, Beijing 100083, People's Republic of China
Funding
Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
object pose estimation; two-end feature fusion; CNN; PointNet; PointNet++; depthwise separable convolution;
DOI
10.3390/machines10040254
CLC Classification (Chinese Library Classification)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809
Abstract
Many current object pose estimation methods rely on either images or point clouds alone, which prevents them from accurately estimating object pose under occlusion and poor illumination. In addition, these models have large numbers of parameters and cannot be deployed on mobile devices. We therefore propose a lightweight two-end feature fusion network that effectively combines images and point clouds for accurate object pose estimation. First, a PointNet network extracts point cloud features, which are fused with the image at the pixel level before image features are extracted by a CNN. Next, the extracted image features are fused with the point cloud point by point, and features are extracted from the result by an improved PointNet++ network. Finally, a set of center-point features is obtained, a pose is estimated from each feature, and the pose with the highest confidence is selected as the final result. Furthermore, we apply depthwise separable convolutions to reduce the number of model parameters. Experiments show that the proposed method performs well on the Linemod and Occlusion Linemod datasets; the model has few parameters and remains robust under occlusion and in low-light conditions.
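The abstract credits depthwise separable convolutions for the reduced parameter count. Below is a minimal PyTorch sketch of that standard building block, not the authors' actual implementation; the module name, channel sizes, and kernel size are illustrative assumptions. It shows how a dense k x k convolution is replaced by a per-channel depthwise convolution followed by a 1x1 pointwise convolution.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) k x k
    convolution followed by a 1x1 (pointwise) convolution that mixes channels.
    Replaces a dense k x k convolution with far fewer parameters."""
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # Depthwise: one k x k filter per input channel (groups = in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   stride=stride, padding=padding,
                                   groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution combines information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

if __name__ == "__main__":
    dense = nn.Conv2d(64, 128, 3, padding=1, bias=False)
    separable = DepthwiseSeparableConv(64, 128)
    n_dense = sum(p.numel() for p in dense.parameters())       # 64*128*3*3 = 73,728
    n_sep = sum(p.numel() for p in separable.parameters())     # conv weights + BatchNorm
    print(n_dense, n_sep)
    x = torch.randn(1, 64, 32, 32)
    print(separable(x).shape)  # torch.Size([1, 128, 32, 32])
```

For a 3x3 layer mapping 64 to 128 channels, the dense convolution holds 64 x 128 x 3 x 3 = 73,728 weights, while the separable version holds 64 x 3 x 3 + 64 x 128 = 8,768 convolution weights, roughly an 8x reduction, which is the kind of saving the abstract refers to.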
Pages: 18
Related Papers
50 records in total
  • [31] Binocular vision object 6D pose estimation based on circulatory neural network. Yang H.; Li Z.; Kang Z.-Y.; Tian B.; Dong Q. Journal of Zhejiang University (Engineering Science), 2023, 57(11): 2179-2187
  • [32] Attention-guided RGB-D Fusion Network for Category-level 6D Object Pose Estimation. Wang, Hao; Li, Weiming; Kim, Jiyeon; Wang, Qiang. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022: 10651-10658
  • [33] An efficient lightweight deep neural network for real-time object 6D pose estimation with RGB-D inputs. Liang, Yu; Chen, Fan; Liang, Guoyuan; Wu, Xinyu; Feng, Wei. 2021 International Joint Conference on Neural Networks (IJCNN), 2021
  • [34] FFB6D: A Full Flow Bidirectional Fusion Network for 6D Pose Estimation. He, Yisheng; Huang, Haibin; Fan, Haoqiang; Chen, Qifeng; Sun, Jian. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 3002-3012
  • [35] DRNet: A Depth-Based Regression Network for 6D Object Pose Estimation. Jin, Lei; Wang, Xiaojuan; He, Mingshu; Wang, Jingyue. Sensors, 2021, 21(5): 1-15
  • [36] SilhoNet: An RGB Method for 6D Object Pose Estimation. Billings, Gideon; Johnson-Roberson, Matthew. IEEE Robotics and Automation Letters, 2019, 4(4): 3727-3734
  • [37] On Object Symmetries and 6D Pose Estimation from Images. Pitteri, Giorgia; Ramamonjisoa, Michael; Ilic, Slobodan; Lepetit, Vincent. 2019 International Conference on 3D Vision (3DV), 2019: 614-622
  • [38] PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes. Xiang, Yu; Schmidt, Tanner; Narayanan, Venkatraman; Fox, Dieter. Robotics: Science and Systems XIV, 2018
  • [39] Confidence-Based 6D Object Pose Estimation. Huang, Wei-Lun; Hung, Chun-Yi; Lin, I-Chen. IEEE Transactions on Multimedia, 2022, 24: 3025-3035
  • [40] PoseMatcher: One-shot 6D Object Pose Estimation by Deep Feature Matching. Castro, Pedro; Kim, Tae-Kyun. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023: 2140-2149