Point Cloud Feature Extraction Network Based on Multiscale Feature Dynamic Fusion

Cited by: 1
Authors
Liu, Jing [1 ,2 ]
Zhang, Yuan [1 ,2 ]
Zhang, Le [3 ]
Li, Bo [1 ,2 ]
Yang, Xiaowen [1 ,2 ]
Affiliations
[1] North Univ China, Sch Data Sci & Technol, Taiyuan 030051, Shanxi, Peoples R China
[2] Shanxi Prov Key Lab Machine Vis & Virtual Real, Taiyuan 030051, Shanxi, Peoples R China
[3] North Automat Control Technol Inst, Dept Simulat Equipment, Taiyuan 030006, Shanxi, Peoples R China
Keywords
point cloud registration; feature extraction; multi-scale feature; feature fusion; attention mechanism; EFFICIENT; ROBUST
DOI
10.3788/LOP241237
CLC classification
TM [Electrical engineering]; TN [Electronic technology, communication technology]
Discipline codes
0808; 0809
Abstract
Accurate feature extraction in point cloud registration is often hindered by noise, surface complexity, limited overlap, and scale differences, which limit further gains in registration accuracy. To address this issue, this study proposes a point cloud registration algorithm based on the dynamic fusion of multiscale features. First, sparse convolution operations at different network depths extract multilevel scale information from the point cloud data, capturing rich detail from both local and global structures. The multilevel scale features are then concatenated into a fused feature representation, which improves the completeness and accuracy of the extracted features. In addition, the algorithm introduces a squeeze-excitation attention mechanism on the network's skip connections to adaptively learn and reinforce important feature information, and integrates a global context module at the residual connections to better capture global structural information. Finally, registration is completed by estimating the rigid transformation matrix with the random sample consensus (RANSAC) algorithm. Experimental results demonstrate that the proposed method offers significant advantages in feature extraction and registration accuracy over mainstream methods, effectively improving the performance of point cloud registration.
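To illustrate the fusion-and-attention stage the abstract describes, the following PyTorch sketch concatenates per-point features taken from several encoder depths and re-weights the fused channels with a squeeze-excitation gate. This is a minimal sketch, not the authors' implementation: the class names (SEGate, MultiScaleFusion), channel widths, reduction ratio, and the use of dense tensors in place of the paper's sparse convolutions are all assumptions.

import torch
import torch.nn as nn

class SEGate(nn.Module):
    # Squeeze-and-excitation channel gate (Hu et al., 2018) for per-point
    # features of shape (B, C, N); a hypothetical stand-in for the attention
    # the paper attaches to its skip connections.
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=2)              # squeeze: average over points -> (B, C)
        w = self.fc(w).unsqueeze(-1)   # excitation: channel weights -> (B, C, 1)
        return x * w                   # rescale each channel of every point

class MultiScaleFusion(nn.Module):
    # Concatenate features from several encoder depths (assumed already
    # upsampled to a common point count N) and project back to one width.
    def __init__(self, scale_channels=(32, 64, 128), out_channels=128):
        super().__init__()
        self.proj = nn.Conv1d(sum(scale_channels), out_channels, kernel_size=1)
        self.gate = SEGate(out_channels)

    def forward(self, feats):
        fused = torch.cat(feats, dim=1)     # (B, sum(C_i), N)
        return self.gate(self.proj(fused))  # (B, out_channels, N)

# Usage: three feature maps from different depths, 1024 points each.
feats = [torch.randn(2, c, 1024) for c in (32, 64, 128)]
out = MultiScaleFusion()(feats)             # -> (2, 128, 1024)

In the full pipeline, per-point descriptors of this kind would be matched between the source and target clouds, and RANSAC would then estimate the rigid transformation from the putative correspondences.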
Pages: 11
Related papers
22 in total
  • [1] Aoki, Yasuhiro; Goforth, Hunter; Srivatsan, Rangaprasad Arun; Lucey, Simon. PointNetLK: Robust & Efficient Point Cloud Registration using PointNet. CVPR 2019: 7156-7165.
  • [2] Bai, Xuyang; Luo, Zixin; Zhou, Lei; Fu, Hongbo; Quan, Long; Tai, Chiew-Lan. D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features. CVPR 2020: 6358-6366.
  • [3] Bello, Saifullahi Aminu; Yu, Shangshu; Wang, Cheng; Adam, Jibril Muhmmad; Li, Jonathan. Review: Deep Learning on 3D Point Clouds. Remote Sensing, 2020, 12(11).
  • [4] Cao, Yue; Xu, Jiarui; Lin, Stephen; Wei, Fangyun; Hu, Han. GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond. ICCVW 2019: 1971-1980.
  • [5] Chen, Zhi; Sun, Kun; Yang, Fan; Tao, Wenbing. SC2-PCR: A Second Order Spatial Compatibility for Efficient and Robust Point Cloud Registration. CVPR 2022: 13211-13221.
  • [6] Choy, Christopher; Park, Jaesik; Koltun, Vladlen. Fully Convolutional Geometric Features. ICCV 2019: 8957-8965.
  • [7] Dai, Angela; Qi, Charles Ruizhongtai; Niessner, Matthias. Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis. CVPR 2017: 6545-6554.
  • [8] Gojcic, Zan; Zhou, Caifa; Wegner, Jan D.; Wieser, Andreas. The Perfect Match: 3D Point Cloud Matching with Smoothed Densities. CVPR 2019: 5540-5549.
  • [9] Hu, Jie. Squeeze-and-Excitation Networks. CVPR 2018: 7132. DOI: 10.1109/CVPR.2018.00745; 10.1109/TPAMI.2019.2913372.
  • [10] Huang, Xiaoshui. arXiv, 2021. DOI: 10.48550/arXiv.2103.02690.