IS-CAT: Intensity-Spatial Cross-Attention Transformer for LiDAR-Based Place Recognition

Cited by: 0
Authors
Joo, Hyeong-Jun [1]
Kim, Jaeho [2]
Affiliations
[1] Sejong Univ, Dept Informat & Commun Engn, Seoul 05006, South Korea
[2] Sejong Univ, Dept Elect Engn, Seoul 05006, South Korea
Keywords
LiDAR place recognition; SLAM; cross-attention transformer network; IS-CAT; SCAN CONTEXT;
DOI
10.3390/s24020582
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Discipline Classification Codes
070302; 081704
Abstract
LiDAR place recognition is a crucial component of autonomous navigation and is essential for loop closure in simultaneous localization and mapping (SLAM) systems. Notably, while camera-based methods struggle under changing conditions such as weather and lighting, LiDAR remains robust to these challenges. This study introduces the intensity-spatial cross-attention transformer (IS-CAT), a novel approach that uses LiDAR to generate global descriptors by fusing spatial and intensity data for enhanced place recognition. The proposed model leverages a cross-attention-to-concatenation mechanism to process and integrate multi-layered LiDAR projections, thereby exploiting the previously unexplored synergy between spatial and intensity data. We demonstrated the performance of IS-CAT through extensive validation on the NCLT dataset. Additionally, we performed indoor evaluations on our Sejong indoor-5F dataset and demonstrated successful application to a 3D LiDAR SLAM system. Our findings highlight descriptors that achieve superior performance across diverse environments. This performance gain is evident in both indoor and outdoor settings, underscoring the practical effectiveness and advancements of our approach.
Pages: 20
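To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of cross-attention between spatial and intensity feature tokens, followed by concatenation and pooling into a single global descriptor. The module names, feature dimensions, symmetric attention layout, and average-pooling head are illustrative assumptions for this sketch, not the IS-CAT architecture as published.

```python
# Minimal sketch (assumed design, not the authors' implementation):
# two feature branches -- spatial and intensity projections of a LiDAR scan --
# are fused by cross-attention, concatenated, and pooled into a global descriptor.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse two token sequences with symmetric cross-attention, then concatenate."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Queries from one modality attend to keys/values of the other modality.
        self.spatial_to_intensity = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.intensity_to_spatial = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.norm_i = nn.LayerNorm(dim)

    def forward(self, spatial_tokens: torch.Tensor, intensity_tokens: torch.Tensor) -> torch.Tensor:
        # spatial_tokens, intensity_tokens: (batch, num_tokens, dim)
        s_attn, _ = self.spatial_to_intensity(spatial_tokens, intensity_tokens, intensity_tokens)
        i_attn, _ = self.intensity_to_spatial(intensity_tokens, spatial_tokens, spatial_tokens)
        s = self.norm_s(spatial_tokens + s_attn)    # residual connection + layer norm
        i = self.norm_i(intensity_tokens + i_attn)
        # Concatenate the two attended streams along the feature dimension.
        return torch.cat([s, i], dim=-1)            # (batch, num_tokens, 2 * dim)


class GlobalDescriptorHead(nn.Module):
    """Pool fused tokens into one L2-normalized descriptor per scan (assumed pooling)."""

    def __init__(self, in_dim: int = 512, out_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, fused_tokens: torch.Tensor) -> torch.Tensor:
        pooled = fused_tokens.mean(dim=1)           # average pooling over tokens
        return nn.functional.normalize(self.proj(pooled), dim=-1)


if __name__ == "__main__":
    # Toy example: 64 tokens per projection, 256-dim features, batch of 2 scans.
    spatial = torch.randn(2, 64, 256)     # features from spatial (range) projections
    intensity = torch.randn(2, 64, 256)   # features from intensity projections
    descriptor = GlobalDescriptorHead()(CrossAttentionFusion()(spatial, intensity))
    print(descriptor.shape)               # torch.Size([2, 256])
```

In a place-recognition loop, descriptors like these would be compared by cosine similarity against a database of previously visited places to detect loop-closure candidates.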
Related Papers
50 records in total (10 listed below)
  • [1] Context for LiDAR-based Place Recognition
    Li, Jiahao
    Qian, Hui
    Du, Xin
    2023 21ST INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS, ICAR, 2023, : 107 - 112
  • [2] CVTNet: A Cross-View Transformer Network for LiDAR-Based Place Recognition in Autonomous Driving Environments
    Ma, Junyi
    Xiong, Guangming
    Xu, Jingyi
    Chen, Xieyuanli
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (03) : 4039 - 4048
  • [3] Cross transformer for LiDAR-based loop closure detection
    Zheng, Rui
    Ren, Yang
    Zhou, Qi
    Ye, Yibin
    Zeng, Hui
MACHINE VISION AND APPLICATIONS, 2025, 36 (01)
  • [4] Scene Overlap Prediction for LiDAR-Based Place Recognition
    Zhang, Yingjian
    Dai, Chenguang
    Zhou, Ruqin
    Zhang, Zhenchao
    Ji, Hongliang
    Fan, Huixin
    Zhang, Yongsheng
    Wang, Hanyun
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20 : 1 - 5
  • [5] Patchlpr: a multi-level feature fusion transformer network for LiDAR-based place recognition
    Sun, Yang
    Guo, Jianhua
    Wang, Haiyang
    Zhang, Yuhang
    Zheng, Jiushuai
    Tian, Bin
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (SUPPL 1) : 157 - 165
  • [6] OverlapTransformer: An Efficient and Yaw-Angle-Invariant Transformer Network for LiDAR-Based Place Recognition
    Ma, Junyi
    Zhang, Jun
    Xu, Jintao
    Ai, Rui
    Gu, Weihao
    Chen, Xieyuanli
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03) : 6958 - 6965
  • [7] LiDAR-Based Place Recognition For Autonomous Driving: A Survey
    Zhang, Yongjun
    Shi, Pengcheng
    Li, Jiayuan
ACM COMPUTING SURVEYS, 2024, 57 (04)
  • [8] Spherical Transformer for LiDAR-based 3D Recognition
    Lai, Xin
    Chen, Yukang
    Lu, Fanbin
    Liu, Jianhui
    Jia, Jiaya
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 17545 - 17555
  • [9] Spatial-Spectral Transformer With Cross-Attention for Hyperspectral Image Classification
    Peng, Yishu
    Zhang, Yuwen
    Tu, Bing
    Li, Qianming
    Li, Wujing
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [10] Multimodal Personality Recognition using Cross-attention Transformer and Behaviour Encoding
    Agrawal, Tanay
    Agarwal, Dhruv
    Balazia, Michal
    Sinha, Neelabh
    Bremond, Francois
    PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5, 2022, : 501 - 508