Multimodal Features and Accurate Place Recognition With Robust Optimization for Lidar-Visual-Inertial SLAM

Cited by: 3
Authors
Zhao, Xiongwei [1]
Wen, Congcong [2]
Manoj Prakhya, Sai [3]
Yin, Hongpei [4]
Zhou, Rundong [5]
Sun, Yijiao [1]
Xu, Jie [6]
Bai, Haojie [1]
Wang, Yang [1]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen 518071, Peoples R China
[2] NYU, Tandon Sch Engn, New York, NY 10012 USA
[3] Huawei Munich Res Ctr, D-80992 Munich, Germany
[4] Guangdong Inst Artificial Intelligence & Adv Comp, Guangzhou 510535, Peoples R China
[5] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
[6] Harbin Inst Technol, Sch Mech & Elect Engn, Harbin 150001, Peoples R China
Keywords
Laser radar; Simultaneous localization and mapping; Visualization; Feature extraction; Optimization; Robot sensing systems; Three-dimensional displays; 3-D lidar loop closure descriptor; lidar-visual-inertial simultaneous localization and mapping (LVINS); robust iterative optimization; state estimation; two-stage loop detection; LINE SEGMENT DETECTOR; REAL-TIME; DESCRIPTOR
DOI
10.1109/TIM.2024.3370762
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808; 0809
Abstract
Lidar-visual-inertial simultaneous localization and mapping (LVINS) provides a compelling solution for accurate and robust state estimation and mapping by integrating complementary information from multisensor data. However, in the front-end processing of existing LVINS systems, methods based on visual line feature matching typically suffer from low accuracy and are time consuming. In addition, the back-end optimization of current multisensor fusion SLAM systems is adversely affected by feature-association outliers, which constrains further improvements in localization precision. In the loop closure process, existing lidar loop closure descriptors, relying primarily on 2-D information from point clouds, often fall short in complex environments. To tackle these challenges, we introduce a multimodal feature-based LVINS framework, abbreviated as MMF-LVINS. Our framework consists of three major innovations. First, we propose a novel coarse-to-fine (CTF) visual line matching method that uses geometric descriptor similarity and optical flow verification, substantially improving both the efficiency and accuracy of line feature matching. Second, we present a robust iterative optimization approach featuring a newly proposed adaptive loss function. This function is tailored to the quality of each feature association and incorporates graduated nonconvexity, thereby reducing the impact of outliers on system accuracy. Third, to improve the precision of lidar-based loop closure detection, we introduce an innovative 3-D lidar descriptor that captures spatial, height, and intensity information from the point cloud. We also propose a two-stage place recognition module that combines visual descriptors with this new lidar descriptor, significantly diminishing cumulative drift.
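The graduated-nonconvexity idea behind the robust back-end can be sketched as follows: a Geman-McClure-style robust loss is relaxed into a nearly quadratic one via a control parameter that is annealed during iteratively reweighted least squares, so gross outliers are only gradually down-weighted. This is a generic illustration on a toy line-fitting problem, not the paper's actual adaptive loss; the loss shape, annealing schedule, and constants are assumptions.

```python
import numpy as np

def gm_weight(residual, mu, c=1.0):
    """IRLS weight from a Geman-McClure loss under graduated nonconvexity.
    Large mu makes the loss nearly quadratic (convex surrogate); mu -> 1
    recovers the nonconvex robust loss that suppresses outliers."""
    return (mu * c**2 / (residual**2 + mu * c**2)) ** 2

def irls_line_fit(x, y, mu0=1e4, iters=40):
    """Fit y = a*x + b by iteratively reweighted least squares,
    annealing mu toward 1 so outliers are gradually down-weighted."""
    a, b, mu = 0.0, 0.0, mu0
    for _ in range(iters):
        r = y - (a * x + b)
        sw = np.sqrt(gm_weight(r, mu))          # sqrt-weights for lstsq
        A = np.stack([x, np.ones_like(x)], axis=1)
        a, b = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        mu = max(1.0, mu / 2.0)                 # graduated nonconvexity schedule
    return a, b
```

Starting from an almost-convex surrogate avoids the poor local minima a nonconvex loss would produce from a bad initial guess, which is the main appeal of graduated nonconvexity over directly minimizing the robust loss.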
Extensive experimental evaluations on six real-world datasets, including EuRoC, KITTI, NCLT, M2DGR, UrbanNav, and UrbanLoco, demonstrate that our MMF-LVINS system achieves superior state estimation accuracy compared with existing state-of-the-art methods. These experiments also validate the effectiveness of our advanced techniques in visual line matching, robust iterative optimization, and enhanced lidar loop closure detection.
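To make the 3-D descriptor idea concrete, here is a minimal sketch of a loop closure descriptor that bins a point cloud by azimuth sector, radial ring, and height slice, storing the maximum intensity per cell, with candidates compared by cosine similarity. The binning layout, resolutions, and similarity measure are illustrative assumptions, not the descriptor proposed in the paper.

```python
import numpy as np

def lidar_descriptor(points, intensities, n_rings=20, n_sectors=60,
                     n_heights=8, max_range=80.0, max_height=8.0):
    """Toy 3-D place descriptor: a (ring, sector, height) grid holding
    the max intensity observed in each cell (hypothetical layout)."""
    desc = np.zeros((n_rings, n_sectors, n_heights))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.hypot(x, y)
    keep = (rng < max_range) & (z >= 0.0) & (z < max_height)
    ring = (rng[keep] / max_range * n_rings).astype(int)
    sector = ((np.arctan2(y[keep], x[keep]) + np.pi)
              / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    height = (z[keep] / max_height * n_heights).astype(int)
    np.maximum.at(desc, (ring, sector, height), intensities[keep])
    return desc

def similarity(d1, d2):
    """Cosine similarity of flattened descriptors; a real system would
    also search over sector shifts for rotation invariance."""
    a, b = d1.ravel(), d2.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In a two-stage pipeline of the kind the abstract describes, a cheap visual check would first shortlist candidate places, and a descriptor comparison like the one above would then confirm or reject each candidate before a loop constraint is added.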
Pages: 1 / 1
Page count: 16