Multimodal Features and Accurate Place Recognition With Robust Optimization for Lidar-Visual-Inertial SLAM

Cited: 3
Authors
Zhao, Xiongwei [1 ]
Wen, Congcong [2 ]
Manoj Prakhya, Sai [3 ]
Yin, Hongpei [4 ]
Zhou, Rundong [5 ]
Sun, Yijiao [1 ]
Xu, Jie [6 ]
Bai, Haojie [1 ]
Wang, Yang [1 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen 518071, Peoples R China
[2] NYU, Tandon Sch Engn, New York, NY 10012 USA
[3] Huawei Munich Res Ctr, D-80992 Munich, Germany
[4] Guangdong Inst Artificial Intelligence & Adv Comp, Guangzhou 510535, Peoples R China
[5] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
[6] Harbin Inst Technol, Sch Mech & Elect Engn, Harbin 150001, Peoples R China
Keywords
Laser radar; Simultaneous localization and mapping; Visualization; Feature extraction; Optimization; Robot sensing systems; Three-dimensional displays; 3-D lidar loop closure descriptor; lidar-visual-inertial simultaneous localization and mapping (SLAM) (LVINS); robust iterative optimization; state estimation; two-stage loop detection; LINE SEGMENT DETECTOR; REAL-TIME; DESCRIPTOR;
DOI
10.1109/TIM.2024.3370762
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Lidar-visual-inertial simultaneous localization and mapping (LVINS) provides a compelling solution for accurate and robust state estimation and mapping by integrating complementary information from multisensor data. However, in the front-end processing of existing LVINS systems, visual line feature matching methods typically suffer from low accuracy and are time-consuming. In addition, the back-end optimization of current multisensor fusion SLAM systems is adversely affected by feature-association outliers, which limits further gains in localization precision. In the loop closure process, existing lidar loop closure descriptors, which rely primarily on 2-D information from point clouds, often fall short in complex environments. To tackle these challenges, we introduce a multimodal feature-based LVINS framework, abbreviated MMF-LVINS. Our framework consists of three major innovations. First, we propose a novel coarse-to-fine (CTF) visual line matching method that uses geometric descriptor similarity and optical flow verification, substantially improving both the efficiency and accuracy of line feature matching. Second, we present a robust iterative optimization approach built on a newly proposed adaptive loss function. This function is tailored to the quality of each feature association and incorporates graduated nonconvexity, thereby reducing the impact of outliers on system accuracy. Third, to improve the precision of lidar-based loop closure detection, we introduce a 3-D lidar descriptor that captures the spatial, height, and intensity information of the point cloud. We also propose a two-stage place recognition module that combines visual descriptors with this new lidar descriptor, significantly reducing cumulative drift.
Extensive experimental evaluations on six real-world datasets, including EuRoC, KITTI, NCLT, M2DGR, UrbanNav, and UrbanLoco, demonstrate that our MMF-LVINS system achieves superior state estimation accuracy compared with existing state-of-the-art methods. These experiments also validate the effectiveness of the proposed visual line matching, robust iterative optimization, and enhanced lidar loop closure detection.
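The graduated-nonconvexity (GNC) robust optimization idea mentioned in the abstract can be illustrated in outline. The sketch below is not the paper's adaptive loss; it shows the generic GNC scheme with a Geman-McClure-style surrogate applied to a toy robust line-fitting problem via iteratively reweighted least squares. The function names (`gnc_weights`, `robust_line_fit`), the annealing factor, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gnc_weights(residuals, c, mu):
    """Geman-McClure GNC surrogate weights: near 1 for large mu (convex
    regime), sharply downweighting large residuals as mu anneals to 1."""
    return (mu * c**2 / (residuals**2 + mu * c**2)) ** 2

def robust_line_fit(x, y, c=1.0, mu0=1e4, iters=30):
    """Fit y = a*x + b robustly with GNC-style reweighted least squares."""
    mu = mu0
    w = np.ones_like(x)
    a = b = 0.0
    for _ in range(iters):
        A = np.stack([x, np.ones_like(x)], axis=1)
        AtW = A.T * w                      # apply per-point weights
        a, b = np.linalg.solve(AtW @ A, AtW @ y)  # weighted normal equations
        r = y - (a * x + b)
        w = gnc_weights(r, c, mu)          # downweight likely outliers
        mu = max(1.0, mu / 1.4)            # anneal toward the nonconvex loss
    return a, b, w
```

Starting from a large mu makes the surrogate nearly quadratic, so the first solves are not trapped by outliers; annealing mu then gradually recovers the outlier-rejecting nonconvex loss, which is the same mechanism the abstract invokes for suppressing feature-association outliers in back-end optimization.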
Pages: 1-1
Page count: 16