Multimodal Features and Accurate Place Recognition With Robust Optimization for Lidar-Visual-Inertial SLAM

Cited by: 3
Authors
Zhao, Xiongwei [1 ]
Wen, Congcong [2 ]
Manoj Prakhya, Sai [3 ]
Yin, Hongpei [4 ]
Zhou, Rundong [5 ]
Sun, Yijiao [1 ]
Xu, Jie [6 ]
Bai, Haojie [1 ]
Wang, Yang [1 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Elect & Informat Engn, Shenzhen 518071, Peoples R China
[2] NYU, Tandon Sch Engn, New York, NY 10012 USA
[3] Huawei Munich Res Ctr, D-80992 Munich, Germany
[4] Guangdong Inst Artificial Intelligence & Adv Comp, Guangzhou 510535, Peoples R China
[5] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
[6] Harbin Inst Technol, Sch Mech & Elect Engn, Harbin 150001, Peoples R China
Keywords
Laser radar; Simultaneous localization and mapping; Visualization; Feature extraction; Optimization; Robot sensing systems; Three-dimensional displays; 3-D lidar loop closure descriptor; lidar-visual-inertial simultaneous localization and mapping (LVINS); robust iterative optimization; state estimation; two-stage loop detection; LINE SEGMENT DETECTOR; REAL-TIME; DESCRIPTOR
DOI
10.1109/TIM.2024.3370762
CLC classification
TM (Electrical technology); TN (Electronic and communication technology)
Discipline codes
0808; 0809
Abstract
Lidar-visual-inertial simultaneous localization and mapping (SLAM), abbreviated LVINS, provides a compelling solution for accurate and robust state estimation and mapping by integrating complementary information from multisensor data. However, in the front-end processing of existing LVINS systems, methods based on visual line feature matching are typically inaccurate and time-consuming. In addition, the back-end optimization of current multisensor fusion SLAM systems is adversely affected by feature-association outliers, which limits further gains in localization precision. In the loop closure process, existing lidar loop closure descriptors rely primarily on 2-D information from point clouds and often fall short in complex environments. To tackle these challenges, we introduce a multimodal feature-based LVINS framework, abbreviated MMF-LVINS. Our framework consists of three major innovations. First, we propose a novel coarse-to-fine (CTF) visual line matching method that exploits geometric descriptor similarity and optical-flow verification, substantially improving both the efficiency and the accuracy of line feature matching. Second, we present a robust iterative optimization approach featuring a newly proposed adaptive loss function. This function is tailored to the quality of each feature association and incorporates graduated nonconvexity, thereby reducing the impact of outliers on system accuracy. Third, to improve the precision of lidar-based loop closure detection, we introduce a 3-D lidar descriptor that captures spatial, height, and intensity information from the point cloud. We also propose a two-stage place recognition module that combines visual and lidar descriptors, significantly reducing cumulative drift.
Extensive experimental evaluations on six real-world datasets, including EuRoc, KITTI, NCLT, M2DGR, UrbanNav, and UrbanLoco, demonstrate that our MMF-LVINS system achieves superior state estimation accuracy compared with the existing state-of-the-art methods. These experiments also validate the effectiveness of our advanced techniques in visual line matching, robust iterative optimization, and enhanced lidar loop closure detection.
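The abstract's second contribution builds on graduated nonconvexity (GNC), a standard robust-estimation schedule: begin with a near-convex surrogate of a robust loss and progressively tighten it so outliers are down-weighted without the solver getting trapped in poor local minima. The paper's adaptive loss itself is not reproduced here; the following minimal Python sketch only illustrates the generic GNC idea on a toy line-fitting problem with a Geman-McClure loss, and all names and parameters (`gnc_irls_line_fit`, `c`, `mu_factor`) are illustrative, not from the paper.

```python
import numpy as np

def gnc_weights(r, c, mu):
    """Geman-McClure weights relaxed by the GNC control parameter mu."""
    return (mu * c**2 / (r**2 + mu * c**2)) ** 2

def gnc_irls_line_fit(x, y, c=1.0, mu_factor=1.4, iters=20):
    """Fit y = a*x + b by iteratively reweighted least squares,
    gradually tightening a Geman-McClure robust loss via GNC."""
    A = np.column_stack([x, np.ones_like(x)])
    theta = np.linalg.lstsq(A, y, rcond=None)[0]    # non-robust initialization
    r = y - A @ theta
    mu = max(1.0, 2.0 * np.max(r**2) / c**2)        # start near-convex
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        r = y - A @ theta
        w = gnc_weights(r, c, mu)
        mu = max(1.0, mu / mu_factor)               # graduate toward the true loss
    return theta, w
```

With gross outliers in `y`, the returned weights fall near zero for outlying points while inliers keep weights near one, so the final fit is driven by the consensus set; an association-quality-adaptive loss as described in the abstract would additionally modulate `c` per measurement.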
Pages: 1-1
Number of pages: 16
Related papers (50 in total)
  • [31] A Robust Visual-Inertial SLAM in Complex Indoor Environments. Zhong, Min; You, Yinghui; Zhou, Shuai; Xu, Xiaosu. IEEE SENSORS JOURNAL, 2023, 23(17): 19986-19994.
  • [32] A Fast and Robust Place Recognition Approach for Stereo Visual Odometry Using LiDAR Descriptors. Mo, Jiawei; Sattar, Junaed. 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020: 5893-5900.
  • [33] InertialNet: Toward Robust SLAM via Visual Inertial Measurement. Liu, Tse-An; Lin, Huei-Yung; Lin, Wei-Yang. 2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019: 1311-1316.
  • [34] Environmental-structure-perception-based adaptive pose fusion method for LiDAR-visual-inertial odometry. Zhao, Zixu; Liu, Chang; Yu, Wenyao; Shi, Jinglin; Zhang, Dalin. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2024, 21(03).
  • [35] LiDAR Inertial SLAM Algorithm Based on IESKF with Factor Graph Optimization. Wei Zhifei; Fan Shaosheng; Xiong Mingxuan. LASER & OPTOELECTRONICS PROGRESS, 2024, 61(14).
  • [36] A Visual Inertial SLAM Method for Fusing Point and Line Features. Xiao, Yunfei; Ma, Huajun; Duan, Shukai; Wang, Lidan. ADVANCES IN NEURAL NETWORKS-ISNN 2024, 2024, 14827: 268-277.
  • [37] Accurate Visual-Inertial SLAM by Feature Re-identification. Peng, Xiongfeng; Liu, Zhihua; Wang, Qiang; Kim, Yun-Tae; Jeon, Myungjae; Lee, Hong-Seok. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021: 9168-9175.
  • [38] Efficient and Accurate Tightly-Coupled Visual-Lidar SLAM. Chou, Chih-Chung; Chou, Cheng-Fu. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23(09): 14509-14523.
  • [39] Highly Robust Visual Place Recognition Through Spatial Matching of CNN Features. Camara, Luis G.; Gaebert, Carl; Preucil, Libor. 2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020: 3748-3755.
  • [40] Visual-LiDAR-Inertial Odometry: A New Visual-Inertial SLAM Method based on an iPhone 12 Pro. Jin, Lingqiu; Ye, Cang. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023: 1511-1516.