Detection-first tightly-coupled LiDAR-Visual-Inertial SLAM in dynamic environments

Cited by: 0
Authors
Xu, Xiaobin [1 ,2 ]
Hu, Jinchao [1 ,2 ]
Zhang, Lei [1 ,2 ]
Cao, Chenfei [1 ,2 ]
Yang, Jian [3 ]
Ran, Yingying [1 ,2 ]
Tan, Zhiying [1 ,2 ]
Xu, Linsen [1 ,2 ]
Luo, Minzhou [1 ,2 ]
Affiliations
[1] Hohai Univ, Coll Mech & Elect Engn, Changzhou 213200, Peoples R China
[2] Hohai Univ, Jiangsu Key Lab Special Robot Technol, Changzhou 213200, Peoples R China
[3] Yangzhou Univ, Coll Mech Engn, Yangzhou 225127, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Dynamic environments; SLAM; Multi-sensor fusion; Detection and tracking; RGB-D SLAM; MOTION REMOVAL; ODOMETRY;
DOI
10.1016/j.measurement.2024.115506
Chinese Library Classification
T [Industrial Technology];
Discipline code
08;
Abstract
To address the challenges posed by dynamic environments for Simultaneous Localization and Mapping (SLAM), a detection-first tightly-coupled LiDAR-Visual-Inertial SLAM system incorporating LiDAR, camera, and an inertial measurement unit (IMU) is proposed. First, point cloud clusters with semantic labels are obtained by fusing image and point cloud information. Then, a tracking algorithm is applied to obtain the motion states of the targets. Afterwards, the tracked dynamic targets are used to eliminate extraneous feature points. Finally, a factor graph is used to jointly optimize the IMU pre-integration and to tightly couple the laser odometry and visual odometry within the system. To validate the performance of the proposed SLAM framework, both public datasets (KITTI and UrbanNav) and actual scene data are tested. The experimental results show that, compared with LeGO-LOAM, LIO-SAM, and LVI-SAM on the public datasets, the root mean squared error (RMSE) of the proposed algorithm is decreased by 44.56 % (4.47 m) in highly dynamic scenes and by 4.15 % (4.62 m) in normal scenes. On the actual scene data, the proposed algorithm directly mitigates the impact of dynamic objects on map building.
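The dynamic-point elimination step described in the abstract can be sketched in simplified form. This is a minimal illustration, not the paper's implementation: it assumes each tracked dynamic target is reduced to an axis-aligned 3-D bounding box (the hypothetical `dynamic_boxes` input), whereas the actual system works with semantically labeled point cloud clusters and tracked motion states.

```python
import numpy as np

def remove_dynamic_points(points, dynamic_boxes):
    """Return only the static subset of a LiDAR scan.

    points:        (N, 3) array of x, y, z coordinates.
    dynamic_boxes: list of (min_xyz, max_xyz) pairs, each a (3,) array,
                   one axis-aligned box per tracked dynamic target.
                   (Hypothetical simplification of the paper's tracked
                   semantic clusters.)
    """
    keep = np.ones(len(points), dtype=bool)
    for lo, hi in dynamic_boxes:
        # A point is dynamic if it lies inside the box on all three axes.
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        keep &= ~inside
    return points[keep]
```

In a pipeline like the one described, this filtering would run after detection and tracking but before feature extraction, so that laser and visual odometry only ever see points assumed to be static.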
Pages: 16
Related papers (50 in total)
  • [21] Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry
    Wisth, David
    Camurri, Marco
    Das, Sandipan
    Fallon, Maurice
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02) : 1004 - 1011
  • [22] Semantic-Assisted LIDAR Tightly Coupled SLAM for Dynamic Environments
    Liu, Peng
    Bi, Yuxuan
    Shi, Jialin
    Zhang, Tianyi
    Wang, Caixia
    IEEE ACCESS, 2024, 12 : 34042 - 34053
  • [23] PLI-SLAM: A Tightly-Coupled Stereo Visual-Inertial SLAM System with Point and Line Features
    Teng, Zhaoyu
    Han, Bin
    Cao, Jie
    Hao, Qun
    Tang, Xin
    Li, Zhaoyang
    REMOTE SENSING, 2023, 15 (19)
  • [24] RLI-SLAM: Fast Robust Ranging-LiDAR-Inertial Tightly-Coupled Localization and Mapping
    Xin, Rui
    Guo, Ningyan
    Ma, Xingyu
    Liu, Gang
    Feng, Zhiyong
    SENSORS, 2024, 24 (17)
  • [25] A 3D LiDAR-Inertial Tightly-Coupled SLAM for Mobile Robots on Indoor Environment
    Li, Sen
    He, Rui
    Guan, He
    Shen, Yuanrui
    Ma, Xiaofei
    Liu, Hezhao
    IEEE ACCESS, 2024, 12 : 29596 - 29606
  • [26] LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme
    Liu, Zhenbin
    Li, Zengke
    Liu, Ao
    Shao, Kefan
    Guo, Qiang
    Wang, Chuanhao
    REMOTE SENSING, 2024, 16 (09)
  • [27] InLIOM: Tightly-Coupled Intensity LiDAR Inertial Odometry and Mapping
    Wang, Hanqi
    Liang, Huawei
    Li, Zhiyuan
    Zheng, Xiaokun
    Xu, Haitao
    Zhou, Pengfei
    Kong, Bin
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (09) : 11821 - 11832
  • [28] GVIL: Tightly-Coupled GNSS PPP/Visual/INS/LiDAR SLAM Based on Graph Optimization
    Liao J.
    Li X.
    Feng S.
WUHAN DAXUE XUEBAO (XINXI KEXUE BAN) / GEOMATICS AND INFORMATION SCIENCE OF WUHAN UNIVERSITY, 2023, 48 (07): : 1204 - 1215
  • [29] A tightly-coupled LIDAR-IMU SLAM method for quadruped robots
    Zhou, Zhifeng
    Zhang, Chunyan
    Li, Chenchen
    Zhang, Yi
    Shi, Yun
    Zhang, Wei
    MEASUREMENT & CONTROL, 2024, 57 (07): : 1004 - 1013
  • [30] Dynamic reconfiguration in tightly-coupled conference environments
    Trossen, D
    Kliem, P
    MULTIMEDIA SYSTEMS AND APPLICATIONS II, 1999, 3845 : 391 - 402