Detection-first tightly-coupled LiDAR-Visual-Inertial SLAM in dynamic environments

Cited by: 0
Authors
Xu, Xiaobin [1 ,2 ]
Hu, Jinchao [1 ,2 ]
Zhang, Lei [1 ,2 ]
Cao, Chenfei [1 ,2 ]
Yang, Jian [3 ]
Ran, Yingying [1 ,2 ]
Tan, Zhiying [1 ,2 ]
Xu, Linsen [1 ,2 ]
Luo, Minzhou [1 ,2 ]
Affiliations
[1] Hohai Univ, Coll Mech & Elect Engn, Changzhou 213200, Peoples R China
[2] Hohai Univ, Jiangsu Key Lab Special Robot Technol, Changzhou 213200, Peoples R China
[3] Yangzhou Univ, Coll Mech Engn, Yangzhou 225127, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
Dynamic environments; SLAM; Multi-sensor fusion; Detection and tracking; RGB-D SLAM; Motion removal; Odometry
DOI
10.1016/j.measurement.2024.115506
CLC number
T [Industrial Technology];
Subject classification code
08;
Abstract
To address the challenges posed by dynamic environments for Simultaneous Localization and Mapping (SLAM), a detection-first tightly-coupled LiDAR-Visual-Inertial SLAM that fuses LiDAR, camera, and inertial measurement unit (IMU) data is proposed. Firstly, point cloud clusters with semantic labels are obtained by fusing image and point cloud information. Then, a tracking algorithm is applied to estimate the motion state of the targets. Afterwards, the tracked dynamic targets are used to eliminate the feature points that belong to them. Finally, a factor graph jointly optimizes the IMU pre-integration and tightly couples the laser odometry and visual odometry within the system. To validate the performance of the proposed SLAM framework, both public datasets (KITTI and UrbanNav) and actual scene data are tested. The experimental results show that, compared with LeGO-LOAM, LIO-SAM, and LVI-SAM on the public datasets, the root mean squared error (RMSE) of the proposed algorithm is decreased by 44.56 % (4.47 m) in highly dynamic scenes and by 4.15 % (4.62 m) in normal scenes, respectively. On the actual scene data, the proposed algorithm directly mitigates the impact of dynamic objects on map building.
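The detection-first idea in the abstract, removing measurements on tracked dynamic objects before they reach the odometry, can be illustrated with a short sketch. The Python/NumPy snippet below is only a minimal illustration of that masking step, not the authors' implementation: the function name, the axis-aligned box representation of tracked objects, and the padding margin are all assumptions made for the example.

import numpy as np

def remove_dynamic_points(points, tracked_boxes, margin=0.2):
    """Drop LiDAR points that fall inside tracked dynamic-object boxes.

    points        : (N, 3) array of LiDAR points in the sensor frame.
    tracked_boxes : list of (min_xyz, max_xyz) axis-aligned boxes, one per
                    object the tracker currently labels as moving (assumed format).
    margin        : extra padding in metres around each box to catch edge points.
    """
    keep = np.ones(len(points), dtype=bool)
    for box_min, box_max in tracked_boxes:
        lo = np.asarray(box_min) - margin
        hi = np.asarray(box_max) + margin
        # A point is dynamic if it lies inside the padded box on all three axes.
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        keep &= ~inside
    return points[keep]

# Toy example: one tracked car in front of the sensor; its points are removed
# before the remaining (static) cloud would be handed to the laser odometry.
rng = np.random.default_rng(0)
cloud = rng.uniform(-20.0, 20.0, size=(10000, 3))
car_box = (np.array([4.0, -1.0, -1.5]), np.array([8.0, 1.0, 0.5]))
static_cloud = remove_dynamic_points(cloud, [car_box])
print(len(cloud), "->", len(static_cloud), "points after dynamic removal")

In the full system, the surviving static points would feed the laser odometry, and image features falling inside the corresponding 2-D detections would presumably be discarded on the visual side before the factor-graph optimization.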
Pages: 16
Related papers (50 in total)
  • [1] LVIO-Fusion: Tightly-Coupled LiDAR-Visual-Inertial Odometry and Mapping in Degenerate Environments
    Zhang, Hongkai
    Du, Liang
    Bao, Sheng
    Yuan, Jianjun
    Ma, Shugen
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (04) : 3783 - 3790
  • [2] IMU Augment Tightly Coupled Lidar-Visual-Inertial Odometry for Agricultural Environments
    Hoang, Quoc Hung
    Kim, Gon-Woo
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (10) : 8483 - 8490
  • [3] LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
    Shan, Tixiao
    Englot, Brendan
    Ratti, Carlo
    Rus, Daniela
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021 : 5692 - 5698
  • [4] Tightly-coupled stereo visual-inertial-LiDAR SLAM based on graph optimization
    Wang X.
    Li X.
    Liao J.
    Feng S.
    Li S.
    Zhou Y.
    Cehui Xuebao/Acta Geodaetica et Cartographica Sinica, 2022, 51 (08) : 1744 - 1756
  • [5] Development of tightly coupled based lidar-visual-inertial odometry
    Kim K.-W.
    Jung T.-K.
    Seo S.-H.
    Jee G.-I.
    Journal of Institute of Control, Robotics and Systems, 2020, 26 (08) : 597 - 603
  • [6] Efficient and Accurate Tightly-Coupled Visual-Lidar SLAM
    Chou, Chih-Chung
    Chou, Cheng-Fu
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (09) : 14509 - 14523
  • [7] DY-LIO: Tightly-coupled LiDAR-Inertial Odometry for dynamic environments
    Zou J.
    Chen H.
    Shao L.
    Bao H.
    Tang H.
    Xiang J.
    Liu J.
    IEEE Sensors Journal, 2024, 24 (21) : 1 - 1
  • [8] RF-LIO: Removal-First Tightly-coupled Lidar Inertial Odometry in High Dynamic Environments
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021 : 4421 - 4428
  • [9] FT-LVIO: Fully Tightly coupled LiDAR-Visual-Inertial odometry
    Zhang, Zhuo
    Yao, Zheng
    Lu, Mingquan
    IET RADAR SONAR AND NAVIGATION, 2023, 17 (05) : 759 - 771
  • [10] Sensor Synchronization for Android Phone Tightly-Coupled Visual-Inertial SLAM
    Feng, Zheyu
    Li, Jianwen
    Dai, Taogao
    CHINA SATELLITE NAVIGATION CONFERENCE (CSNC) 2018 PROCEEDINGS, VOL III, 2018, 499 : 601 - 612