Accurate RGB-D SLAM in dynamic environments based on dynamic visual feature removal

Cited by: 0
Authors
Chenxin Liu
Jiahu Qin
Shuai Wang
Lei Yu
Yaonan Wang
Affiliations
[1] University of Science and Technology of China, Department of Automation
[2] Hefei Comprehensive National Science Center, Institute of Artificial Intelligence
[3] Hunan University, College of Electrical and Information Engineering
[4] Hunan University, National Engineering Research Center of Robot Visual Perception and Control Technology
Keywords
SLAM; dynamic environments; indoor localization; graph-cut; robot navigation
DOI: not available
Abstract
Visual localization is considered an essential capability in robotics and has attracted increasing interest over the past few years. However, most existing visual localization systems assume that the surrounding environment is static, an assumption that rarely holds in real-world scenarios due to the presence of moving objects. In this paper, we present DFR-SLAM, a real-time, accurate RGB-D SLAM system based on ORB-SLAM2 that achieves satisfactory performance in a variety of challenging dynamic scenarios. At the core of our system lie a motion consensus filtering algorithm that estimates the initial camera pose and a graph-cut optimization framework that combines long-term observations, prior information, and spatial coherence to jointly distinguish dynamic from static visual features. Whereas other systems for dynamic environments detect dynamic components using information from frames spanning a short time, our system draws on observations accumulated over a long sequence of keyframes. We evaluate our system on dynamic sequences from the public TUM dataset, and the results show that it significantly outperforms the original ORB-SLAM2. In addition, compared to closely related SLAM systems designed for dynamic environments, our system delivers competitive localization accuracy while maintaining satisfactory real-time performance.
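The abstract names the technique (a graph cut over per-feature evidence plus spatial coherence) but not its exact energy terms. A standard way to realize such a step is a binary s-t min-cut over the energy E(L) = Σ_i U_i(L_i) + λ Σ_(i,j) w_ij·[L_i ≠ L_j], where U_i comes from each feature's fused dynamic probability (long-term observations plus prior information) and the pairwise terms encourage neighboring features to share a label. The sketch below is a minimal, hypothetical illustration of that construction using networkx; the function name label_features, the probability fusion, and the weight lam are assumptions for illustration, not the authors' implementation.

```python
import math
import networkx as nx  # third-party: pip install networkx

def label_features(features, neighbor_edges, lam=1.0, eps=1e-6):
    """Label features static/dynamic via a binary s-t min-cut (sketch).

    features: dict feature_id -> p_dyn, the assumed fused probability
        that a feature is dynamic (e.g., from long-term reprojection
        residuals combined with prior information).
    neighbor_edges: iterable of (i, j, w) spatial-coherence links with
        smoothness weight w between neighboring features.
    Returns the set of feature ids labeled dynamic.
    """
    G = nx.DiGraph()
    for i, p_dyn in features.items():
        p = min(max(p_dyn, eps), 1.0 - eps)
        # Cutting s->i puts i on the sink (dynamic) side:
        # the cut pays -log p(dynamic), cheap when p_dyn is high.
        G.add_edge("s", i, capacity=-math.log(p))
        # Cutting i->t keeps i on the source (static) side:
        # the cut pays -log p(static), cheap when p_dyn is low.
        G.add_edge(i, "t", capacity=-math.log(1.0 - p))
    for i, j, w in neighbor_edges:
        # Separating two spatial neighbors costs lam * w either way.
        G.add_edge(i, j, capacity=lam * w)
        G.add_edge(j, i, capacity=lam * w)
    _, (_static_side, dynamic_side) = nx.minimum_cut(G, "s", "t")
    return dynamic_side - {"t"}

# Toy usage: feature 2 looks clearly dynamic; the strong coherence edge
# (1, 2) pulls the ambiguous feature 1 onto the dynamic side with it.
print(label_features({0: 0.1, 1: 0.45, 2: 0.9},
                     [(0, 1, 0.2), (1, 2, 1.5)]))  # -> {1, 2}
```

In this toy run, feature 1's unary evidence alone (p_dyn = 0.45) favors the static label, so its flip to dynamic comes entirely from the spatial-coherence term; this joint behavior is what distinguishes a graph-cut labeling from thresholding each feature independently.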
Related Papers
50 items in total
  • [21] Improving RGB-D SLAM accuracy in dynamic environments based on semantic and geometric constraints
    Wang, Xiqi; Zheng, Shunyi; Lin, Xiaohu; Zhu, Fengbo
    MEASUREMENT, 2023, 217
  • [22] RGB-D SLAM in indoor dynamic environments with two channels based on scenario classification
    Zhou, Yao; Tao, Fazhan; Fu, Zhumu; Chen, Qihong; Zhu, Longlong
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2023, 34 (11)
  • [23] Multi-Mask Fusion-Based RGB-D SLAM in Dynamic Environments
    Gao, Y.; Hu, M.; Chen, B.; Yang, W.; Wang, J.; Wang, J.
    IEEE SENSORS JOURNAL, 2024, 24 (21): 1-1
  • [24] GMSK-SLAM: a new RGB-D SLAM method with dynamic areas detection towards dynamic environments
    Wei, Hongyu; Zhang, Tao; Zhang, Liang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (21-23): 31729-31751
  • [26] RGB-D Visual SLAM Algorithm Using Scene Flow and Conditional Random Field in Dynamic Environments
    Jeon, Hyeongjun; Han, Changwan; You, Donggil; Oh, Junghyun
    2022 22ND INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2022), 2022: 129-134
  • [27] Feature Regions Segmentation based RGB-D Visual Odometry in Dynamic Environment
    Zhang, Yu; Dai, Weichen; Peng, Zhen; Li, Ping; Fang, Zheng
    IECON 2018 - 44TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2018: 5648-5655
  • [28] A robust visual odometry based on RGB-D camera in dynamic indoor environments
    Zhang, Fangfang; Li, Qiyan; Wang, Tingting; Ma, Tianlei
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2021, 32 (04)
  • [29] A Method for Reconstructing Background from RGB-D SLAM in Indoor Dynamic Environments
    Lu, Quan; Pan, Ying; Hu, Likun; He, Jiasheng
    SENSORS, 2023, 23 (07)
  • [30] Semi-direct RGB-D SLAM Algorithm for Dynamic Indoor Environments
    Gao, C.; Zhang, Y.; Wang, X.; Deng, Y.; Jiang, H.
    JIQIREN/ROBOT, 2019, 41 (03): 372-383