DFD-SLAM: Visual SLAM with Deep Features in Dynamic Environment

Cited by: 1
|
Authors
Qian, Wei [1 ]
Peng, Jiansheng [1 ,2 ,3 ,4 ]
Zhang, Hongyu [1 ]
Affiliations
[1] Guangxi Univ Sci & Technol, Coll Automat, Liuzhou 545000, Peoples R China
[2] Hechi Univ, Dept Artificial Intelligence & Mfg, Hechi 547000, Peoples R China
[3] Hechi Univ, Educ Dept Guangxi Zhuang Autonomous Reg, Key Lab AI & Informat Proc, Hechi 547000, Peoples R China
[4] Hechi Univ, Sch Chem & Bioengn, Guangxi Key Lab Sericulture Ecol & Appl Intelligen, Hechi 546300, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024 / Volume 14 / Issue 11
Funding
National Natural Science Foundation of China;
Keywords
visual SLAM; deep features; dynamic SLAM; YOLOv8; HFNet; VERSATILE;
D O I
10.3390/app14114949
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Visual SLAM is a key technology for mobile robots. Existing feature-based visual SLAM systems suffer from degraded tracking and loop-closure performance in complex environments. We propose the DFD-SLAM system to ensure high accuracy and robustness across diverse environments. First, building on the ORB-SLAM3 system, we replace the original feature extraction component with the HFNet network and introduce a frame rotation estimation method, which determines the rotation angle between consecutive frames in order to select superior local descriptors. Furthermore, we use CNN-extracted global descriptors in place of the bag-of-words approach. Finally, we develop a precise removal strategy that combines semantic information from YOLOv8 to accurately eliminate dynamic feature points. On the TUM-VI dataset, DFD-SLAM improves over ORB-SLAM3 by 29.24% in the corridor sequences, 40.07% in the magistrale sequences, 28.75% in the room sequences, and 35.26% in the slides sequences. On the TUM-RGBD dataset, DFD-SLAM demonstrates a 91.57% improvement over ORB-SLAM3 in highly dynamic scenarios. These results demonstrate the effectiveness of our approach.
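The dynamic feature-point removal step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the set of dynamic class labels, and the use of axis-aligned bounding boxes (rather than segmentation masks) are all simplifying assumptions made here for illustration.

```python
import numpy as np

# Assumed set of semantic classes treated as dynamic (hypothetical choice).
DYNAMIC_CLASSES = {"person", "car", "bicycle"}

def filter_dynamic_keypoints(keypoints, detections):
    """Discard feature points that fall inside boxes of dynamic objects.

    keypoints:  iterable of (x, y) pixel coordinates
    detections: list of (class_name, x1, y1, x2, y2) boxes, e.g. from a
                YOLOv8-style detector (format assumed for this sketch)
    returns:    (N, 2) array of the keypoints kept as static
    """
    pts = np.asarray(keypoints, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for cls, x1, y1, x2, y2 in detections:
        if cls not in DYNAMIC_CLASSES:
            continue  # static object: its points remain usable
        inside = ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
                  (pts[:, 1] >= y1) & (pts[:, 1] <= y2))
        keep &= ~inside  # drop points covered by the dynamic box
    return pts[keep]

# Example: a 'person' box removes the keypoint it covers.
static_pts = filter_dynamic_keypoints(
    [(10, 10), (50, 50), (90, 90)],
    [("person", 40, 40, 60, 60)],
)
```

In the paper's actual pipeline the semantic information comes from YOLOv8 and is combined with a more precise removal strategy; the sketch above only shows the basic idea of masking out feature points on detected dynamic objects before tracking.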
Pages: 21
Related Papers
50 records in total
  • [41] A Review of Visual SLAM for Dynamic Objects
    Zhao, Lina
    Wei, Baoguo
    Li, Lixin
    Li, Xu
    2022 IEEE 17TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), 2022, : 1080 - 1085
  • [42] SamSLAM: A Visual SLAM Based on Segment Anything Model for Dynamic Environment
    Chen, Xianhao
    Wang, Tengyue
    Mai, Haonan
    Yang, Liangjing
    2024 8TH INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION, ICRCA 2024, 2024, : 91 - 97
  • [43] Robust Visual SLAM in Dynamic Environment Based on Motion Detection and Segmentation
    Yu, Xin
    Shen, Rulin
    Wu, Kang
    Lin, Zhi
    Journal of Autonomous Vehicles and Systems, 2024, 4 (01):
  • [44] A review of visual SLAM with dynamic objects
    Qin, Yong
    Yu, Haidong
    INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2023, 50 (06): : 1000 - 1010
  • [45] Dynamic Visual SLAM Integrated with IMU for
    Peng, Zhongcui
    Cheng, Shaowu
    Li, Xiantong
    Li, Kui
    Cai, Ming
    You, Linlin
    2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 4247 - 4252
  • [46] Deep learning-based visual slam for indoor dynamic scenes
    Xu, Zhendong
    Song, Yong
    Pang, Bao
    Xu, Qingyang
    Yuan, Xianfeng
    APPLIED INTELLIGENCE, 2025, 55 (06)
  • [47] DFS-SLAM: A Visual SLAM Algorithm for Deep Fusion of Semantic Information
    Jiao, Songming
    Li, Yan
    Shan, Zhengwen
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (12): : 11794 - 11801
  • [48] TwistSLAM: Constrained SLAM in Dynamic Environment
    Gonzalez, Mathieu
    Marchand, Eric
    Kacete, Amine
    Royan, Jerome
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03) : 6846 - 6853
  • [49] TDO-SLAM: Traffic Sign and Dynamic Object Based Visual SLAM
    Park, Soon-Yong
    Lee, Junesuk
    IEEE ACCESS, 2024, 12 : 24569 - 24582
  • [50] ESD-SLAM: An efficient semantic visual SLAM towards dynamic environments
    Xu, Yan
    Wang, Yanyun
    Huang, Jiani
    Qin, Hong
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 42 (06) : 5155 - 5164