AFO-SLAM: an improved visual SLAM in dynamic scenes using acceleration of feature extraction and object detection

Cited: 0
Authors
Wei, Jinbi [1 ]
Deng, Heng [1 ,2 ]
Wang, Jihong [1 ]
Zhang, Liguo [1 ,2 ]
Affiliations
[1] Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China
[2] Minist Educ, Engn Res Ctr Intelligence Percept & Autonomous Con, Beijing 100124, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
dynamic environments; object detection; depth information; CUDA; visual simultaneous localization and mapping (SLAM)
DOI
10.1088/1361-6501/ad6627
Chinese Library Classification: T [Industrial Technology]
Subject classification code: 08
Abstract
In visual simultaneous localization and mapping (SLAM) systems, traditional methods perform well under the rigid (static-world) assumption but face challenges in dynamic environments. Learning-based approaches have been introduced to address this, but their high computational cost hinders real-time performance, especially on embedded mobile platforms. In this article, we propose AFO-SLAM, a robust, real-time visual SLAM method for dynamic environments that accelerates feature extraction and object detection. First, AFO-SLAM runs an independent object detection thread that uses YOLOv5 to extract semantic information and identify the bounding boxes of moving objects. To preserve background points within these boxes, depth information from a single frame is used to separate the target foreground from the background; points in the foreground region are treated as dynamic points and rejected. To optimize performance, a CUDA program accelerates feature extraction before point removal. Finally, extensive evaluations are performed both on the TUM RGB-D dataset and in real scenes on a low-power embedded platform. Experimental results demonstrate that AFO-SLAM balances accuracy and real-time performance on embedded platforms and enables the generation of dense point cloud maps in dynamic scenarios.
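The depth-based foreground/background split described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the median-based depth estimate, and the `margin` parameter are all assumptions; the paper states only that depth information from a single frame separates foreground (dynamic) points from background points inside each detected bounding box.

```python
import numpy as np

def split_box_by_depth(depth, box, margin=0.3):
    """Split the region inside a detection box into foreground (likely
    dynamic) and background using a single depth frame.

    depth  : HxW array of depth values in metres (0 = invalid depth)
    box    : (x1, y1, x2, y2) bounding box from the object detector
    margin : extra depth (m) beyond the foreground estimate that still
             counts as foreground (illustrative parameter)

    Returns a boolean mask over the box region (True = foreground),
    or None if the box contains no valid depth.
    """
    x1, y1, x2, y2 = box
    roi = depth[y1:y2, x1:x2]
    valid = roi[roi > 0]
    if valid.size == 0:                    # no usable depth in the box
        return None
    fg_depth = np.median(valid)            # robust estimate of object depth
    threshold = fg_depth + margin
    mask = (roi > 0) & (roi <= threshold)  # True = foreground / dynamic
    return mask
```

Feature points falling on `True` cells would then be rejected as dynamic, while points on `False` cells (background seen through the box) are kept for tracking. The median works here only when the detected object dominates the box; a real system might use a histogram or region-growing step instead.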
Pages: 16