A Comparison Study between Traditional and Deep-Reinforcement-Learning-Based Algorithms for Indoor Autonomous Navigation in Dynamic Scenarios

Cited by: 1
Authors
Arce, Diego [1 ]
Solano, Jans [1 ]
Beltran, Cesar [1 ]
Affiliations
[1] Pontificia Univ Catolica Peru, Engn Dept, Lima 15088, Peru
Keywords
comparison study; indoor autonomous navigation; mobile robots; dynamic scenarios; traditional navigation; DRL-based navigation
DOI
10.3390/s23249672
Chinese Library Classification (CLC) number
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
At the beginning of a project or research effort that involves the autonomous navigation of mobile robots, a decision must be made between working with traditional control algorithms or with algorithms based on artificial intelligence. This decision is usually not easy, as the computational capacity of the robot, the availability of information from its sensory systems and the characteristics of the environment must all be taken into consideration. For this reason, this work reviews different autonomous-navigation algorithms applied to mobile robots and identifies the most suitable ones for cases in which the robot must navigate in dynamic environments. Based on the identified algorithms, a comparison between the traditional and the DRL-based approaches was carried out on a robotic platform in order to evaluate their performance, identify their advantages and disadvantages, and provide recommendations for their use according to the development requirements of the robot. The algorithms selected were DWA, TEB, CADRL and SAC, and the results show that, depending on the application and the robot's characteristics, each of them can be recommended under different conditions.
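The abstract names DWA and TEB as the traditional planners and CADRL and SAC as the DRL-based ones, but does not reproduce the algorithms themselves. As a rough illustration of the traditional side, the following is a minimal sketch of a DWA-style velocity search for a differential-drive robot; the velocity limits, cost weights, obstacle representation and the simulate/clearance/dwa_step helpers are illustrative assumptions, not the configuration evaluated in the paper.

```python
import math

# Minimal Dynamic Window Approach (DWA) sketch for a differential-drive robot.
# All limits, weights and the point-obstacle model below are illustrative only.
V_MAX, W_MAX = 0.5, 1.5      # linear (m/s) and angular (rad/s) velocity limits
A_V, A_W = 0.5, 2.0          # accelerations that bound the dynamic window
DT, HORIZON = 0.1, 2.0       # control period (s) and rollout horizon (s)

def simulate(x, y, th, v, w):
    """Forward-simulate a constant (v, w) command and return the rollout."""
    traj, t = [], 0.0
    while t < HORIZON:
        th += w * DT
        x += v * math.cos(th) * DT
        y += v * math.sin(th) * DT
        traj.append((x, y, th))
        t += DT
    return traj

def clearance(traj, obstacles):
    """Smallest distance from any rollout pose to any obstacle point."""
    return min(math.hypot(px - ox, py - oy)
               for (px, py, _) in traj for (ox, oy) in obstacles)

def dwa_step(state, velocity, goal, obstacles):
    """Return the (v, w) command that maximizes the weighted DWA objective."""
    x, y, th = state
    v0, w0 = velocity
    best_score, best_cmd = -math.inf, (0.0, 0.0)   # stop if everything collides
    # Dynamic window: velocities reachable within one control period.
    v_lo, v_hi = max(0.0, v0 - A_V * DT), min(V_MAX, v0 + A_V * DT)
    w_lo, w_hi = max(-W_MAX, w0 - A_W * DT), min(W_MAX, w0 + A_W * DT)
    for i in range(11):
        v = v_lo + (v_hi - v_lo) * i / 10
        for j in range(21):
            w = w_lo + (w_hi - w_lo) * j / 20
            traj = simulate(x, y, th, v, w)
            gx, gy, _ = traj[-1]
            progress = -math.hypot(goal[0] - gx, goal[1] - gy)  # closer is better
            margin = min(clearance(traj, obstacles), 1.0)       # capped clearance
            if margin < 0.2:                                    # prune likely collisions
                continue
            score = 1.0 * progress + 0.8 * margin + 0.1 * v     # illustrative weights
            if score > best_score:
                best_score, best_cmd = score, (v, w)
    return best_cmd

# Example: one control step toward a goal with a single obstacle point.
print(dwa_step((0.0, 0.0, 0.0), (0.2, 0.0), goal=(3.0, 1.0), obstacles=[(1.0, 0.2)]))
```

For the DRL side, SAC agents of the kind compared here are commonly trained with an off-the-shelf library such as stable-baselines3; the snippet below is a hedged sketch of that workflow, using the Gymnasium Pendulum-v1 task as a stand-in because the paper's navigation environment, observation space and reward are not reproduced in this record.

```python
import gymnasium as gym
from stable_baselines3 import SAC

# Placeholder continuous-control task; a real comparison would use a custom
# navigation environment built around the robot's sensors and goal/collision rewards.
env = gym.make("Pendulum-v1")

model = SAC("MlpPolicy", env, verbose=0)   # default network and hyperparameters
model.learn(total_timesteps=10_000)        # the paper's training budget is not stated here
model.save("sac_nav_demo")                 # hypothetical output name
```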
Pages: 30
Related papers (50 in total)
  • [1] Deep-Reinforcement-Learning-Based Autonomous UAV Navigation With Sparse Rewards
    Wang, Chao
    Wang, Jian
    Wang, Jingjing
    Zhang, Xudong
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07) : 6180 - 6190
  • [2] Holistic Deep-Reinforcement-Learning-based Training for Autonomous Navigation in Crowded Environments
    Kaestner, Linh
    Meusel, Marvin
    Bhuiyan, Teham
    Lambrecht, Jens
    [J]. 2023 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, AIM, 2023 : 1302 - 1308
  • [3] Deep-Reinforcement-Learning-Based Semantic Navigation of Mobile Robots in Dynamic Environments
    Kaestner, Linh
    Marx, Cornelius
    Lambrecht, Jens
    [J]. 2020 IEEE 16TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2020 : 1110 - 1115
  • [4] Deep-reinforcement-learning-based UAV autonomous navigation and collision avoidance in unknown environments
    Wang, Fei
    Zhu, Xiaoping
    Zhou, Zhou
    Tang, Yang
    [J]. CHINESE JOURNAL OF AERONAUTICS, 2024, 37 (03) : 237 - 257
  • [5] Deep-Reinforcement-Learning-Based Autonomous Establishment of Local Positioning Systems in Unknown Indoor Environments
    Wu, Zhen
    Yao, Zheng
    Lu, Mingquan
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (15) : 13626 - 13637
  • [6] Static and Dynamic Collision Avoidance for Autonomous Robot Navigation in Diverse Scenarios based on Deep Reinforcement Learning
    Pico, Nabih
    Lee, Beomjoon
    Montero, Estrella
    Tadese, Meseret
    Auh, Eugene
    Doh, Myeongyun
    Moon, Hyungpil
    [J]. 2023 20TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS, UR, 2023 : 281 - 286
  • [7] Evaluation of a Deep-Reinforcement-Learning-based Controller for the Control of an Autonomous Underwater Vehicle
    Sola, Yoann
    Chaffre, Thomas
    le Chenadec, Gilles
    Sammut, Karl
    Clement, Benoit
    [J]. GLOBAL OCEANS 2020: SINGAPORE - U.S. GULF COAST, 2020
  • [8] Deep-Reinforcement-Learning-Based Autonomous Voltage Control for Power Grid Operations
    Duan, Jiajun
    Shi, Di
    Diao, Ruisheng
    Li, Haifeng
    Wang, Zhiwei
    Zhang, Bei
    Bian, Desong
    Yi, Zhehan
    [J]. IEEE TRANSACTIONS ON POWER SYSTEMS, 2020, 35 (01) : 814 - 817
  • [9] Deep-Reinforcement-Learning-Based Dynamic Ensemble Model for Stock Prediction
    Lin, Wenjing
    Xie, Liang
    Xu, Haijiao
    [J]. ELECTRONICS, 2023, 12 (21)