Deep Reinforcement Learning Based Online Area Covering Autonomous Robot

Cited by: 3
Authors
Saha, Olimpiya [1 ]
Ren, Guohua [1 ]
Heydari, Javad [1 ]
Ganapathy, Viswanath [1 ]
Shah, Mohak [1 ]
Affiliations
[1] LG Amer Res Lab, Adv AI Team, Santa Clara, CA USA
Keywords
Autonomous Agents; Motion and Path Planning; Deep Learning Methods; COVERAGE;
DOI
10.1109/ICARA51699.2021.9376477
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Autonomous area covering robots have been increasingly adopted for diverse applications. In this paper, we investigate the effectiveness of deep reinforcement learning (RL) algorithms for online area coverage while minimizing overlap. Through simulation experiments in grid-based environments and in the Gazebo simulator, we show that Deep Q-Network (DQN)-based algorithms efficiently cover unknown indoor environments. Furthermore, through empirical evaluations and theoretical analysis, we demonstrate that DQN with prioritized experience replay (DQN-PER) significantly minimizes sample complexity while achieving reduced overlap when compared with other DQN variants. In addition, through simulations we demonstrate the performance advantage of DQN-PER over the state-of-the-art area coverage algorithms BA* and BSA. Our experiments also indicate that a pre-trained RL agent can efficiently cover new, unseen environments with minimal additional sample complexity. Finally, we propose a novel way of formulating the state representation to arrive at an area-agnostic RL agent for efficiently covering unknown environments.
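The record does not include the authors' code. As a rough illustration of the prioritized experience replay mechanism that DQN-PER builds on (proportional prioritization, where transitions are sampled with probability proportional to the magnitude of their TD error), a minimal sketch might look like the following; the class name, parameter defaults, and transition format are illustrative assumptions, not taken from the paper:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (sketch).

    Each transition is stored with priority (|td_error| + eps) ** alpha and
    sampled with probability proportional to that priority.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling (0 = uniform)
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []        # stored transitions, overwritten ring-buffer style
        self.priorities = []    # one priority per stored transition
        self.pos = 0            # next write position once the buffer is full

    def add(self, transition, td_error):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        n = len(self.buffer)
        # Importance-sampling weights correct the bias of non-uniform sampling,
        # normalized by the max so every weight lies in (0, 1].
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return [self.buffer[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

In a DQN training loop, `sample` would feed minibatches to the Q-network update and `update_priorities` would be called with the freshly computed TD errors for the sampled indices.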
Pages: 21-25
Page count: 5
Related Papers
50 records
  • [1] Robot autonomous grasping and assembly skill learning based on deep reinforcement learning
    Chen, Chengjun
    Zhang, Hao
    Pan, Yong
    Li, Dongnian
    [J]. INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2024, 130 (11-12) : 5233 - 5249
  • [3] Autonomous Navigation by Mobile Robot with Sensor Fusion Based on Deep Reinforcement Learning
    Ou, Yang
    Cai, Yiyi
    Sun, Youming
    Qin, Tuanfa
    [J]. SENSORS, 2024, 24 (12)
  • [4] Deep Reinforcement Learning Based on the Hindsight Experience Replay for Autonomous Driving of Mobile Robot
    Park, Minjae
    Hong, Jin Seok
    Kwon, Nam Kyu
    [J]. Journal of Institute of Control, Robotics and Systems, 2022, 28 (11) : 1006 - 1012
  • [5] Self-Learning Robot Autonomous Navigation with Deep Reinforcement Learning Techniques
    Pintos Gomez de las Heras, Borja
    Martinez-Tomas, Rafael
    Cuadra Troncoso, Jose Manuel
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (01)
  • [6] Experimental Research on Deep Reinforcement Learning in Autonomous navigation of Mobile Robot
    Yue, Pengyu
    Xin, Jing
    Zhao, Huan
    Liu, Ding
    Shan, Mao
    Zhang, Jian
    [J]. PROCEEDINGS OF THE 2019 14TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA 2019), 2019, : 1612 - 1616
  • [7] Autonomous Mobile Robot with Simple Navigation System Based on Deep Reinforcement Learning and a Monocular Camera
    Yokoyama, Koki
    Morioka, Kazuyuki
    [J]. 2020 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2020, : 525 - 530
  • [8] Vision-Based Autonomous Navigation Approach for a Tracked Robot Using Deep Reinforcement Learning
    Ejaz, Muhammad Mudassir
    Tang, Tong Boon
    Lu, Cheng-Kai
    [J]. IEEE SENSORS JOURNAL, 2021, 21 (02) : 2230 - 2240
  • [10] Air Learning: a deep reinforcement learning gym for autonomous aerial robot visual navigation
    Krishnan, Srivatsan
    Boroujerdian, Behzad
    Fu, William
    Faust, Aleksandra
    Reddi, Vijay Janapa
    [J]. MACHINE LEARNING, 2021, 110 (09) : 2501 - 2540