Deep Reinforcement Learning Supervised Autonomous Exploration in Office Environments

Citations: 0
Authors
Zhu, Delong [1 ]
Li, Tingguang [1 ]
Ho, Danny [1 ]
Wang, Chaoqun [1 ]
Meng, Max Q. -H. [1 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Elect Engn, Shatin, Hong Kong, Peoples R China
Keywords
DOI
N/A
CLC (Chinese Library Classification)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Exploration region selection is an essential decision-making process in autonomous robot exploration tasks. While a majority of existing methods handle this problem greedily, few efforts have been made to investigate the importance of long-term planning. In this paper, we present an algorithm that utilizes deep reinforcement learning (DRL) to learn exploration knowledge from office blueprints, enabling the agent to predict a long-term visiting order for unexplored subregions. On the basis of this algorithm, we propose an exploration architecture that integrates the DRL model, a next-best-view (NBV) selection approach, and a structural integrity measurement to further improve exploration performance. Finally, we evaluate the proposed architecture against other methods on several new office maps, showing that the agent can efficiently explore uncertain regions with a shorter path and smarter behaviors.
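The abstract's contrast between greedy region selection and a predicted long-term visiting order can be illustrated with a toy sketch. This is not the paper's method (which learns the ordering with a DRL model trained on office blueprints); here subregions are reduced to hypothetical 2-D points, the greedy baseline always moves to the nearest unexplored subregion, and an exhaustive search over visiting orders stands in for the learned long-horizon policy:

```python
import itertools
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(start, order):
    """Total travel distance when visiting `order` from `start`."""
    total, cur = 0.0, start
    for p in order:
        total += dist(cur, p)
        cur = p
    return total

def greedy_order(start, regions):
    """Greedy baseline: repeatedly visit the closest unexplored subregion."""
    remaining, cur, order = list(regions), start, []
    while remaining:
        nxt = min(remaining, key=lambda p: dist(cur, p))
        remaining.remove(nxt)
        order.append(nxt)
        cur = nxt
    return order

def best_order(start, regions):
    """Exhaustive search over visiting orders; a stand-in for a policy
    that reasons about the whole tour rather than one step at a time."""
    return min(itertools.permutations(regions),
               key=lambda o: tour_length(start, o))

# Collinear subregions that trap the greedy rule: it darts to the near
# point on the right first, then must backtrack past the start.
start = (0.0, 0.0)
regions = [(1.0, 0.0), (-1.5, 0.0), (4.0, 0.0)]
g = tour_length(start, greedy_order(start, regions))   # 9.0
b = tour_length(start, list(best_order(start, regions)))  # 7.0
print(f"greedy: {g:.1f}, long-term: {b:.1f}")
```

Even in this three-point example, committing to a full visiting order up front shortens the tour from 9.0 to 7.0, which is the kind of gain the paper seeks by predicting the ordering rather than choosing frontiers greedily.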
Pages: 7548-7555
Page count: 8
Related Papers
(50 records in total)
  • [1] Autonomous exploration through deep reinforcement learning
    Yan, Xiangda
    Huang, Jie
    He, Keyan
    Hong, Huajie
    Xu, Dasheng
    [J]. INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2023, 50 (05): : 793 - 803
  • [2] Deep Reinforcement Learning with Noisy Exploration for Autonomous Driving
    Li, Ruyang
    Zhang, Yaqiang
    Zhao, Yaqian
    Wei, Hui
    Xu, Zhe
    Zhao, Kun
    [J]. PROCEEDINGS OF 2022 THE 6TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, ICMLSC 2022, 2022, : 8 - 14
  • [3] Exploration of Unknown Environments Using Deep Reinforcement Learning
    McCalmon, Joseph
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 15970 - 15971
  • [4] Deep Reinforcement Learning for Autonomous Drone Navigation in Cluttered Environments
    Solaimalai, Gautam
    Prakash, Kode Jaya
    Kumar, Sampath S.
    Bhagyalakshmi, A.
    Siddharthan, P.
    Kumar, Senthil K. R.
    [J]. 2024 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATION AND APPLIED INFORMATICS, ACCAI 2024, 2024,
  • [5] Variational Dynamic for Self-Supervised Exploration in Deep Reinforcement Learning
    Bai, Chenjia
    Liu, Peng
    Liu, Kaiyu
    Wang, Lingxiao
    Zhao, Yingnan
    Han, Lei
    Wang, Zhaoran
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (08) : 4776 - 4790
  • [6] Voronoi-Based Multi-Robot Autonomous Exploration in Unknown Environments via Deep Reinforcement Learning
    Hu, Junyan
    Niu, Hanlin
    Carrasco, Joaquin
    Lennox, Barry
    Arvin, Farshad
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (12) : 14413 - 14423
  • [7] Goal-Driven Autonomous Exploration Through Deep Reinforcement Learning
    Cimurs, Reinis
    Suh, Il Hong
    Lee, Jin Han
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02): : 730 - 737
  • [8] Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs
    Chen, Fanfei
    Martin, John D.
    Huang, Yewei
    Wang, Jinkun
    Englot, Brendan
    [J]. 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 6140 - 6147
  • [9] ATENA: An Autonomous System for Data Exploration Based on Deep Reinforcement Learning
    Bar El, Ori
    Milo, Tova
    Somech, Amit
    [J]. PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 2873 - 2876
  • [10] Cooperative Deep Reinforcement Learning Policies for Autonomous Navigation in Complex Environments
    Tran, Van Manh
    Kim, Gon-Woo
    [J]. IEEE ACCESS, 2024, 12 : 101053 - 101065