A Q-learning approach based on human reasoning for navigation in a dynamic environment

Cited by: 7
Authors
Yuan, Rupeng [1 ]
Zhang, Fuhai [1 ]
Wang, Yu [1 ]
Fu, Yili [1 ]
Wang, Shuguo [1 ]
Affiliations
[1] Harbin Inst Technol, State Key Lab Robot & Syst, Harbin 150001, Heilongjiang, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Heilongjiang Province
Keywords
Autonomous navigation; Mobile robot; Dynamic environment; Q-learning; OBSTACLE AVOIDANCE; ROBOTS;
DOI
10.1017/S026357471800111X
Chinese Library Classification
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Q-learning is often used for navigation in static environments, where the state space is easy to define. In this paper, a new Q-learning approach is proposed for navigation in dynamic environments by imitating human reasoning. As a model-free method, Q-learning does not require a model of the environment in advance. In the proposed approach, the state space and the reward function are defined according to human perception and evaluation, respectively. Specifically, approximate regions rather than accurate measurements are used to define states. Moreover, because robot dynamics are limited, the actions available in each state are computed with a dynamic window that takes those dynamics into account. Tests show that the obstacle avoidance rate of the proposed approach reaches 90.5% after training, and that the robot always operates within its dynamic limits.
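The abstract rests on the standard tabular Q-learning update, with states defined as coarse "approximate regions" rather than exact measurements. The sketch below illustrates that update only; the region labels, action set, and reward are hypothetical placeholders, not values from the paper (which additionally restricts actions via a dynamic window).

```python
import random

# Minimal tabular Q-learning sketch. The coarse region states and the
# action set are illustrative assumptions; only the update rule is standard.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

states = ["obstacle_near_left", "obstacle_near_right", "clear"]  # assumed region labels
actions = ["turn_left", "turn_right", "go_straight"]             # assumed action set

# Q-table initialized to zero for every (state, action) pair.
Q = {(s, a): 0.0 for s in states for a in actions}

def choose_action(state):
    """Epsilon-greedy selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example transition: obstacle seen on the left, robot turns right, reward +1.
update("obstacle_near_left", "turn_right", 1.0, "clear")
```

In the paper's approach, the set of actions considered in each state would further be filtered by a dynamic window so that only velocities reachable under the robot's acceleration limits are candidates.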
Pages: 445-468
Page count: 24
Related Papers (50 items)
  • [1] Mobile robot navigation based on improved CA-CMAC and Q-learning in dynamic environment
    Li Guo-jin
    Chen Shuang
    Xiao Zhu-li
    Dong Di-yong
    [J]. 2015 34TH CHINESE CONTROL CONFERENCE (CCC), 2015, : 5020 - 5024
  • [2] Multi-agent Q-learning Based Navigation in an Unknown Environment
    Nath, Amar
    Niyogi, Rajdeep
    Singh, Tajinder
    Kumar, Virendra
    [J]. ADVANCED INFORMATION NETWORKING AND APPLICATIONS, AINA-2022, VOL 1, 2022, 449 : 330 - 340
  • [3] Autonomous Navigation based on a Q-learning algorithm for a Robot in a Real Environment
    Strauss, Clement
    Sahin, Ferat
    [J]. 2008 IEEE INTERNATIONAL CONFERENCE ON SYSTEM OF SYSTEMS ENGINEERING (SOSE), 2008, : 361 - 365
  • [4] Q-learning with Experience Replay in a Dynamic Environment
    Pieters, Mathijs
    Wiering, Marco A.
    [J]. PROCEEDINGS OF 2016 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2016,
  • [5] Q-learning Approach in the Context of Virtual Learning Environment
    Liviu, Ionita
    Irina, Tudor
    [J]. PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON VIRTUAL LEARNING, 2008, : 209 - 214
  • [6] Q-Learning Based on Dynamical Structure Neural Network for Robot Navigation in Unknown Environment
    Qiao, Junfei
    Fan, Ruiyuan
    Han, Honggui
    Ruan, Xiaogang
    [J]. ADVANCES IN NEURAL NETWORKS - ISNN 2009, PT 3, PROCEEDINGS, 2009, 5553 : 188 - 196
  • [7] A Path-Planning Approach Based on Potential and Dynamic Q-Learning for Mobile Robots in Unknown Environment
    Hao, Bing
    Du, He
    Zhao, Jianshuo
    Zhang, Jiamin
    Wang, Qi
    [J]. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [8] Mobile robot Navigation Based on Q-Learning Technique
    Khriji, Lazhar
    Touati, Farid
    Benhmed, Kamel
    Al-Yahmedi, Amur
    [J]. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2011, 8 (01): : 45 - 51
  • [9] Neural Q-Learning Based Mobile Robot Navigation
    Yun, Soh Chin
    Parasuraman, S.
    Ganapathy, V.
    Joe, Halim Kusuma
    [J]. MATERIALS SCIENCE AND INFORMATION TECHNOLOGY, PTS 1-8, 2012, 433-440 : 721 - +
  • [10] Based on A* and q-learning search and rescue robot navigation
    Pang, Tao
    Ruan, Xiaogang
    Wang, Ershen
    Fan, Ruiyuan
    [J]. Telkomnika - Indonesian Journal of Electrical Engineering, 2012, 10 (07): : 1889 - 1896