Network Parameter Setting for Reinforcement Learning Approaches Using Neural Networks

Cited by: 2
Authors
Yamada, Kazuaki [1 ]
Affiliation
[1] Toyo Univ, Fac Sci & Engn, Dept Mech Engn, 2100 Kujirai, Kawagoe, Saitama 3508585, Japan
Keywords
reinforcement learning; artificial neural networks; autonomous mobile robot;
DOI
10.20965/jaciii.2011.p0822
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Reinforcement learning approaches are attracting attention as a technique for constructing, through trial and error, a mapping function between the sensors and motors of an autonomous mobile robot. Conventional reinforcement learning approaches use a look-up table to express the mapping function between discretized (grid) state and action spaces, and the grid size strongly affects learning performance. To avoid this problem, researchers have proposed reinforcement learning algorithms that use neural networks to express the mapping function between a continuous state space and actions. A designer, however, must appropriately set the number of middle neurons and the initial weight values to achieve good approximation accuracy. This paper proposes a new method that automatically sets the number of middle neurons and the initial weight values based on the dimensionality of the sensor space. The feasibility of the proposed method is demonstrated on an autonomous mobile robot navigation problem and evaluated by comparison with two variants of Q-learning: Q-learning using RBF networks and Q-learning using neural networks whose parameters are set manually by a designer.
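To make the setting concrete, the following is a minimal sketch of Q-learning with a one-hidden-layer neural network as the Q-function approximator over a continuous state, in the spirit of the approach the abstract describes. The sizing rule `hidden = 2 * state_dim + 1` and the weight-initialization scale used here are illustrative assumptions, not the paper's actual formula; only NumPy is used.

```python
import numpy as np

class QNetwork:
    """One-hidden-layer Q-value approximator with sigmoid hidden units.

    The hidden-layer size is derived from the state (sensor-space)
    dimension, loosely echoing the paper's idea of setting network
    parameters from the dimensionality of the sensor space. The exact
    rule below is an assumption for illustration only.
    """

    def __init__(self, state_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        hidden = 2 * state_dim + 1            # assumed sizing rule
        scale = 1.0 / np.sqrt(state_dim)      # assumed init scale
        self.W1 = rng.normal(0.0, scale, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, scale, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def forward(self, s):
        # Hidden activations are cached for use in the update step.
        self.h = 1.0 / (1.0 + np.exp(-(s @ self.W1 + self.b1)))
        return self.h @ self.W2 + self.b2     # Q(s, a) for every action a

    def update(self, s, a, target, lr=0.1):
        """One semi-gradient Q-learning step toward a fixed TD target."""
        q = self.forward(s)
        err = target - q[a]                   # TD error for the taken action
        # Backpropagate 0.5 * err^2 through the network (gradient descent).
        dW2 = np.outer(self.h, np.eye(len(q))[a]) * err
        dh = self.W2[:, a] * err * self.h * (1.0 - self.h)
        self.W1 += lr * np.outer(s, dh)
        self.b1 += lr * dh
        self.W2 += lr * dW2
        self.b2[a] += lr * err

net = QNetwork(state_dim=2, n_actions=3)
s = np.array([0.5, -0.2])
q = net.forward(s)                 # Q-values for all 3 actions
net.update(s, a=1, target=1.0)     # one TD step toward target = r + gamma * max Q(s')
```

In a full agent, `target` would be computed as `r + gamma * max(net.forward(s_next))` at each step; a single update is shown here to keep the sketch short.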
Pages: 822-830 (9 pages)
Related Papers
50 records
  • [41] Improving system reliability using a neural network model with reinforcement learning
    Fourie, CJ
    QUALITY, RELIABILITY, AND MAINTENANCE, 2004, : 239 - 242
  • [42] Reinforcement Learning Using the Stochastic Fuzzy Min–Max Neural Network
    Aristidis Likas
    Neural Processing Letters, 2001, 13 : 213 - 220
  • [43] Performance optimization of function localization neural network by using reinforcement learning
    Sasakawa, T
    Hu, JL
    Hirasawa, K
    Proceedings of the International Joint Conference on Neural Networks (IJCNN), Vols 1-5, 2005, : 1314 - 1319
  • [44] Adaptive neural network control of robot manipulator using reinforcement learning
    Tang, Li
    Liu, Yan-Jun
    JOURNAL OF VIBRATION AND CONTROL, 2014, 20 (14) : 2162 - 2171
  • [45] Intelligent scheduling using a neural network model in conjunction with reinforcement learning
    Fourie, CJ
    PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART B-JOURNAL OF ENGINEERING MANUFACTURE, 2005, 219 (02) : 229 - 235
  • [46] Category learning in a recurrent neural network with reinforcement learning
    Zhang, Ying
    Pan, Xiaochuan
    Wang, Yihong
    FRONTIERS IN PSYCHIATRY, 2022, 13
  • [47] CAPTURING THE BRACHISTOCHRONE: NEURAL NETWORK SUPERVISED AND REINFORCEMENT APPROACHES
    Abu Zitar, Raed
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2019, 15 (05): : 1747 - 1761
  • [48] USING MODULAR NEURAL NETWORKS AND MACHINE LEARNING WITH REINFORCEMENT LEARNING TO SOLVE CLASSIFICATION PROBLEMS
    Leoshchenko, S. D.
    Oliinyk, A. O.
    Subbotin, S. A.
    Kolpakova, T. O.
    RADIO ELECTRONICS COMPUTER SCIENCE CONTROL, 2024, (02) : 71 - 81
  • [49] A Learning Social Network with Recognition of Learning Styles Using Neural Networks
    Zatarain-Cabada, Ramon
    Barron-Estrada, M. L.
    Ponce Angulo, Viridiana
    Jose Garcia, Adan
    Reyes Garcia, Carlos A.
    ADVANCES IN PATTERN RECOGNITION, 2010, 6256 : 199 - +
  • [50] On the Expressivity of Neural Networks for Deep Reinforcement Learning
    Dong, Kefan
    Luo, Yuping
    Yu, Tianhe
    Finn, Chelsea
    Ma, Tengyu
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,