Network Parameter Setting for Reinforcement Learning Approaches Using Neural Networks

Cited by: 2
Authors
Yamada, Kazuaki [1 ]
Affiliation
[1] Toyo Univ, Fac Sci & Engn, Dept Mech Engn, 2100 Kujirai, Kawagoe, Saitama 3508585, Japan
Keywords
reinforcement learning; artificial neural networks; autonomous mobile robot;
DOI
10.20965/jaciii.2011.p0822
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning approaches are attracting attention as a technique for constructing, by trial and error, a mapping function between the sensors and motors of an autonomous mobile robot. Conventional reinforcement learning approaches use a look-up table to express the mapping function between grid-based state and action spaces, and the grid size strongly degrades the learning performance of such algorithms. To avoid this, researchers have proposed reinforcement learning algorithms that use neural networks to express the mapping function over a continuous state space and actions. A designer, however, must set the number of middle-layer neurons and the initial values of the weight parameters appropriately to achieve good approximation accuracy. This paper proposes a new method that automatically sets the number of middle-layer neurons and the initial weight values based on the dimensionality of the sensor space. The feasibility of the proposed method is demonstrated on an autonomous mobile robot navigation problem and evaluated against two variants of Q-learning: Q-learning using RBF networks, and Q-learning using neural networks whose parameters are set by a designer.
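The abstract describes Q-learning with a neural network as the Q-function approximator, where the hidden-layer size and initial weights are derived from the sensor-space dimensionality. The paper's exact sizing and initialization rules are not given in the abstract, so the sketch below uses placeholder assumptions (hidden size `2 * n_dim + 1`, initial weight scale `1 / sqrt(n_dim)`); it only illustrates the general technique, not the author's method.

```python
import numpy as np

def build_q_network(n_dim, n_actions, rng):
    """One-hidden-layer Q-network sized from the sensor-space dimension.

    The sizing rule (2 * n_dim + 1) and the init scale (1 / sqrt(n_dim))
    are hypothetical stand-ins for the paper's automatic setting.
    """
    n_hidden = 2 * n_dim + 1
    scale = 1.0 / np.sqrt(n_dim)
    w1 = rng.normal(0.0, scale, (n_dim, n_hidden))
    w2 = rng.normal(0.0, scale, (n_hidden, n_actions))
    return w1, w2

def q_values(state, w1, w2):
    h = np.tanh(state @ w1)   # hidden-layer activations
    return h @ w2             # one Q-value per discrete action

def td_update(state, action, reward, next_state, w1, w2,
              alpha=0.1, gamma=0.9):
    """One Q-learning step with the network as function approximator."""
    target = reward + gamma * np.max(q_values(next_state, w1, w2))
    h = np.tanh(state @ w1)
    err = target - (h @ w2)[action]
    # Gradients of Q(s, a): w.r.t. w2[:, a] it is h; w.r.t. w1 it
    # backpropagates through tanh' = 1 - h**2.
    grad_hidden = (1.0 - h**2) * w2[:, action]
    w2[:, action] += alpha * err * h
    w1 += alpha * err * np.outer(state, grad_hidden)
    return err
```

With `gamma=0` the target is fixed, so repeated updates on a single transition behave like supervised regression toward the reward; with bootstrapping (`gamma>0`) the target moves as the weights change, which is where the sensitivity to hidden-layer size and weight initialization discussed in the paper comes in.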
Pages: 822-830
Page count: 9
Related Papers (50 total)
  • [31] A reinforcement learning algorithm for a class of dynamical environments using neural networks
    Murata, M
    Ozawa, S
    SICE 2003 ANNUAL CONFERENCE, VOLS 1-3, 2003, : 2004 - 2009
  • [32] Accelerated reinforcement learning control using modified CMAC neural networks
    Xu, X
    Hu, DW
    He, HG
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 2575 - 2578
  • [33] A Transfer Approach Using Graph Neural Networks in Deep Reinforcement Learning
    Yang, Tianpei
    You, Heng
    Hao, Jianye
    Zheng, Yan
    Taylor, Matthew E.
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16352 - 16360
  • [34] On-line reinforcement learning using cascade constructive neural networks
    Vamplew, P
    Ollington, R
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 3, PROCEEDINGS, 2005, 3683 : 562 - 568
  • [35] Attitude Control of a Nanosatellite system using Reinforcement Learning and Neural Networks
    Yadava, Deigant
    Hosangadi, Raunak
    Krishna, Sai
    Paliwal, Pranjal
    Jain, Avi
    2018 IEEE AEROSPACE CONFERENCE, 2018,
  • [36] Incorporating Expert Advice into Reinforcement Learning Using Constructive Neural Networks
    Ollington, Robert
    Vamplew, Peter
    Swanson, John
    CONSTRUCTIVE NEURAL NETWORKS, 2009, 258 : 207 - +
  • [37] Theoretical analysis and parameter setting of Hopfield neural networks
    Qu, H
    Yi, Z
    Xiang, XL
    ADVANCES IN NEURAL NETWORKS - ISNN 2005, PT 1, PROCEEDINGS, 2005, 3496 : 739 - 744
  • [38] Influence of the Chaotic Property on Reinforcement Learning Using a Chaotic Neural Network
    Goto, Yuki
    Shibata, Katsunari
    NEURAL INFORMATION PROCESSING, ICONIP 2017, PT I, 2017, 10634 : 759 - 767
  • [39] Cable SCARA Robot Controlled by a Neural Network Using Reinforcement Learning
    Okabe, Eduardo
    Paiva, Victor
    Silva-Teixeira, Luis H.
    Izuka, Jaime
    JOURNAL OF COMPUTATIONAL AND NONLINEAR DYNAMICS, 2023, 18 (10):
  • [40] Emergence of Higher Exploration in Reinforcement Learning Using a Chaotic Neural Network
    Goto, Yuki
    Shibata, Katsunari
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT I, 2016, 9947 : 40 - 48