Network Parameter Setting for Reinforcement Learning Approaches Using Neural Networks

Cited by: 2
Authors
Yamada, Kazuaki [1 ]
Affiliations
[1] Toyo Univ, Fac Sci & Engn, Dept Mech Engn, 2100 Kujirai, Kawagoe, Saitama 3508585, Japan
Keywords
reinforcement learning; artificial neural networks; autonomous mobile robot;
DOI
10.20965/jaciii.2011.p0822
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Reinforcement learning approaches are attracting attention as a technique for constructing, through trial and error, a mapping function between the sensors and motors of an autonomous mobile robot. Conventional reinforcement learning approaches use a look-up table to represent the mapping between discretized (grid) state and action spaces, and the grid size strongly affects learning performance. To avoid this, researchers have proposed reinforcement learning algorithms that use neural networks to represent the mapping between a continuous state space and actions. A designer, however, must set the number of middle neurons and the initial values of the weight parameters appropriately to achieve good approximation accuracy. This paper proposes a new method that automatically sets the number of middle neurons and the initial weight values based on the dimensionality of the sensor space. The feasibility of the proposed method is demonstrated on an autonomous mobile robot navigation problem and evaluated by comparison with two variants of Q-learning: Q-learning using RBF networks and Q-learning using neural networks whose parameters are set by a designer.
Pages: 822-830
Number of pages: 9
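
The abstract above describes Q-learning in which a neural network approximates the action-value function and the number of middle (hidden) neurons and the initial weights are derived from the dimension of the sensor space. The following Python sketch only illustrates that general idea under stated assumptions: the class name NeuralQ, the sizing rule n_hidden = 2 * state_dim + 1, and the 1/sqrt(state_dim) initialization scale are illustrative placeholders, not the rule proposed in the paper.

```python
# Minimal sketch: Q-learning with a one-hidden-layer neural network whose
# hidden size and initial weights are set from the sensor-space dimension.
# The sizing rule and initialization scale below are assumptions for
# illustration, not the method described in the paper.
import numpy as np

class NeuralQ:
    def __init__(self, state_dim, n_actions, lr=0.01, gamma=0.95, seed=0):
        rng = np.random.default_rng(seed)
        n_hidden = 2 * state_dim + 1          # assumed sizing rule from sensor dimension
        scale = 1.0 / np.sqrt(state_dim)      # assumed initial-weight scale
        self.W1 = rng.uniform(-scale, scale, size=(state_dim, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.uniform(-scale, scale, size=(n_hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr, self.gamma = lr, gamma

    def q_values(self, s):
        h = np.tanh(s @ self.W1 + self.b1)    # hidden-layer activations
        return h, h @ self.W2 + self.b2       # Q(s, a) for every action

    def act(self, s, eps, rng):
        # epsilon-greedy action selection over the network's Q estimates
        if rng.random() < eps:
            return int(rng.integers(self.W2.shape[1]))
        return int(np.argmax(self.q_values(s)[1]))

    def update(self, s, a, r, s_next, done):
        # semi-gradient TD(0) update of the network weights for action a
        h, q = self.q_values(s)
        _, q_next = self.q_values(s_next)
        target = r if done else r + self.gamma * np.max(q_next)
        td_error = target - q[a]
        grad_out = np.zeros_like(q)
        grad_out[a] = 1.0
        dW2, db2 = np.outer(h, grad_out), grad_out
        dh = (self.W2 @ grad_out) * (1.0 - h ** 2)   # backprop through tanh
        dW1, db1 = np.outer(s, dh), dh
        for p, g in ((self.W2, dW2), (self.b2, db2), (self.W1, dW1), (self.b1, db1)):
            p += self.lr * td_error * g

# Toy usage with a hypothetical 2-D sensor space (e.g., distance and bearing):
rng = np.random.default_rng(0)
agent = NeuralQ(state_dim=2, n_actions=3)
s = rng.uniform(-1.0, 1.0, size=2)
a = agent.act(s, eps=0.1, rng=rng)
agent.update(s, a, r=1.0, s_next=rng.uniform(-1.0, 1.0, size=2), done=False)
```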