Gradient Monitored Reinforcement Learning

Cited by: 4
Authors
Abdul Hameed, Mohammed Sharafath [1]
Chadha, Gavneet Singh [1]
Schwung, Andreas [1]
Ding, Steven X. [2]
Affiliations
[1] South Westphalia Univ Appl Sci, Dept Automat Technol, D-59494 Soest, Germany
[2] Univ Duisburg Essen, Dept Automat Control & Complex Syst, D-47057 Duisburg, Germany
Funding
US National Institutes of Health
Keywords
Training; Monitoring; Neural networks; Reinforcement learning; Optimization; Games; Task analysis; Atari games; deep neural networks (DNNs); gradient monitoring (GM); MuJoCo; multirobot coordination; OpenAI GYM; reinforcement learning (RL)
DOI
10.1109/TNNLS.2021.3119853
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This article presents a novel neural network training approach for faster convergence and better generalization in deep reinforcement learning (RL). In particular, we focus on enhancing the training and evaluation performance of RL algorithms by systematically reducing the variance of the gradients and thereby providing a more targeted learning process. The proposed method, which we term gradient monitoring (GM), steers the learning in the weight parameters of a neural network based on the dynamic development of, and feedback from, the training process itself. We propose different variants of the GM method and show that they increase the underlying performance of the model. One of the proposed variants, momentum with GM (M-WGM), allows for a continuous adjustment of the amount of backpropagated gradients in the network based on certain learning parameters. We further enhance the method with adaptive M-WGM (AM-WGM), which automatically adjusts between focused learning of certain weights and more dispersed learning, depending on the feedback from the rewards collected. As a by-product, it also allows the required deep network size to be derived automatically during training, since the method progressively freezes trained weights. The method is applied to two discrete control tasks (real-world multirobot coordination problems and Atari games) and one continuous control task (MuJoCo), using advantage actor-critic (A2C) and proximal policy optimization (PPO), respectively. The results particularly underline the applicability of the methods and their performance improvements in terms of generalization capability.
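The abstract describes GM as selectively steering which weight parameters receive gradient updates, up to freezing fully trained weights. As a rough illustration only, the following PyTorch sketch shows one way such gradient masking can be wired in via tensor hooks; the `GradientMonitor` class, the `keep_ratio` knob, and the top-k magnitude criterion are assumptions made for this sketch and are not the paper's M-WGM or AM-WGM update rules.

```python
# Minimal sketch of gradient masking in the spirit of gradient monitoring (GM).
# All names and the masking criterion here are illustrative assumptions, not
# the paper's method: M-WGM/AM-WGM define their own, adaptive update rules.
import torch
import torch.nn as nn

class GradientMonitor:
    """Zeroes backpropagated gradient entries outside the top `keep_ratio`
    fraction by magnitude, focusing updates on the most active weights."""

    def __init__(self, model: nn.Module, keep_ratio: float = 0.5):
        self.keep_ratio = keep_ratio  # fraction of gradient entries kept (assumed knob)
        # Tensor hooks fire during loss.backward() and may rewrite each gradient.
        self.handles = [
            p.register_hook(self._mask)
            for p in model.parameters()
            if p.requires_grad
        ]

    def _mask(self, grad: torch.Tensor) -> torch.Tensor:
        # Keep the k largest-magnitude entries; zero the rest so the
        # corresponding weights are effectively frozen for this step.
        flat = grad.abs().flatten()
        k = max(1, int(self.keep_ratio * flat.numel()))
        threshold = flat.kthvalue(flat.numel() - k + 1).values
        return grad * (grad.abs() >= threshold).to(grad.dtype)

# Usage: wrap any policy network before training; hooks fire on backward().
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
monitor = GradientMonitor(policy, keep_ratio=0.5)
loss = policy(torch.randn(8, 4)).pow(2).mean()
loss.backward()  # masked gradients arrive at the optimizer already zeroed
```

With a plain SGD step, entries masked to zero leave their weights untouched, which mimics the weight-freezing by-product the abstract mentions; AM-WGM additionally adapts the degree of masking from the collected rewards.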
Pages: 4106-4119 (14 pages)
Related Papers (50 total)
  • [1] Gradient Monitored Reinforcement Learning for Jamming Attack Detection in FANETs
    Ghelani, Jaimin
    Gharia, Prayagraj
    El-Ocla, Hosam
IEEE ACCESS, 2024, 12: 23081-23095
  • [2] Gradient dynamics in reinforcement learning
    Fabbricatore, Riccardo
Palyulin, Vladimir V.
    PHYSICAL REVIEW E, 2022, 106 (02)
  • [3] Knowledge Gradient for Online Reinforcement Learning
    Yahyaa, Saba
    Manderick, Bernard
AGENTS AND ARTIFICIAL INTELLIGENCE, ICAART 2014, 2015, 8946: 103-118
  • [4] Meta-Gradient Reinforcement Learning
    Xu, Zhongwen
    van Hasselt, Hado
    Silver, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [5] Gradient estimation in dendritic reinforcement learning
    Schiess, Mathieu
    Urbanczik, Robert
Senn, Walter
    JOURNAL OF MATHEMATICAL NEUROSCIENCE, 2012, 2
  • [6] Gradient descent for general reinforcement learning
    Baird, L
    Moore, A
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 11, 1999, 11: 968-974
  • [7] Policy gradient fuzzy reinforcement learning
    Wang, XN
    Xu, X
    He, HG
PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2004: 992-995
  • [8] A modification of gradient policy in reinforcement learning procedure
    Abas, Marcel
    Skripcak, Tomas
2012 15TH INTERNATIONAL CONFERENCE ON INTERACTIVE COLLABORATIVE LEARNING (ICL), 2012
  • [9] The delay-of-reinforcement gradient in maze learning
    Seward, JP
JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 1942, 30 (06): 464-474
  • [10] The gradient of the reinforcement landscape influences sensorimotor learning
    Cashaback, Joshua G. A.
    Lao, Christopher K.
    Palidis, Dimitrios J.
    Coltman, Susan K.
    McGregor, Heather R.
    Gribble, Paul L.
    PLOS COMPUTATIONAL BIOLOGY, 2019, 15 (03)