Batch Reinforcement Learning With a Nonparametric Off-Policy Policy Gradient

Cited: 2
Authors
Tosatto, Samuele [1 ]
Carvalho, Joao [2 ]
Peters, Jan [2 ]
Affiliations
[1] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2R3, Canada
[2] Tech Univ Darmstadt, FG Intelligent Autonomous Syst, D-64289 Darmstadt, Germany
Keywords
Mathematical model; Estimation; Kernel; Reinforcement learning; Monte Carlo methods; Task analysis; Closed-form solutions; Policy gradient; Nonparametric estimation; Iteration
DOI
10.1109/TPAMI.2021.3088063
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Off-policy reinforcement learning (RL) holds the promise of better data efficiency, as it allows sample reuse and potentially enables safe interaction with the environment. Current off-policy policy gradient methods suffer from either high bias or high variance, often delivering unreliable estimates. The price of this inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited and a very high sample cost hinders straightforward application. In this paper, we propose a nonparametric Bellman equation, which can be solved in closed form. The solution is differentiable w.r.t. the policy parameters and gives access to an estimate of the policy gradient. In this way, we avoid the high variance of importance-sampling approaches and the high bias of semi-gradient methods. We empirically analyze the quality of our gradient estimate against state-of-the-art methods, and show that it outperforms the baselines in terms of sample efficiency on classical control tasks.
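
To make the idea in the abstract concrete, below is a minimal, hypothetical sketch of a closed-form, differentiable Bellman solve over a fixed batch of transitions. It is not the authors' implementation: the Gaussian kernels, the linear policy mean, the surrogate objective, and all names (S, A, R, S2, theta, kernel, policy_gradient) are illustrative assumptions. Only the overall pattern mirrors what the abstract describes: build a sample-based transition surrogate from kernels and the current policy, solve Q = R + gamma * P Q in closed form, and differentiate the result w.r.t. the policy parameters.

```python
# Minimal sketch (not the authors' code) of a closed-form, differentiable
# Bellman solve over a fixed batch of off-policy transitions.
import torch

torch.manual_seed(0)

# Hypothetical batch of N transitions (s, a, r, s'); in practice these would
# come from a dataset collected by a behavior policy.
N, d_s, d_a, gamma = 200, 3, 1, 0.99
S  = torch.randn(N, d_s)   # states
A  = torch.randn(N, d_a)   # actions
R  = torch.randn(N)        # rewards
S2 = torch.randn(N, d_s)   # successor states

# Illustrative deterministic linear policy mean: pi_theta(s) = s @ theta.
theta = (0.1 * torch.randn(d_s, d_a)).requires_grad_()

def kernel(X, Y, bw=0.5):
    """Gaussian kernel between the rows of X and the rows of Y."""
    return torch.exp(-0.5 * torch.cdist(X, Y).pow(2) / bw ** 2)

def policy_gradient(theta):
    # Weight each successor state s'_i against every sampled pair (s_j, a_j):
    # state similarity times how close a_j is to the policy's action at s'_i.
    mean_next = S2 @ theta                    # policy action at successor states
    P = kernel(S2, S) * kernel(mean_next, A)  # unnormalized transition surrogate
    P = P / P.sum(dim=1, keepdim=True)        # make rows sum to one

    # Nonparametric Bellman equation in matrix form, Q = R + gamma * P Q,
    # solved in closed form; torch.linalg.solve is differentiable w.r.t. theta.
    Q = torch.linalg.solve(torch.eye(N) - gamma * P, R)

    J = Q.mean()                              # surrogate objective over the batch
    return torch.autograd.grad(J, theta)[0]   # gradient through the closed-form solve

grad = policy_gradient(theta)
print(grad.shape)  # torch.Size([3, 1])
```

Because the linear solve is differentiable, automatic differentiation propagates through the closed-form value estimate, so no importance-sampling ratios or semi-gradient approximations are needed; this is the mechanism the abstract alludes to.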
Pages: 5996-6010
Number of pages: 15
Related Papers
50 records in total
  • [21] Flexible Data Augmentation in Off-Policy Reinforcement Learning
    Rak, Alexandra
    Skrynnik, Alexey
    Panov, Aleksandr I.
    [J]. ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING (ICAISC 2021), PT I, 2021, 12854 : 224 - 235
  • [22] Off-Policy Deep Reinforcement Learning without Exploration
    Fujimoto, Scott
    Meger, David
    Precup, Doina
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [23] Mixed experience sampling for off-policy reinforcement learning
    Yu, Jiayu
    Li, Jingyao
    Lu, Shuai
    Han, Shuai
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 251
  • [24] Research on Off-Policy Evaluation in Reinforcement Learning: A Survey
    Wang, Shuo-Ru
    Niu, Wen-Jia
    Tong, En-Dong
    Chen, Tong
    Li, He
    Tian, Yun-Zhe
    Liu, Ji-Qiang
    Han, Zhen
    Li, Yi-Dong
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2022, 45 (09): : 1926 - 1945
  • [25] Off-Policy Reinforcement Learning for H∞ Control Design
    Luo, Biao
    Wu, Huai-Ning
    Huang, Tingwen
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2015, 45 (01) : 65 - 76
  • [26] Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation
    Tang, Yunhao
    Kozuno, Tadashi
    Rowland, Mark
    Munos, Remi
    Valko, Michal
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [27] Unified Off-Policy Learning to Rank: a Reinforcement Learning Perspective
    Zhang, Zeyu
    Su, Yi
    Yuan, Hui
    Wu, Yiran
    Balasubramanian, Rishab
    Wu, Qingyun
    Wang, Huazheng
    Wang, Mengdi
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [28] REINFORCEMENT LEARNING FOR SPOKEN DIALOGUE SYSTEMS USING OFF-POLICY NATURAL GRADIENT METHOD
    Jurcicek, Filip
    [J]. 2012 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2012), 2012, : 7 - 12
  • [29] Distributed off-Policy Actor-Critic Reinforcement Learning with Policy Consensus
    Zhang, Yan
    Zavlanos, Michael M.
    [J]. 2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC), 2019, : 4674 - 4679
  • [30] Off-Policy Deep Reinforcement Learning by Bootstrapping the Covariate Shift
    Gelada, Carles
    Bellemare, Marc G.
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 3647 - 3655