Representations for Stable Off-Policy Reinforcement Learning

Cited by: 0
Authors
Ghosh, Dibya [1 ]
Bellemare, Marc G. [1 ]
Affiliation
[1] Google Research, Mountain View, CA 94043 USA
Keywords
FRAMEWORK
DOI
None available
CLC classification
TP [Automation technology, computer technology]
Subject classification
0812
Abstract
Reinforcement learning with function approximation can be unstable and even divergent, especially when combined with off-policy learning and Bellman updates. In deep reinforcement learning, these issues have been dealt with empirically by adapting and regularizing the representation, in particular with auxiliary tasks. This suggests that representation learning may provide a means to guarantee stability. In this paper, we formally show that there are indeed nontrivial state representations under which the canonical TD algorithm is stable, even when learning off-policy. We analyze representation learning schemes that are based on the transition matrix of a policy, such as proto-value functions, along three axes: approximation error, stability, and ease of estimation. In the most general case, we show that a Schur basis provides convergence guarantees, but is difficult to estimate from samples. For a fixed reward function, we find that an orthogonal basis of the corresponding Krylov subspace is an even better choice. We conclude by empirically demonstrating that these stable representations can be learned using stochastic gradient descent, opening the door to improved techniques for representation learning with deep networks.
Pages: 10
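The abstract's central claim concerns the stability of linear TD(0): with a feature matrix Phi, an off-policy state weighting D, and the target policy's transition matrix P, the expected update is driven by A = Phi^T D (gamma*P - I) Phi, and the mean dynamics are stable when A is Hurwitz (all eigenvalues have negative real part). The following is a minimal illustrative sketch, not the authors' code: it builds an orthogonal basis of the Krylov subspace span{r, Pr, ..., P^{k-1} r} for a small synthetic MDP (the MDP size, number of features, discount, and random seed are assumptions made here) and numerically checks the Hurwitz condition under an arbitrary off-policy weighting.

```python
# Illustrative sketch (not the paper's code): numerically checking the
# off-policy linear TD(0) stability condition for a Krylov-subspace
# representation on a small synthetic MDP. The MDP size, feature count,
# discount, and seed below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_features, gamma = 8, 4, 0.9

# Random row-stochastic transition matrix P of the target policy, reward r.
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
r = rng.random(n_states)

# Orthogonal basis of the Krylov subspace span{r, P r, ..., P^{k-1} r},
# obtained here by a QR factorization of the stacked Krylov vectors.
K = np.column_stack([np.linalg.matrix_power(P, i) @ r for i in range(n_features)])
Phi, _ = np.linalg.qr(K)

# An arbitrary off-policy state weighting d (not the stationary distribution of P).
d = rng.random(n_states)
d /= d.sum()
D = np.diag(d)

# Expected TD(0) update: theta <- theta + alpha * (A @ theta + b), with
# A = Phi^T D (gamma P - I) Phi and b = Phi^T D r. The mean dynamics are
# stable when every eigenvalue of A has a negative real part (A is Hurwitz).
A = Phi.T @ D @ (gamma * P - np.eye(n_states)) @ Phi
eigs = np.linalg.eigvals(A)
print("eigenvalues of A:", np.round(eigs, 4))
print("Hurwitz (stable):", bool(np.all(eigs.real < 0)))
```

This sketch only checks the condition numerically for one weighting; it is not a proof. For the general, reward-independent case described in the abstract, Phi would instead span an invariant subspace of P, e.g. obtained from a Schur decomposition; the Krylov construction above corresponds to the fixed-reward case.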