Offline Reinforcement Learning as Anti-exploration

Cited by: 0
Authors
Rezaeifar, Shideh [1]
Dadashi, Robert [2]
Vieillard, Nino [2,3]
Hussenot, Leonard [2,4]
Bachem, Olivier [2]
Pietquin, Olivier [2]
Geist, Matthieu [2]
Affiliations
[1] Univ Geneva, Geneva, Switzerland
[2] Google Res, Brain Team, Mountain View, CA USA
[3] Univ Lorraine, CNRS, INRIA, IECL, F-54000 Nancy, France
[4] Univ Lille, CNRS, INRIA, UMR 9189, CRIStAL, Villeneuve d'Ascq, France
Keywords
ALGORITHM
DOI
Not available
CLC Classification Number
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Offline Reinforcement Learning (RL) aims at learning an optimal control from a fixed dataset, without interactions with the system. An agent in this setting should avoid selecting actions whose consequences cannot be predicted from the data. This is the converse of exploration in RL, which favors such actions. We thus take inspiration from the literature on bonus-based exploration to design a new offline RL agent. The core idea is to subtract a prediction-based exploration bonus from the reward, instead of adding it for exploration. This allows the policy to stay close to the support of the dataset, and practically extends some previous pessimism-based offline RL methods to a deep learning setting with arbitrary bonuses. We also connect this approach to a more common regularization of the learned policy towards the data. Instantiated with a bonus based on the prediction error of a variational autoencoder, we show that our simple agent is competitive with the state of the art on a set of continuous control locomotion and manipulation tasks.
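To make the mechanism concrete, below is a minimal PyTorch sketch (an illustration, not the authors' released code): a conditional VAE is fit to reconstruct dataset actions given states, its per-sample reconstruction error plays the role of the bonus b(s, a), and that bonus is subtracted from the reward inside the TD target. The network sizes, the penalty weight `alpha`, and helper names such as `penalized_td_target` are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """VAE reconstructing an action conditioned on the state; its
    reconstruction error plays the role of the anti-exploration bonus."""
    def __init__(self, state_dim, action_dim, latent_dim=8, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.dec = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, action):
        mu, log_var = self.enc(torch.cat([state, action], -1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        return self.dec(torch.cat([state, z], -1)), mu, log_var

def vae_loss(vae, state, action, kl_weight=0.5):
    """Standard reconstruction + KL objective, trained on the fixed dataset."""
    recon, mu, log_var = vae(state, action)
    recon_loss = F.mse_loss(recon, action)
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).mean()
    return recon_loss + kl_weight * kl

def anti_exploration_bonus(vae, state, action):
    """Per-sample action reconstruction error: near zero on the dataset's
    support, large for out-of-distribution (state, action) pairs."""
    with torch.no_grad():
        recon, _, _ = vae(state, action)
        return ((recon - action) ** 2).mean(dim=-1)

def penalized_td_target(reward, next_q, done, bonus, alpha=1.0, gamma=0.99):
    """The bonus is subtracted from the reward (the converse of
    bonus-based exploration, where it would be added)."""
    return reward - alpha * bonus + gamma * (1.0 - done) * next_q
```

In words: on the dataset's support the VAE reconstructs actions well, so the penalty is near zero and learning proceeds as usual; for out-of-distribution actions the reconstruction error grows, making their estimated value pessimistic and keeping the learned policy close to the data.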
Pages: 8106-8114
Number of pages: 9