Sample-efficient Reinforcement Learning Representation Learning with Curiosity Contrastive Forward Dynamics Model

Cited by: 9
Authors
Nguyen, Thanh [1 ]
Luu, Tung M. [1 ]
Vu, Thang [1 ]
Yoo, Chang D. [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Fac Elect Engn, Daejeon 34141, South Korea
Keywords
LEVEL; GO;
DOI
10.1109/IROS51168.2021.9636536
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812
Abstract
Developing a reinforcement learning (RL) agent capable of performing complex control tasks directly from high-dimensional observations such as raw pixels remains a challenge, as further effort is needed to improve the sample efficiency and generalization of RL algorithms. This paper considers a learning framework, the Curiosity Contrastive Forward Dynamics Model (CCFDM), for achieving more sample-efficient RL directly from raw pixels. CCFDM incorporates a forward dynamics model (FDM) and performs contrastive learning to train its deep convolutional neural network-based image encoder (IE), extracting spatial and temporal information conducive to greater sample efficiency in RL. In addition, during training, CCFDM provides intrinsic rewards based on the FDM prediction error, encouraging the curiosity of the RL agent and thereby improving exploration. The diverse and less-repetitive observations produced by both this exploration strategy and the data augmentation used in contrastive learning improve not only sample efficiency but also generalization. Existing model-free RL methods such as Soft Actor-Critic, when built on top of CCFDM, outperform prior state-of-the-art pixel-based RL methods on the DeepMind Control Suite benchmark.
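To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of a CCFDM-style training step: an image encoder, a forward dynamics model, an InfoNCE-style contrastive loss over the FDM's predicted next latents, and an intrinsic reward from the prediction error. All module names, shapes, the bilinear similarity W, and the use of a separate target encoder are illustrative assumptions in the spirit of related contrastive RL methods, not the authors' reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    # Convolutional encoder mapping raw 84x84 pixel observations to a latent vector.
    def __init__(self, latent_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 20 * 20, latent_dim)  # 84x84 input -> 20x20 feature map

    def forward(self, obs):
        return self.fc(self.conv(obs).flatten(1))

class ForwardDynamics(nn.Module):
    # FDM: predicts the next latent state from the current latent and the action.
    def __init__(self, latent_dim=50, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def ccfdm_step(encoder, target_encoder, fdm, W, obs, action, next_obs):
    # Contrastive (InfoNCE) loss on FDM predictions plus a curiosity-style
    # intrinsic reward derived from the prediction error.
    z = encoder(obs)                           # query branch (augmented observation)
    with torch.no_grad():
        z_next = target_encoder(next_obs)      # key branch (slow-moving target)
    z_pred = fdm(z, action)                    # predicted next latent

    # Each predicted latent should match its own next latent against the
    # other samples in the batch, under a learned bilinear similarity W.
    logits = z_pred @ W @ z_next.T
    labels = torch.arange(logits.size(0), device=logits.device)
    contrastive_loss = F.cross_entropy(logits, labels)

    # Intrinsic reward: transitions the FDM predicts poorly are "novel",
    # so the agent is rewarded for visiting them.
    intrinsic_reward = (z_pred - z_next).pow(2).mean(dim=-1).detach()
    return contrastive_loss, intrinsic_reward

# Usage with random tensors standing in for a replay-buffer batch.
encoder, target_encoder, fdm = ImageEncoder(), ImageEncoder(), ForwardDynamics()
W = nn.Parameter(torch.eye(50))
obs, next_obs = torch.randn(8, 3, 84, 84), torch.randn(8, 3, 84, 84)
action = torch.randn(8, 6)
loss, r_int = ccfdm_step(encoder, target_encoder, fdm, W, obs, action, next_obs)

In practice the target encoder would be an exponential-moving-average copy of the query encoder, and the intrinsic reward would typically be scaled or decayed before being added to the environment reward; both details are omitted here for brevity.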
Pages: 3471 - 3477
Page count: 7