Sample-efficient Reinforcement Learning Representation Learning with Curiosity Contrastive Forward Dynamics Model

Cited by: 9
Authors
Nguyen, Thanh [1 ]
Luu, Tung M. [1 ]
Vu, Thang [1 ]
Yoo, Chang D. [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Fac Elect Engn, Daejeon 34141, South Korea
Keywords
LEVEL; GO;
DOI
10.1109/IROS51168.2021.9636536
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Developing a reinforcement learning (RL) agent capable of performing complex control tasks directly from high-dimensional observations such as raw pixels remains a challenge, as the sample efficiency and generalization of RL algorithms still need improvement. This paper proposes a learning framework, the Curiosity Contrastive Forward Dynamics Model (CCFDM), to achieve more sample-efficient RL directly from raw pixels. CCFDM incorporates a forward dynamics model (FDM) and performs contrastive learning to train its deep convolutional neural network-based image encoder (IE), extracting spatial and temporal information that improves the sample efficiency of RL. In addition, during training CCFDM provides intrinsic rewards based on the FDM prediction error, encouraging curiosity-driven exploration by the RL agent. The diverse, less repetitive observations yielded by this exploration strategy, together with the data augmentation used in contrastive learning, improve not only sample efficiency but also generalization. Existing model-free RL methods such as Soft Actor-Critic, when built on top of CCFDM, outperform prior state-of-the-art pixel-based RL methods on the DeepMind Control Suite benchmark.
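The abstract describes three interacting pieces: a convolutional image encoder trained by contrastive learning, a forward dynamics model that predicts the next latent from the current latent and action, and an intrinsic curiosity reward derived from the FDM prediction error. The PyTorch sketch below illustrates how these pieces could fit together; it is not the authors' released code, and the network sizes, the 84x84 input resolution, the momentum-target encoder, the InfoNCE formulation, and names such as ccfdm_losses are assumptions made for the example.

# Minimal illustrative sketch of the CCFDM idea (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Convolutional image encoder (IE) mapping raw pixels to a compact latent.
    def __init__(self, latent_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer flattened size for an assumed 84x84 input
            n = self.conv(torch.zeros(1, 3, 84, 84)).shape[1]
        self.fc = nn.Linear(n, latent_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs))

class ForwardDynamics(nn.Module):
    # FDM: predicts the next latent state from the current latent and action.
    def __init__(self, latent_dim=50, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def ccfdm_losses(encoder, target_encoder, fdm, obs, action, next_obs,
                 temperature=0.1):
    # Contrastive representation loss plus a curiosity-based intrinsic reward.
    z = encoder(obs)                       # online latent for the current frame
    with torch.no_grad():
        z_next = target_encoder(next_obs)  # momentum-target latent (no gradient)
    z_pred = fdm(z, action)                # FDM prediction of the next latent

    # InfoNCE: the true next latent is the positive; other batch rows are negatives.
    logits = F.normalize(z_pred, dim=-1) @ F.normalize(z_next, dim=-1).T
    labels = torch.arange(logits.shape[0], device=logits.device)
    contrastive_loss = F.cross_entropy(logits / temperature, labels)

    # Curiosity: per-sample FDM prediction error serves as the intrinsic reward.
    intrinsic_reward = (z_pred - z_next).pow(2).mean(dim=-1).detach()
    return contrastive_loss, intrinsic_reward

# Usage on dummy data (batch of 8 transitions):
enc, tgt, fdm = Encoder(), Encoder(), ForwardDynamics()
tgt.load_state_dict(enc.state_dict())  # target starts as a copy; updated by EMA
obs = torch.rand(8, 3, 84, 84); act = torch.rand(8, 6); nxt = torch.rand(8, 3, 84, 84)
loss, r_int = ccfdm_losses(enc, tgt, fdm, obs, act, nxt)

In a full agent, the contrastive loss would be optimized jointly with the encoder and FDM, while the intrinsic reward would be scaled and added to the environment reward before the Soft Actor-Critic update, as the abstract describes.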
Pages: 3471 - 3477
Number of pages: 7