Reinforcement Learning with Augmented Data

Times Cited: 0
Authors
Laskin, Michael [1 ]
Lee, Kimin [1 ]
Stooke, Adam [1 ]
Pinto, Lerrel [2 ]
Abbeel, Pieter [1 ]
Srinivas, Aravind [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] New York Univ, New York, NY USA
Source
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, Vol. 33
Keywords
LEVEL;
DOI
Not available
CLC Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) data-efficiency of learning and (b) generalization to new environments. To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms. We perform the first extensive study of general data augmentations for RL on both pixel-based and state-based inputs, and introduce two new data augmentations - random translate and random amplitude scale. We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks. RAD sets a new state-of-the-art in terms of data-efficiency and final performance on the DeepMind Control Suite benchmark for pixel-based control as well as OpenAI Gym benchmark for state-based control. We further demonstrate that RAD significantly improves test-time generalization over existing methods on several OpenAI ProcGen benchmarks. Our RAD module and training code are available at https://www.github.com/MishaLaskin/rad.
Pages: 12
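The abstract above describes RAD as a plug-and-play augmentation module applied to observations before the agent's update step. Below is a minimal NumPy sketch of the two augmentations the paper introduces, random translate for pixel observations and random amplitude scale for state vectors. The function names, output size, and scaling range are illustrative assumptions for this sketch, not the authors' implementation; the actual code is in the repository linked in the abstract.

```python
# Minimal sketch of two RAD-style augmentations (illustrative only).
# Assumptions: pixel batches are (N, C, H, W) arrays, state batches are (N, D);
# the canvas size and the [0.8, 1.2] scaling range are placeholder choices.
import numpy as np


def random_translate(imgs, out_size):
    """Place each image at a random offset inside a zero-padded canvas."""
    n, c, h, w = imgs.shape
    assert out_size >= h and out_size >= w
    out = np.zeros((n, c, out_size, out_size), dtype=imgs.dtype)
    for i in range(n):
        top = np.random.randint(0, out_size - h + 1)
        left = np.random.randint(0, out_size - w + 1)
        out[i, :, top:top + h, left:left + w] = imgs[i]
    return out


def random_amplitude_scale(states, low=0.8, high=1.2):
    """Multiply each state vector by a uniformly sampled scalar."""
    scales = np.random.uniform(low, high, size=(states.shape[0], 1))
    return states * scales.astype(states.dtype)


# Usage: augment a batch of pixel observations and a batch of state vectors
# before they are fed to the RL algorithm's update.
pixel_batch = np.random.rand(8, 3, 84, 84).astype(np.float32)
state_batch = np.random.rand(8, 17).astype(np.float32)
translated = random_translate(pixel_batch, out_size=108)  # (8, 3, 108, 108)
scaled = random_amplitude_scale(state_batch)              # (8, 17)
```

Because the augmentations act only on observations, they can be dropped in front of most off-the-shelf RL algorithms without changing the loss or the network architecture, which is the sense in which the abstract calls RAD "plug-and-play".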
Related Papers (50 records in total)
  • [41] Learning with Augmented Class by Exploiting Unlabeled Data. Da, Qing; Yu, Yang; Zhou, Zhi-Hua. Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014: 1760-1766.
  • [42] Learning in practice: reinforcement learning-based traffic signal control augmented with actuated control. Lu, Yunxue; Li, Changze; Wang, Hao. Transportation Planning and Technology, 2024.
  • [43] Sample-Efficient Learning to Solve a Real-World Labyrinth Game Using Data-Augmented Model-Based Reinforcement Learning. Bi, Thomas; D'Andrea, Raffaello. 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), 2024: 7455-7460.
  • [44] Vision-Language Recommendation via Attribute Augmented Multimodal Reinforcement Learning. Yu, Tong; Shen, Yilin; Zhang, Ruiyi; Zeng, Xiangyu; Jin, Hongxia. Proceedings of the 27th ACM International Conference on Multimedia (MM'19), 2019: 39-47.
  • [45] CDARL: a contrastive discriminator-augmented reinforcement learning framework for sequential recommendations. Liu, Zhuang; Ma, Yunpu; Hildebrandt, Marcel; Ouyang, Yuanxin; Xiong, Zhang. Knowledge and Information Systems, 2022, 64: 2239-2265.
  • [46] Imagination-Augmented Reinforcement Learning Framework for Variable Speed Limit Control. Li, Duo; Lasenby, Joan. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(02): 1384-1393.
  • [47] Decision Making for Autonomous Driving via Augmented Adversarial Inverse Reinforcement Learning. Wang, Pin; Liu, Dapeng; Chen, Jiayu; Li, Hanhan; Chan, Ching-Yao. 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 2021: 1036-1042.
  • [48] Learning a Diagnostic Strategy on Medical Data With Deep Reinforcement Learning. Zhu, Mengxiao; Zhu, Haogang. IEEE Access, 2021, 9: 84122-84133.
  • [49] Edge intelligence computing for mobile augmented reality with deep reinforcement learning approach. Chen, Miaojiang; Liu, Wei; Wang, Tian; Liu, Anfeng; Zeng, Zhiwen. Computer Networks, 2021, 195.
  • [50] Augmented Memory: Sample-Efficient Generative Molecular Design with Reinforcement Learning. Guo, Jeff; Schwaller, Philippe. JACS Au, 2024, 4(06): 2160-2172.