Reinforcement Learning with Augmented Data

Citations: 0
Authors
Laskin, Michael [1 ]
Lee, Kimin [1 ]
Stooke, Adam [1 ]
Pinto, Lerrel [2 ]
Abbeel, Pieter [1 ]
Srinivas, Aravind [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] New York Univ, New York, NY USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33 (NEURIPS 2020), 2020, Vol. 33
Keywords
LEVEL;
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) data-efficiency of learning and (b) generalization to new environments. To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms. We perform the first extensive study of general data augmentations for RL on both pixel-based and state-based inputs, and introduce two new data augmentations - random translate and random amplitude scale. We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks. RAD sets a new state-of-the-art in terms of data-efficiency and final performance on the DeepMind Control Suite benchmark for pixel-based control as well as OpenAI Gym benchmark for state-based control. We further demonstrate that RAD significantly improves test-time generalization over existing methods on several OpenAI ProcGen benchmarks. Our RAD module and training code are available at https://www.github.com/MishaLaskin/rad.
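For orientation, the sketch below illustrates the two augmentations the abstract introduces, random translate and random amplitude scale, as standalone NumPy functions. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, scaling bounds, and output canvas size are chosen here for illustration, and the actual code is in the repository linked above.

```python
# A minimal sketch, assuming NumPy arrays; bounds and sizes below are
# illustrative and not taken from the paper or its repository.
import numpy as np

def random_amplitude_scale(state, low=0.6, high=1.2):
    """Multiply a state-based observation by one random scalar (hypothetical bounds)."""
    return state * np.random.uniform(low, high)

def random_translate(image, out_size=108):
    """Place an (H, W, C) image at a random position inside a zero-padded
    out_size x out_size canvas (out_size chosen here for illustration)."""
    h, w, c = image.shape
    assert out_size >= h and out_size >= w
    canvas = np.zeros((out_size, out_size, c), dtype=image.dtype)
    top = np.random.randint(0, out_size - h + 1)
    left = np.random.randint(0, out_size - w + 1)
    canvas[top:top + h, left:left + w] = image
    return canvas

# Example: augment observations before feeding them to the agent.
obs_state = np.random.randn(24).astype(np.float32)                      # e.g. a proprioceptive state
obs_pixels = np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8)   # e.g. a rendered frame
aug_state = random_amplitude_scale(obs_state)
aug_pixels = random_translate(obs_pixels)
```

As the abstract describes, RAD applies such transformations as a plug-and-play step on the observations, leaving the underlying RL algorithm unchanged.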
Pages: 12
Related papers
50 items in total
  • [21] Zombies Arena: fusion of reinforcement learning with augmented reality on NPC
    Razzaq, Saad
    Maqbool, Fahad
    Khalid, Maham
    Tariq, Iram
    Zahoor, Aqsa
    Ilyas, Muhammad
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2018, 21 (01): 655-666
  • [22] Demonstration and offset augmented meta reinforcement learning with sparse rewards
    Li, Haorui
    Liang, Jiaqi
    Wang, Xiaoxuan
    Jiang, Chengzhi
    Li, Linjing
    Zeng, Daniel
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (04)
  • [23] Augmented Lagrangian Method for Instantaneously Constrained Reinforcement Learning Problems
    Li, Jingqi
    Fridovich-Keil, David
    Sojoudi, Somayeh
    Tomlin, Claire J.
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 2982 - 2989
  • [24] Diversity-augmented intrinsic motivation for deep reinforcement learning
    Dai, Tianhong
    Du, Yali
    Fang, Meng
    Bharath, Anil Anthony
    NEUROCOMPUTING, 2022, 468 : 396 - 406
  • [25] Delivering Resources for Augmented Reality by UAVs: a Reinforcement Learning Approach
    Brunori, Damiano
    Colonnese, Stefania
    Cuomo, Francesca
    Flore, Giovanna
    Iocchi, Luca
    FRONTIERS IN COMMUNICATIONS AND NETWORKS, 2021, 2
  • [27] Augmented Ultrasonic Data for Machine Learning
    Virkkunen, Iikka
    Koskinen, Tuomas
    Jessen-Juhler, Oskari
    Rinta-aho, Jari
    JOURNAL OF NONDESTRUCTIVE EVALUATION, 2021, 40 (01)
  • [28] DRL-Tomo: a deep reinforcement learning-based approach to augmented data generation for network tomography
    Hou, Changsheng
    Hou, Bingnan
    Li, Xionglve
    Zhou, Tongqing
    Chen, Yingwen
    Cai, Zhiping
COMPUTER JOURNAL, 2024, 67 (10): 2995-3008
  • [29] Data Augmented Incremental Learning (DAIL) for Unsupervised Data
    Madhusudhanan, Sathya
    Jaganathan, Suresh
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2022, E105D (06) : 1185 - 1195
  • [30] Concept learning through deep reinforcement learning with memory-augmented neural networks
    Shi, Jing
    Xu, Jiaming
    Yao, Yiqun
    Xu, Bo
    NEURAL NETWORKS, 2019, 110 : 47 - 54