Pre-training with Augmentations for Efficient Transfer in Model-Based Reinforcement Learning

Cited by: 0
Authors
Esteves, Bernardo [1 ,2 ]
Vasco, Miguel [1 ,2 ]
Melo, Francisco S. [1 ,2 ]
Affiliations
[1] INESC ID, Lisbon, Portugal
[2] Univ Lisbon, Inst Super Tecn, Lisbon, Portugal
Keywords
Reinforcement learning; Transfer learning; Representation learning;
DOI
10.1007/978-3-031-49008-8_11
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract
This work explores pre-training as a strategy to allow reinforcement learning (RL) algorithms to efficiently adapt to new (albeit similar) tasks. We argue for introducing variability during the pre-training phase, in the form of augmentations to the observations of the agent, to improve the sample efficiency of the fine-tuning stage. We categorize such variability as perceptual, dynamic, and semantic augmentations, which can be easily employed in standard pre-training methods. We perform extensive evaluations of our proposed augmentation scheme in model-based algorithms, across multiple scenarios of increasing complexity. The results consistently show that our augmentation scheme significantly improves the efficiency of fine-tuning to novel tasks, outperforming other state-of-the-art pre-training approaches.
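The perceptual augmentations mentioned in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical example, not the authors' implementation: it applies a random-shift augmentation to an image observation (a common perceptual augmentation in pixel-based RL) before the observation would be fed to the world model during pre-training. The function name `random_shift` and the pad size are illustrative assumptions.

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Perceptual augmentation: pad the (H, W, C) image observation at its
    borders, then crop a random window of the original size, producing a
    small random translation while preserving the observation's shape."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w, _ = obs.shape
    # Replicate border pixels so the shifted image has no black margins.
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w, :]

# Sketch of usage during pre-training: augment each sampled observation
# before passing it to the representation / world-model learner.
obs = np.zeros((64, 64, 3), dtype=np.uint8)
aug = random_shift(obs)
assert aug.shape == obs.shape
```

Dynamic and semantic augmentations, by contrast, would alter the environment's transitions or task semantics rather than the raw pixels, so they act on the data-collection process instead of a post-hoc image transform like the one above.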
Pages: 133-145
Number of pages: 13
Related Papers
50 records in total
  • [21] Efficient learning for spoken language understanding tasks with word embedding based pre-training
    Luan, Yi
    Watanabe, Shinji
    Harsham, Bret
    16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5, 2015, : 1398 - 1402
  • [22] Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning
    Itahara, Sohei
    Nishio, Takayuki
    Morikura, Masahiro
    Yamamoto, Koji
    2020 IEEE 92ND VEHICULAR TECHNOLOGY CONFERENCE (VTC2020-FALL), 2020,
  • [24] Pre-Training Acquisition Functions by Deep Reinforcement Learning for Fixed Budget Active Learning
    Taguchi, Yusuke
    Hino, Hideitsu
    Kameyama, Keisuke
    NEURAL PROCESSING LETTERS, 2021, 53 (03) : 1945 - 1962
  • [25] Pre-training with non-expert human demonstration for deep reinforcement learning
    De La Cruz, Gabriel V., Jr.
    Du, Yunshu
    Taylor, Matthew E.
    KNOWLEDGE ENGINEERING REVIEW, 2019, 34
  • [26] Sample-efficient model-based reinforcement learning for quantum control
    Khalid, Irtaza
    Weidner, Carrie A.
    Jonckheere, Edmond A.
    Schirmer, Sophie G.
    Langbein, Frank C.
    PHYSICAL REVIEW RESEARCH, 2023, 5 (04):
  • [27] Efficient Neural Network Pruning Using Model-Based Reinforcement Learning
    Bencsik, Blanka
    Szemenyei, Marton
    2022 INTERNATIONAL SYMPOSIUM ON MEASUREMENT AND CONTROL IN ROBOTICS (ISMCR), 2022, : 130 - 137
  • [28] Efficient state synchronisation in model-based testing through reinforcement learning
    Türker, Uraz Cengiz
    Hierons, Robert M.
    Mousavi, Mohammad Reza
    Tyukin, Ivan Y.
    2021 36TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING ASE 2021, 2021, : 368 - 380
  • [29] Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation
    Corneil, Dane
    Gerstner, Wulfram
    Brea, Johanni
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [30] Efficient Exploration in Continuous-time Model-based Reinforcement Learning
    Treven, Lenart
    Hübotter, Jonas
    Sukhija, Bhavya
    Dörfler, Florian
    Krause, Andreas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,