Selective Data Augmentation for Improving the Performance of Offline Reinforcement Learning

Cited: 0
Authors
Han, Jungwoo [1 ]
Kim, Jinwhan [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Dept Mech Engn, Daejeon 34141, South Korea
Keywords
Offline Reinforcement Learning; Data Augmentation; Variational Autoencoder;
DOI
None available
CLC Number
TP [automation technology; computer technology];
Subject Classification Number
0812 ;
Abstract
This study proposes a new data augmentation technique for offline reinforcement learning (RL). Rather than randomly choosing data points for augmentation, our method selectively draws data from sparse subspaces of the dataset, so that regions underrepresented in the original data are effectively augmented. These subspaces are represented in the latent space learned by a variational autoencoder (VAE). Data are sampled from the latent space and mapped back to the original space by the VAE decoder, and the resulting samples are added to the original dataset. Because the VAE generates new points from a latent space that captures the original data distribution, the virtual data do not deviate severely from the original data. We evaluate our method on several offline RL datasets generated from OpenAI Gym benchmark control simulations, which mainly use state-based inputs.
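The pipeline described in the abstract (encode the dataset, locate sparse latent regions, sample near them, decode, and append) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `encode`/`decode` callables stand in for a trained VAE's encoder and decoder, and the k-nearest-neighbour distance used as a sparsity score is an assumed, simple way to realize "selectively choosing data from sparse subspaces".

```python
import numpy as np

def knn_sparsity(z, k=5):
    # Sparsity score per latent point: distance to its k-th nearest
    # neighbour (larger distance = sparser neighbourhood).
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    d.sort(axis=1)            # column 0 is the self-distance (zero)
    return d[:, k]

def selective_augment(data, encode, decode, n_new=100, k=5,
                      noise=0.1, rng=None):
    # encode/decode: hypothetical stand-ins for a trained VAE's
    # encoder mean and decoder; data: (N, D) array of transitions.
    rng = np.random.default_rng(rng)
    z = encode(data)
    score = knn_sparsity(z, k)
    # Draw anchor latents with probability proportional to sparsity,
    # so sparse regions are preferentially augmented.
    p = score / score.sum()
    idx = rng.choice(len(z), size=n_new, p=p)
    # Perturb anchors slightly so new samples stay near the data manifold.
    z_new = z[idx] + noise * rng.standard_normal((n_new, z.shape[1]))
    return np.vstack([data, decode(z_new)])
```

With identity `encode`/`decode`, calling `selective_augment(data, ...)` on a `(50, 3)` array with `n_new=20` returns a `(70, 3)` array: the original data plus 20 virtual points concentrated in its sparse regions.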
Pages: 222-226
Page count: 5
Related Papers
50 records total
  • [1] Boundary Data Augmentation for Offline Reinforcement Learning
    SHEN Jiahao
    JIANG Ke
    TAN Xiaoyang
    [J]. ZTE Communications, 2023, 21 (03) : 29 - 36
  • [2] Uncertainty-Aware Data Augmentation for Offline Reinforcement Learning
    Su, Yunjie
    Kong, Yilun
    Wang, Xueqian
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [3] A Swapping Target Q-Value Technique for Data Augmentation in Offline Reinforcement Learning
    Joo, Ho-Taek
    Baek, In-Chang
    Kim, Kyung-Joong
    [J]. IEEE ACCESS, 2022, 10 : 57369 - 57382
  • [4] ACAMDA: Improving Data Efficiency in Reinforcement Learning Through Guided Counterfactual Data Augmentation
    Sun, Yuewen
    Wang, Erli
    Huang, Biwei
    Lu, Chaochao
    Feng, Lu
    Sun, Changyin
    Zhang, Kun
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, : 15193 - 15201
  • [5] Robust Reinforcement Learning using Offline Data
    Panaganti, Kishan
    Xu, Zaiyan
    Kalathil, Dileep
    Ghavamzadeh, Mohammad
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Federated Offline Reinforcement Learning With Multimodal Data
    Wen, Jiabao
    Dai, Huiao
    He, Jingyi
    Xi, Meng
    Xiao, Shuai
    Yang, Jiachen
    [J]. IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 4266 - 4276
  • [7] Efficient Online Reinforcement Learning with Offline Data
    Ball, Philip J.
    Smith, Laura
    Kostrikov, Ilya
    Levine, Sergey
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [8] Improving Active Learning Performance through the Use of Data Augmentation
    Fonseca, Joao
    Bacao, Fernando
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2023, 2023
  • [9] K-mixup: Data augmentation for offline reinforcement learning using mixup in a Koopman invariant subspace
    Jang, Junwoo
    Han, Jungwoo
    Kim, Jinwhan
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2023, 225
  • [10] How to Leverage Unlabeled Data in Offline Reinforcement Learning
    Yu, Tianhe
    Kumar, Aviral
    Chebotar, Yevgen
    Hausman, Karol
    Finn, Chelsea
    Levine, Sergey
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,