Rogue-Gym: A New Challenge for Generalization in Reinforcement Learning

Cited by: 1
Authors
Kanagawa, Yuji [1 ]
Kaneko, Tomoyuki [2 ]
Affiliations
[1] Univ Tokyo, Grad Sch Arts & Sci, Tokyo, Japan
[2] Univ Tokyo, Interfac Initiat Informat Studies, Tokyo, Japan
Keywords
roguelike games; reinforcement learning; generalization; domain adaptation; neural networks; environment
DOI
10.1109/cig.2019.8848075
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose Rogue-Gym, a simple, classic-style roguelike game built for evaluating generalization in reinforcement learning (RL). Combined with recent progress in deep neural networks, RL has successfully trained human-level agents without human knowledge in many games, such as those for the Atari 2600. However, it has been pointed out that agents trained with RL methods often overfit their training environment and perform poorly in slightly different environments. To investigate this problem, several research environments with procedural content generation have been proposed. Following these studies, we propose the use of roguelikes as a benchmark for evaluating the generalization ability of RL agents. In Rogue-Gym, agents need to explore dungeons that are structured differently each time a new game starts. Thanks to the highly diverse dungeon structures, we believe that the generalization benchmark posed by Rogue-Gym is sufficiently fair. In our experiments, we evaluate a standard reinforcement learning method, PPO, with and without enhancements for generalization. The results show that some enhancements believed to be effective fail to mitigate overfitting in Rogue-Gym, while others slightly improve generalization.
Pages: 8
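The evaluation protocol the abstract describes, training an agent on procedurally generated dungeons and then measuring its return on dungeons it has never seen, can be sketched roughly as follows. This is a minimal illustration only: it assumes a classic Gym-style interface (reset() returning an observation, step() returning an (obs, reward, done, info) tuple) and a per-seed dungeon layout; the agent object with an act() method and the RogueEnv wiring in the trailing comment are hypothetical placeholders, not the confirmed Rogue-Gym or paper API.

import random
from typing import Callable, List, Sequence


def evaluate(make_env: Callable[[int], object], agent,
             seeds: Sequence[int], episodes_per_seed: int = 3,
             max_steps: int = 500) -> float:
    """Average undiscounted return of `agent` over the given dungeon seeds."""
    returns: List[float] = []
    for seed in seeds:
        env = make_env(seed)                  # each seed yields a different dungeon
        for _ in range(episodes_per_seed):
            obs = env.reset()
            total, done, steps = 0.0, False, 0
            while not done and steps < max_steps:
                action = agent.act(obs)       # evaluation-mode (e.g. greedy) policy
                obs, reward, done, _info = env.step(action)
                total += reward
                steps += 1
            returns.append(total)
    return sum(returns) / len(returns)


def generalization_gap(make_env, agent, n_train: int = 32, n_test: int = 32) -> float:
    """Train-seed return minus held-out-seed return; a larger gap means more overfitting."""
    rng = random.Random(0)
    seeds = rng.sample(range(10_000), n_train + n_test)
    train_seeds, test_seeds = seeds[:n_train], seeds[n_train:]
    # Training of `agent` (e.g. with PPO on train_seeds) is assumed to have
    # happened beforehand; this function only measures the resulting gap.
    return evaluate(make_env, agent, train_seeds) - evaluate(make_env, agent, test_seeds)


# Hypothetical wiring against the Rogue-Gym Python bindings (names unverified):
#   from rogue_gym.envs import RogueEnv
#   gap = generalization_gap(lambda s: RogueEnv(seed=s), trained_ppo_agent)

A positive gap indicates the kind of overfitting the paper investigates; the generalization enhancements evaluated with PPO aim to shrink it.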
Related papers
50 records in total
  • [41] Enhancing Visual Generalization in Reinforcement Learning with Cycling Augmentation
    Sun, Shengjie
    Lyu, Jiafei
    Li, Lu
    Guo, Jiazhe
    Yan, Mengbei
    Liu, Runze
    Li, Xiu
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT IV, 2024, 15019 : 397 - 411
  • [42] LevDoom: A Benchmark for Generalization on Level Difficulty in Reinforcement Learning
    Tomilin, Tristan
    Dai, Tianhong
    Fang, Meng
    Pechenizkiy, Mykola
    2022 IEEE CONFERENCE ON GAMES, COG, 2022, : 72 - 79
  • [43] Clustering subspace generalization to obtain faster reinforcement learning
    Hashemzadeh, Maryam
    Hosseini, Reshad
    Ahmadabadi, Majid Nili
    EVOLVING SYSTEMS, 2020, 11 : 89 - 103
  • [44] Mix-Spectrum for Generalization in Visual Reinforcement Learning
    Lee, Jeong Woon
    Hwang, Hyoseok
    IEEE ACCESS, 2025, 13 : 7939 - 7950
  • [45] Clustering subspace generalization to obtain faster reinforcement learning
    Hashemzadeh, Maryam
    Hosseini, Reshad
    Ahmadabadi, Majid Nili
    EVOLVING SYSTEMS, 2020, 11 (01) : 89 - 103
  • [46] Adversarial Discriminative Feature Separation for Generalization in Reinforcement Learning
    Liu, Yong
    Wu, Chunwei
    Xi, Xidong
    Li, Yan
    Cao, Guitao
    Cao, Wenming
    Wang, Hong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [47] Scaling, Control and Generalization in Reinforcement Learning Level Generators
    Earle, Sam
    Jiang, Zehua
    Togelius, Julian
    2024 IEEE CONFERENCE ON GAMES, COG 2024, 2024,
  • [48] Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning
    Hanjie, Austin W.
    Zhong, Victor
    Narasimhan, Karthik
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [49] Multi-Agent Reinforcement Learning for Multiple Rogue Drone Interception
    Valianti, Panayiota
    Malialis, Kleanthis
    Kolios, Panayiotis
    Ellinas, Georgios
    2023 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS, ICUAS, 2023, : 1037 - 1044
  • [50] BaziGooshi: A Hybrid Model of Reinforcement Learning for Generalization in Gameplay
    Karimi, Sara
    Asadi, Sahar
    Payberah, Amir H.
    IEEE TRANSACTIONS ON GAMES, 2024, 16 (03) : 722 - 734