Procedural Content Generation using Reinforcement Learning and Entropy Measure as Feedback

Cited by: 1
Authors
Moreira Dutra, Paulo Vinicius [1 ]
Villela, Saulo Moraes [1 ]
Neto, Raul Fonseca [1 ]
Affiliations
[1] Univ Fed Juiz de Fora, Dept Comp Sci, Juiz De Fora, MG, Brazil
Keywords
procedural content generation; reinforcement learning; entropy; machine learning;
DOI
10.1109/SBGAMES56371.2022.9961076
CLC (Chinese Library Classification)
TP39 [Computer applications];
Subject classification codes
081203 ; 0835 ;
Abstract
In this work, we investigate how procedural content generation can be approached with reinforcement learning and mixed-initiative design. A second question discussed here is how metrics can be used to evaluate the diversity of the generated levels. Our main hypothesis is that scenario models provided by an expert human level designer can guide reinforcement learning agents in generating new scenarios. The levels provided by the designer are split into segments, or blocks, which are used to compose the new scenario structures. We also propose a new reward function based on entropy to measure the diversity of the generated scenarios. We trained our model on three different 2D dungeon-crawler game environments. Analyzing the results through the entropy values shows that our approach can generate large levels with a diverse mix of segments.
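The abstract's entropy-based diversity reward can be illustrated with a short sketch. The paper's exact formulation is not given in this record, so the following is only an assumption: Shannon entropy computed over the distribution of segment identifiers in a generated level, where a higher value rewards a more even mix of segment types.

```python
from collections import Counter
from math import log2

def segment_entropy(level):
    """Shannon entropy (in bits) of the segment distribution of a level.

    `level` is a sequence of segment identifiers. Higher entropy means the
    level uses a more diverse, more evenly distributed mix of segments, which
    can serve as a reward signal for a level-generating agent.
    """
    counts = Counter(level)
    total = len(level)
    return sum(-(c / total) * log2(c / total) for c in counts.values())

# A level repeating a single segment has zero entropy (no diversity),
# while an even mix of four segment types yields the maximum, 2 bits.
print(segment_entropy(["wall"] * 8))              # -> 0.0
print(segment_entropy(["a", "b", "c", "d"] * 2))  # -> 2.0
```

Under this assumption, the agent's reward at each step could simply be the entropy of the level built so far, pushing it away from degenerate levels that repeat one block.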
Pages: 7-12
Page count: 6