Developing and Testing a New Reinforcement Learning Toolkit with Unreal Engine

Cited by: 2
Authors
Sapio, Francesco [1 ]
Ratini, Riccardo [1 ]
Affiliations
[1] Sapienza Univ Rome, Rome, Italy
Keywords
Evaluation methods and techniques; Unreal Engine; Reinforcement Learning;
DOI
10.1007/978-3-031-05643-7_21
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work we address the main limitations of current state-of-the-art RL development and benchmarking platforms, namely the lack of a user interface, a closed-box approach to scenarios, the lack of realistic environments, and the difficulty of transferring results to real applications, by introducing a new development framework for reinforcement learning built on the graphics engine Unreal Engine 4. The Unreal Reinforcement Learning Toolkit (URLT) was designed to be modular, flexible, and easy to use even for non-expert users. To this end, we developed flexible, modular APIs through which the major learning techniques can be set up. Using these APIs, users can define all the elements of an RL problem, such as agents, algorithms, tasks, and scenarios, and combine them with each other to create new solutions. By taking advantage of the editor's UI, users can select and execute existing scenarios and change the parameters of agents and tasks without recompiling the code. Users can also create new scenarios from scratch with an intuitive level editor. Furthermore, task design is made accessible to non-expert users through Blueprint, Unreal's node-oriented visual programming system. To validate the tool, we produced a starter pack containing a suite of state-of-the-art RL algorithms, example scenarios, a small library of props, and a couple of trainable agents. Moreover, we ran a user evaluation in which participants were asked to try URLT and a competing tool (OpenAI Gym) and then to rate both with a questionnaire. The results showed a general preference for URLT on all key parameters of the test.
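The abstract describes decomposing an RL problem into interchangeable agents, tasks, and scenarios that users combine freely. The following is a minimal sketch of that compositional idea in plain Python; all class and method names here are illustrative assumptions, not the actual URLT API, and the agent is a standard tabular Q-learner rather than the toolkit's algorithm suite.

```python
import random

class GridTask:
    """Hypothetical task: reward and termination for reaching cell `goal`."""
    def __init__(self, goal):
        self.goal = goal
    def reward(self, state):
        return 1.0 if state == self.goal else -0.01
    def done(self, state):
        return state == self.goal

class GridScenario:
    """Hypothetical scenario: a 1-D grid of `size` cells; actions move -1/+1."""
    def __init__(self, size):
        self.size = size
    def reset(self):
        return 0
    def step(self, state, action):
        return max(0, min(self.size - 1, state + action))

class QLearningAgent:
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    def __init__(self, n_states, actions, alpha=0.5, gamma=0.9, eps=0.1):
        self.q = {(s, a): 0.0 for s in range(n_states) for a in actions}
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps
    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])
    def learn(self, s, a, r, s2):
        best = max(self.q[(s2, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

def train(agent, scenario, task, episodes=200, max_steps=50):
    """Combine any agent, scenario, and task into one training run."""
    for _ in range(episodes):
        s = scenario.reset()
        for _ in range(max_steps):
            a = agent.act(s)
            s2 = scenario.step(s, a)
            agent.learn(s, a, task.reward(s2), s2)
            s = s2
            if task.done(s):
                break
    return agent

random.seed(0)
trained = train(QLearningAgent(5, [-1, 1]), GridScenario(5), GridTask(goal=4))
```

Because `train` only touches the three objects through their small interfaces, any task can be paired with any scenario and agent, which is the kind of mix-and-match composition the abstract attributes to the toolkit's APIs.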
Pages: 317-334
Page count: 18