Developing and Testing a New Reinforcement Learning Toolkit with Unreal Engine

Cited by: 2
Authors
Sapio, Francesco [1]
Ratini, Riccardo [1]
Institution
[1] Sapienza Univ Rome, Rome, Italy
Keywords
Evaluation methods and techniques; Unreal Engine; Reinforcement Learning
DOI
10.1007/978-3-031-05643-7_21
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this work we tried to overcome the main limitations of current state-of-the-art RL development and benchmarking platforms, namely the lack of a user interface, a closed-box approach to scenarios, the lack of realistic environments, and the difficulty of transferring the obtained results to real applications, by introducing a new development framework for reinforcement learning built on the graphics engine Unreal Engine 4. The Unreal Reinforcement Learning Toolkit (URLT) was developed to be modular, flexible, and easy to use even for non-expert users. To this end, we developed flexible and modular APIs through which the major learning techniques can be set up. Using these APIs, users can define all the elements of an RL problem, such as agents, algorithms, tasks, and scenarios, and combine them with each other to obtain ever new solutions. By taking advantage of the editor's UI, users can select and execute existing scenarios and change the parameters of agents and tasks without recompiling the code. Users can also create new scenarios from scratch using an intuitive level editor. Furthermore, task design is made accessible to non-expert users through Blueprint, a node-oriented visual programming system. To validate the tool, we produced a starter pack containing a suite of state-of-the-art RL algorithms, some example scenarios, a small library of props, and a couple of trainable agents. Moreover, we ran an evaluation test in which users were asked to try URLT and a competing tool (OpenAI Gym) and then to evaluate both through a questionnaire. The results showed a general preference for URLT on all key parameters of the test.
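The abstract describes URLT's modular architecture, in which agents, algorithms, tasks, and scenarios are defined separately and freely recombined. The following Python sketch illustrates only that compositional idea; it is not the actual URLT API (which lives inside Unreal Engine 4 and is exposed through C++ and Blueprint), and every name in it (Scenario, Task, QLearningAgent, run_episode, and their parameters) is a hypothetical placeholder chosen for illustration.

# Hypothetical sketch of a modular RL composition (not the real URLT API):
# a Scenario provides states and transitions, a Task defines rewards and
# termination, an Agent learns a policy, and a small trainer wires them together.
import random
from collections import defaultdict

class Scenario:
    """A tiny 1-D corridor: states 0..size-1, actions -1/+1."""
    def __init__(self, size=6):
        self.size = size
    def reset(self):
        return 0
    def step(self, state, action):
        return max(0, min(self.size - 1, state + action))

class Task:
    """Reach the last cell of the corridor."""
    def reward(self, scenario, state):
        return 1.0 if state == scenario.size - 1 else -0.01
    def done(self, scenario, state):
        return state == scenario.size - 1

class QLearningAgent:
    """Tabular epsilon-greedy Q-learning agent."""
    def __init__(self, actions=(-1, 1), alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon
    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])
    def learn(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def run_episode(scenario, task, agent, max_steps=50):
    """Combine any scenario, task, and agent into one training episode."""
    state, total = scenario.reset(), 0.0
    for _ in range(max_steps):
        action = agent.act(state)
        next_state = scenario.step(state, action)
        reward = task.reward(scenario, next_state)
        agent.learn(state, action, reward, next_state)
        total += reward
        state = next_state
        if task.done(scenario, state):
            break
    return total

if __name__ == "__main__":
    scenario, task, agent = Scenario(), Task(), QLearningAgent()
    for episode in range(200):
        ret = run_episode(scenario, task, agent)
    print("final episode return:", round(ret, 2))

Because the agent, task, and scenario interact only through this narrow interface, each piece can be swapped independently, which mirrors the mix-and-match workflow the abstract attributes to URLT.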
Pages: 317-334
Number of pages: 18