Effects of Input Addition in Learning for Adaptive Games: Towards Learning with Structural Changes

Cited by: 1
Authors
Bonnici, Iago [1 ]
Gouaich, Abdelkader [1 ]
Michel, Fabien [1 ]
Affiliations
[1] Univ Montpellier, CNRS, LIRMM, Montpellier, France
Keywords
Adaptive games; Reinforcement Learning; Transfer Learning; Recurrent networks
DOI
10.1007/978-3-030-16692-2_12
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adaptive Games (AG) involve a controller agent that continuously draws on player actions and game state to tweak a set of game parameters in order to maintain or achieve an objective such as the flow measure defined by Csikszentmihalyi. This can be framed as a Reinforcement Learning (RL) situation, so classical Machine Learning (ML) approaches can be used. On the other hand, many games naturally exhibit an incremental gameplay in which new actions and elements are introduced or removed progressively to support the player's learning curve or to introduce variety into the game. This makes the RL situation unusual because the controller agent's input/output signature can change over the course of learning. In this paper, we focus on this unusual "protean" learning (PL) situation. In particular, we assess how the learner can rely on its past shapes and experience to keep improving across signature changes without having to restart learning from scratch on each change. We first develop a rigorous formalization of the PL problem. Then, we address the first elementary signature change, "input addition", with Recurrent Neural Networks (RNNs) in an idealized PL situation. As a first result, we find that it is possible to benefit from prior learning in RNNs even if the past controller agent signature has fewer inputs. The use of PL in AG thus remains encouraged. Investigating output addition, input/output removal, and translating these results to generic PL will be part of future work.
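As a purely illustrative sketch of the "input addition" step described in the abstract (not the authors' code; the Elman-style cell, the plain NumPy implementation, and all names such as W_in and n_new are assumptions), one way to reuse prior learning is to keep every learned parameter and only widen the input weight matrix with small columns for the new observation channels, so the expanded controller initially behaves like the old one whenever those new channels are silent:

    import numpy as np

    rng = np.random.default_rng(0)

    # A small Elman-style recurrent cell "trained" with n_in inputs.
    n_in, n_hid = 3, 8
    W_in = rng.normal(scale=0.1, size=(n_hid, n_in))    # learned input weights
    W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))  # learned recurrent weights
    b = np.zeros(n_hid)                                 # learned bias

    def step(W_in, W_rec, b, x, h):
        # One recurrent step: h' = tanh(W_in x + W_rec h + b).
        return np.tanh(W_in @ x + W_rec @ h + b)

    # "Input addition": the game exposes n_new extra observation channels.
    # Keep all learned parameters and append small random columns for them.
    n_new = 2
    W_in_wide = np.hstack([W_in, rng.normal(scale=0.01, size=(n_hid, n_new))])

    h = np.zeros(n_hid)
    x_old = rng.normal(size=n_in)
    x_wide = np.concatenate([x_old, np.zeros(n_new)])   # new channels still silent

    h_before = step(W_in, W_rec, b, x_old, h)
    h_after = step(W_in_wide, W_rec, b, x_wide, h)
    print(np.allclose(h_before, h_after))               # True: prior behaviour kept

The check only shows that widening the input matrix preserves the previous behaviour; whether such a warm start then lets RL training keep improving after the signature change is what the paper evaluates empirically.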
Pages: 172-184
Page count: 13
Related Papers
50 records in total
  • [1] Input addition and deletion in reinforcement: towards protean learning
    Bonnici, Iago
    Gouaich, Abdelkader
    Michel, Fabien
    AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2022, 36 (01)
  • [2] Adaptive learning in weighted network games
    Bayer, Peter
    Herings, P. Jean-Jacques
    Peeters, Ronald
    Thuijsman, Frank
    JOURNAL OF ECONOMIC DYNAMICS & CONTROL, 2019, 105: 250-264
  • [3] Learning & Retention in Adaptive Serious Games
    Bergeron, Bryan P.
    MEDICINE MEETS VIRTUAL REALITY 16: PARALLEL, COMBINATORIAL, CONVERGENT: NEXTMED BY DESIGN, 2008, 132: 26-30
  • [4] Adaptive Learning in Imperfect Monitoring Games
    Gilli, Mario
    REVIEW OF ECONOMIC DYNAMICS, 1999, 2 (02): 472-485
  • [5] Input perturbations for adaptive control and learning
    Faradonbeh, Mohamad Kazem Shirani
    Tewari, Ambuj
    Michailidis, George
    AUTOMATICA, 2020, 117
  • [6] Towards adaptive learning designs
    Berlanga, A
    García, FJ
    ADAPTIVE HYPERMEDIA AND ADAPTIVE WEB-BASED SYSTEMS, PROCEEDINGS, 2004, 3137: 372-375
  • [7] Learning models for the integration of adaptive educational games in virtual learning environments
    Torrente, Javier
    Moreno-Ger, Pablo
    Fernandez-Manjon, Baltasar
    TECHNOLOGIES FOR E-LEARNING AND DIGITAL ENTERTAINMENT, PROCEEDINGS, 2008, 5093: 463-474
  • [8] Contributions of Serious Games on Adaptive Learning Systems
    El-Ghouli, Lotfi
    Khoukhi, Faddoul
    2016 11TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS: THEORIES AND APPLICATIONS (SITA), 2016
  • [9] Adaptive learning and emergent coordination in minority games
    Bottazzi, G
    Devetag, G
    Dosi, G
    SIMULATION MODELLING PRACTICE AND THEORY, 2002, 10 (5-7): 321-347