Basic Research on Transfer Learning Indicators for Reinforcement Learning

Cited by: 0
Authors
Sugikawa, Satoshi [1 ]
Takeoka, Kenta [1 ]
Kotani, Naoki [1 ]
Affiliation
[1] Osaka Inst Technol, 1-79-1 Kitayama, Hirakata, Osaka 5730196, Japan
Source
JOURNAL OF ROBOTICS NETWORKING AND ARTIFICIAL LIFE | 2023, Vol. 10, No. 03
Keywords
Reinforcement learning; Transfer learning; Maze problems;
DOI
Not available
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject Classification Codes
080202; 1405
Abstract
Reinforcement learning requires considerable time for an agent to learn. Transfer learning methods can shorten this learning time, but they have the disadvantage that it is not known which knowledge is effective in which kind of environment until learning has actually been carried out. When transferring knowledge, users therefore need to investigate the relationship between the transfer source and the transfer destination. This study proposes an adaptive criteria evaluation index that can determine this relationship in advance. In simulation, we confirmed the effectiveness of the proposed method on several example problems. (c) 2022 The Author. Published by Sugisaka Masanori at ALife Robotics Corporation Ltd. This is an open access article distributed under the CC BY-NC 4.0 license.
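The abstract does not spell out how the adaptive criteria evaluation index is computed, so the following is only a minimal sketch of the general idea on a maze problem: a source Q-table is transferred to a target maze, and a hypothetical pre-transfer similarity measure (here, simple cell-by-cell agreement of the wall layouts) stands in for the paper's index. The function names, the maze layouts, and the similarity measure are illustrative assumptions, not the authors' method.

# Illustrative sketch only; the grid-overlap similarity below is a hypothetical
# stand-in for the paper's adaptive criteria evaluation index.
import numpy as np

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def maze_similarity(src_walls, dst_walls):
    """Hypothetical pre-transfer index: fraction of cells whose wall/free
    status agrees between the source and target mazes (boolean HxW grids)."""
    return float((src_walls == dst_walls).mean())

def q_learning(walls, goal, q_init=None, episodes=300, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a grid maze; q_init lets us transfer a source Q-table."""
    rng = np.random.default_rng(seed)
    h, w = walls.shape
    q = np.zeros((h, w, len(ACTIONS))) if q_init is None else q_init.copy()
    steps_per_episode = []
    for _ in range(episodes):
        r, c = 0, 0  # fixed start in the top-left corner
        steps = 0
        while (r, c) != goal and steps < 200:
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q[r, c]))
            dr, dc = ACTIONS[a]
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w) or walls[nr, nc]:
                nr, nc = r, c  # bumping into a wall or the border keeps the agent in place
            reward = 1.0 if (nr, nc) == goal else -0.01
            q[r, c, a] += alpha * (reward + gamma * np.max(q[nr, nc]) - q[r, c, a])
            r, c = nr, nc
            steps += 1
        steps_per_episode.append(steps)
    return q, steps_per_episode

if __name__ == "__main__":
    h, w = 6, 6
    src_walls = np.zeros((h, w), dtype=bool)
    src_walls[2, 1:5] = True                  # source maze: one horizontal wall
    dst_walls = src_walls.copy()
    dst_walls[4, 2] = True                    # target maze: slightly different layout
    goal = (h - 1, w - 1)

    print("pre-transfer similarity index:", maze_similarity(src_walls, dst_walls))
    q_src, _ = q_learning(src_walls, goal)                    # learn in the source maze
    _, scratch = q_learning(dst_walls, goal)                  # target maze, from scratch
    _, transfer = q_learning(dst_walls, goal, q_init=q_src)   # target maze, with transfer
    print("mean steps (last 50 episodes), scratch :", np.mean(scratch[-50:]))
    print("mean steps (last 50 episodes), transfer:", np.mean(transfer[-50:]))

In this sketch a high similarity score would suggest the transferred Q-table is likely to speed up learning in the target maze, which is the kind of advance judgment the proposed index is meant to provide; the actual criterion used in the paper may differ.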
Pages: 261 - 265
Number of pages: 5
Related Papers
50 records in total
  • [21] An Introduction to Intertask Transfer for Reinforcement Learning
    Taylor, Matthew E.
    Stone, Peter
    AI MAGAZINE, 2011, 32 (01) : 15 - 34
  • [22] Compositional Transfer in Hierarchical Reinforcement Learning
    Wulfmeier, Markus
    Abdolmaleki, Abbas
    Hafner, Roland
    Springenberg, Jost Tobias
    Neunert, Michael
    Hertweck, Tim
    Lampe, Thomas
    Siegel, Noah
    Heess, Nicolas
    Riedmiller, Martin
    ROBOTICS: SCIENCE AND SYSTEMS XVI, 2020,
  • [23] Transfer of reinforcement learning: the state of the art
    State Key Laboratory of Novel Software Technology, Nanjing University, Nanjing 210093, China
    TIEN TZU HSUEH PAO, 2008, SUPPL.: 39 - 43
  • [24] Automated Transfer for Reinforcement Learning Tasks
    Ammar, Haitham Bou
    Chen, Siqi
    Tuyls, Karl
    Weiss, Gerhard
    KUNSTLICHE INTELLIGENZ, 2014, 28 (01): 7 - 14
  • [25] Disentangling Transfer in Continual Reinforcement Learning
    Wolczyk, Maciej
    Zajac, Michal
    Pascanu, Razvan
    Kucinski, Lukasz
    Milos, Piotr
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [26] Successor Features for Transfer in Reinforcement Learning
    Barreto, Andre
    Dabney, Will
    Munos, Remi
    Hunt, Jonathan J.
    Schaul, Tom
    van Hasselt, Hado
    Silver, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [27] Relational macros for transfer in reinforcement learning
    Torrey, Lisa
    Shavlik, Jude
    Walker, Trevor
    Maclin, Richard
    INDUCTIVE LOGIC PROGRAMMING, 2008, 4894 : 254 - +
  • [28] Feature transfer learning by reinforcement learning for detecting software defect
    Guo, Shikai
    Wang, Jiahui
    Xu, Zhihao
    Huang, Lin
    Li, Hui
    Chen, Rong
    SOFTWARE-PRACTICE & EXPERIENCE, 2023, 53 (02): 366 - 389
  • [29] Missile aerodynamic design using reinforcement learning and transfer learning
    Yan, Xinghui
    Zhu, Jihong
    Kuang, Minchi
    Wang, Xiangyang
    SCIENCE CHINA-INFORMATION SCIENCES, 2018, 61 (11): 253 - 255
  • [30] Aggregation Transfer Learning for Multi-Agent Reinforcement Learning
    Xu, Dongsheng
    Qiao, Peng
    Dou, Yong
    2021 2ND INTERNATIONAL CONFERENCE ON BIG DATA & ARTIFICIAL INTELLIGENCE & SOFTWARE ENGINEERING (ICBASE 2021), 2021, : 547 - 551