Knowledge Transfer using Model-Based Deep Reinforcement Learning

Cited by: 1
Authors
Boloka, Tlou [1 ]
Makondo, Ndivhuwo [2 ]
Rosman, Benjamin [3 ]
Affiliations
[1] CSIR, Ind Robot, Pretoria, South Africa
[2] Univ Witwatersrand, Comp Sci & Appl Math, IBM Res Africa, Johannesburg, South Africa
[3] Univ Witwatersrand, Comp Sci & Appl Math, Johannesburg, South Africa
DOI
10.1109/SAUPEC/RobMech/PRASA52254.2021.9377247
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep reinforcement learning has recently been adopted for robot behavior learning, where robot skills are acquired and adapted from data generated by the robot as it interacts with its environment through trial and error. Despite this success, most model-free deep reinforcement learning algorithms learn a task-specific policy from scratch and thus suffer from high sample complexity (i.e., they require a significant amount of interaction with the environment to learn reasonable policies, and even more to reach convergence). They also suffer from poor initial performance, because a randomly initialized policy must be executed in the early stages of learning to collect the experience used to train the policy or value function. Model-based deep reinforcement learning mitigates these shortcomings, but its asymptotic performance is poorer than that of model-free approaches. In this work, we investigate knowledge transfer from a model-based teacher to a task-specific model-free learner, so that the learner avoids executing a randomly initialized policy in the early stages of learning. Our experiments show that this approach yields better asymptotic performance, improved initial performance, improved safety, better action effectiveness, and reduced sample complexity.
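The transfer scheme the abstract describes (a model-based teacher supplying behavior so the model-free learner need not start from a random policy) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy one-dimensional dynamics, the random-shooting MPC teacher, and all function names (`mpc_teacher`, `clone_student`, and so on) are assumptions made for this example. The teacher plans with its dynamics model, and its actions are distilled into a simple linear student policy by least-squares behavior cloning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): the state is a scalar position
# and the goal is the origin, so the cost of a state s is s**2.

def dynamics_model(s, a):
    """The teacher's (here: given, linear) dynamics model."""
    return 0.9 * s + 0.5 * a

def mpc_teacher(s, horizon=5, n_candidates=64):
    """Model-based teacher: random-shooting MPC over the dynamics model.

    Samples candidate action sequences, simulates each one with the
    model, and returns the first action of the cheapest sequence.
    """
    best_a, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        plan = rng.uniform(-1.0, 1.0, size=horizon)
        sim, cost = s, 0.0
        for a in plan:
            sim = dynamics_model(sim, a)
            cost += sim ** 2  # quadratic distance-to-goal cost
        if cost < best_cost:
            best_a, best_cost = plan[0], cost
    return best_a

def collect_demonstrations(n=200):
    """Query the teacher across states to gather (state, action) pairs."""
    states = rng.uniform(-2.0, 2.0, size=n)
    actions = np.array([mpc_teacher(s) for s in states])
    return states, actions

def clone_student(states, actions):
    """Behavior-clone a linear student policy a = w * s via least squares."""
    return np.dot(states, actions) / np.dot(states, states)

states, actions = collect_demonstrations()
w = clone_student(states, actions)
print(w)  # learned gain; expected negative, steering the state toward 0
```

After this warm start the student already acts sensibly from the first step, instead of executing a random policy; in the paper's setting the student would then continue improving with model-free updates, which this sketch omits.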
Pages: 6