Efficient model-based reinforcement learning for approximate online optimal control

Cited by: 61
Authors
Kamalapurkar, Rushikesh [1 ]
Rosenfeld, Joel A. [2 ]
Dixon, Warren E. [2 ]
Affiliations
[1] Oklahoma State Univ, Sch Mech & Aerosp Engn, Stillwater, OK 74078 USA
[2] Univ Florida, Dept Mech & Aerosp Engn, Gainesville, FL USA
Funding
National Science Foundation (USA)
Keywords
Model-based reinforcement learning; Data-based control; Adaptive control; Local approximation; DISCRETE-TIME-SYSTEMS; ADAPTIVE OPTIMAL-CONTROL; NONLINEAR-SYSTEMS; NETWORK; DYNAMICS;
DOI
10.1016/j.automatica.2016.08.004
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
An infinite horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using a state following (StaF) kernel method to approximate the value function. Unlike traditional methods that aim to approximate a function over a large compact set, the StaF kernel method aims to approximate a function in a small neighborhood of a state that travels within a compact set. Simulation results demonstrate that stability and approximate optimality of the control system can be achieved with significantly fewer basis functions than may be required for global approximation methods. (C) 2016 Elsevier Ltd. All rights reserved.
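To make the local-approximation idea in the abstract concrete, the following Python sketch evaluates a value-function estimate using a handful of kernels whose centers follow the current state rather than tiling the whole compact set. It is an illustration only, not the paper's implementation: it assumes Gaussian kernels, and the function names, offsets, width, and weights are hypothetical placeholders (the article defines the actual StaF kernels and the online weight-update laws).

```python
import numpy as np

# Minimal sketch of a state-following (StaF) style local value-function
# approximation, assuming Gaussian kernels. NOT the authors' implementation;
# all names, offsets, widths, and weights are hypothetical placeholders.

def staf_features(y, x, offsets, width=0.5):
    """Gaussian kernels evaluated at y, with centers x + offset that follow the state x."""
    centers = x + offsets                          # centers travel with the current state
    sq_dist = np.sum((y - centers) ** 2, axis=1)   # squared distance to each moving center
    return np.exp(-sq_dist / (2.0 * width ** 2))

def value_estimate(y, x, weights, offsets):
    """Local approximation V(y) ~= w^T phi(y; x), intended only for y near x."""
    return weights @ staf_features(y, x, offsets)

# Example usage: a 2-D state with three kernels clustered in a small
# neighborhood of the current state, instead of covering the whole compact set.
x = np.array([0.3, -0.1])                          # current state
offsets = 0.2 * np.array([[1.0, 0.0],
                          [-0.5, 0.9],
                          [-0.5, -0.9]])           # small state-following neighborhood
weights = np.array([0.1, -0.2, 0.05])              # in practice these are adapted online
y = x + np.array([0.05, 0.02])                     # a nearby point at which V is evaluated
print(value_estimate(y, x, weights, offsets))
```

Because the kernel centers move with the state, only enough basis functions to cover a small neighborhood are ever needed at once, which is the source of the reduction in basis-function count reported in the abstract.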
Pages: 247-258
Number of pages: 12
Related papers
50 records in total
  • [31] Approximate Optimal Stabilization Control of Servo Mechanisms based on Reinforcement Learning Scheme
    Lv, Yongfeng
    Ren, Xuemei
    Hu, Shuangyi
    Xu, Hao
    International Journal of Control, Automation and Systems, 2019, 17(10): 2655-2665
  • [33] Cognitive Control Predicts Use of Model-based Reinforcement Learning
    Otto, A. Ross
    Skatova, Anya
    Madlon-Kay, Seth
    Daw, Nathaniel D.
    Journal of Cognitive Neuroscience, 2015, 27(2): 319-333
  • [34] Model-based hierarchical reinforcement learning and human action control
    Botvinick, Matthew
    Weinstein, Ari
    Philosophical Transactions of the Royal Society B: Biological Sciences, 2014, 369(1655)
  • [35] Model-based Reinforcement Learning for Continuous Control with Posterior Sampling
    Fan, Ying
    Ming, Yifei
    International Conference on Machine Learning (ICML), 2021, Vol. 139
  • [36] Adaptive optics control using model-based reinforcement learning
    Nousiainen, Jalo
    Rajani, Chang
    Kasper, Markus
    Helin, Tapio
    Optics Express, 2021, 29(10): 15327-15344
  • [37] Advances in model-based reinforcement learning for Adaptive Optics control
    Nousiainen, Jalo
    Engler, Byron
    Kasper, Markus
    Helin, Tapio
    Heritier, Cedric T.
    Rajani, Chang
    Adaptive Optics Systems VIII, 2022, Vol. 12185
  • [38] Efficient Neural Network Pruning Using Model-Based Reinforcement Learning
    Bencsik, Blanka
    Szemenyei, Marton
    2022 International Symposium on Measurement and Control in Robotics (ISMCR), 2022: 130-137
  • [39] Efficient state synchronisation in model-based testing through reinforcement learning
    Turker, Uraz Cengiz
    Hierons, Robert M.
    Mousavi, Mohammad Reza
    Tyukin, Ivan Y.
    2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021), 2021: 368-380
  • [40] Efficient Exploration in Continuous-time Model-based Reinforcement Learning
    Treven, Lenart
    Hübotter, Jonas
    Sukhija, Bhavya
    Dörfler, Florian
    Krause, Andreas
    Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023