Structured Online Learning-based Control of Continuous-time Nonlinear Systems

Cited by: 4
Authors
Farsi, Milad [1]
Liu, Jun [1]
Affiliations
[1] Univ Waterloo, Appl Math Dept, Waterloo, ON, Canada
Source
IFAC PAPERSONLINE | 2020, Vol. 53, No. 2
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Reinforcement learning; Model-based learning; Optimal control; Feedback control; Continuous-time control; Adaptive dynamic programming; Sparse identification;
DOI
10.1016/j.ifacol.2020.12.2299
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Model-based reinforcement learning techniques accelerate the learning task by employing a transition model to make predictions. In this paper, a model-based learning approach is presented that iteratively computes the optimal value function based on the most recent update of the model. Assuming a structured continuous-time model of the system in terms of a set of bases, we formulate an infinite-horizon optimal control problem for a given control objective. The structure of the system, together with a value function parameterized in quadratic form, provides the flexibility to analytically derive an update rule for the parameters. Hence, a matrix differential equation for the parameters is obtained, whose solution characterizes the optimal feedback control in terms of the bases at any time step. Moreover, the quadratic form of the value function suggests a compact way of updating the parameters that considerably decreases the computational complexity. Because the differential equation is state-dependent, the resulting framework is exploited as an online learning-based algorithm. In the numerical results, the presented algorithm is applied to four nonlinear benchmark examples; the regulation problem is successfully solved while an identified model of the system is obtained with bounded prediction error. Copyright (C) 2020 The Authors.
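To make the workflow in the abstract concrete, the following is a minimal sketch of the general idea for a scalar toy system: basis-function weights of the drift are identified online (here by recursive least squares, standing in for the paper's sparse identification step), while a quadratic value-function parameter is integrated forward through a state-dependent Riccati-style equation and used directly for feedback. The plant, the basis dictionary Phi(x) = [x, x^3], the cost weights Q and R, and the scalar Riccati-style update are illustrative assumptions, not the authors' exact matrix differential equation.

import numpy as np

# Toy plant (assumed; unknown to the learner): x_dot = x - x^3 + u
def plant(x, u):
    return x - x**3 + u

# Chosen basis dictionary Phi(x) = [x, x^3] (an assumption)
def phi(x):
    return np.array([x, x**3])

def dphi_dx(x):                      # Jacobian of the dictionary w.r.t. x
    return np.array([1.0, 3.0 * x**2])

dt, steps = 1e-3, 8000
Q, R, B = 1.0, 0.1, 1.0              # running-cost weights and input channel (assumed)
W = np.zeros(2)                      # unknown drift weights, identified online
S = 1e3 * np.eye(2)                  # RLS covariance
P = 1.0                              # quadratic value parameter (scalar state here)
x = 1.5                              # initial condition

for _ in range(steps):
    u = -(B * P * x) / R             # feedback from the current value parameter
    x_dot = plant(x, u)              # measured state derivative (assumed available)

    # Recursive least squares on the drift: x_dot - B*u ~ W @ Phi(x)
    p_vec = phi(x)
    gain = S @ p_vec / (1.0 + p_vec @ S @ p_vec)
    W = W + gain * ((x_dot - B * u) - W @ p_vec)
    S = S - np.outer(gain, p_vec @ S)

    # State-dependent linearization of the identified drift, then a forward
    # Riccati-style step for P (a stand-in for the paper's matrix update)
    A = dphi_dx(x) @ W
    P_dot = 2.0 * A * P - (B * P) ** 2 / R + Q
    P = P + dt * P_dot

    x = x + dt * x_dot               # Euler step of the true plant

print("identified drift weights:", W)          # should approach roughly [1, -1]
print("final state x:", x, "| value parameter P:", P)

In this sketch the forward Riccati-style scalar step plays the role of the state-dependent matrix differential equation described in the abstract; in the paper the update acts on a full parameter matrix over the basis vector rather than a scalar.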
Pages: 8142-8149
Number of pages: 8