Reinforcement learning-based optimal control of unknown constrained-input nonlinear systems using simulated experience

Cited by: 0
Authors
Asl, Hamed Jabbari [1 ]
Uchibe, Eiji [1 ]
Affiliations
[1] ATR Computat Neurosci Labs, Dept Brain Robot Interface, 2-2-2 Hikaridai, Seika, Kyoto 6190288, Japan
Keywords
Optimal control; Reinforcement learning; Input constraints; Uncertainty; APPROXIMATE OPTIMAL-CONTROL; TRACKING CONTROL; CONTINUOUS-TIME;
DOI
10.1007/s11071-023-08688-0
Chinese Library Classification (CLC)
TH [Machinery and Instrument Industry];
Subject Classification Code
0802;
Abstract
Reinforcement learning (RL) provides a way to approximately solve optimal control problems. However, an online solution to such problems requires a method that guarantees convergence to the optimal policy while also keeping the system stable during learning. In this study, we develop an online RL-based optimal control framework for input-constrained nonlinear systems. The design includes two new model identifiers that learn the system's drift dynamics: a slow identifier used to simulate experience, which supports convergence to the solution of the optimal control problem, and a fast identifier that keeps the system stable during the learning phase. The approach is a critic-only design, in which a new fast estimation law is developed for the critic network. A Lyapunov-based analysis shows that the estimated control policy converges to the optimal one, and simulation studies demonstrate the effectiveness of the developed control scheme.
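The record gives no implementation details, so the following is only a minimal sketch of the general ingredients the abstract mentions: a saturated policy for constrained inputs, an identifier that learns the drift dynamics, and critic updates driven by Bellman errors evaluated at extrapolated states ("simulated experience"). Everything below — the scalar plant, basis functions, gains, and the non-quadratic input cost — is an illustrative assumption drawn from standard constrained-input optimal-control formulations, not the authors' algorithm (which uses two identifiers and a specific fast critic estimation law).

```python
# Minimal sketch, under stated assumptions: critic-only RL where Bellman errors
# are evaluated at the measured state and at extrapolated sample states through
# an identified drift model (simulated experience). Illustrative only.
import numpy as np

u_max, R = 1.0, 1.0            # input bound and control weight
alpha_c, alpha_id = 2.0, 5.0   # critic and identifier learning rates
dt = 1e-3

def f_true(x):                 # unknown drift dynamics (used only to simulate the plant)
    return -x + 0.25 * x**3

def g(x):                      # known input-gain function
    return 1.0

def sigma(x):                  # identifier regressor: f_hat(x) = theta @ sigma(x)
    return np.array([x, x**3])

def dphi(x):                   # gradient of the critic basis phi(x) = [x^2, x^4]
    return np.array([2.0 * x, 4.0 * x**3])

def policy(x, W):
    # Saturated policy structure commonly used for input-constrained problems:
    # u = -u_max * tanh( g(x) * dV/dx / (2 * u_max * R) ), with V ~ W @ phi(x)
    return -u_max * np.tanh(g(x) * (dphi(x) @ W) / (2.0 * u_max * R))

def running_cost(x, u):
    # Quadratic state cost plus the standard non-quadratic penalty for bounded inputs
    v = np.clip(u / u_max, -0.999999, 0.999999)
    return x**2 + 2.0 * u_max * R * u * np.arctanh(v) + u_max**2 * R * np.log(1.0 - v**2)

def bellman_residual(x, W, theta):
    u = policy(x, W)
    x_dot_hat = theta @ sigma(x) + g(x) * u    # model-predicted state derivative
    omega = dphi(x) * x_dot_hat                # critic regressor
    return omega @ W + running_cost(x, u), omega

W = np.array([1.0, 1.0])                       # critic weights
theta = np.zeros(2)                            # identifier parameters
x = 1.0                                        # plant state
samples = np.linspace(-2.0, 2.0, 9)            # states used as simulated experience

for _ in range(50_000):
    u = policy(x, W)
    x_dot = f_true(x) + g(x) * u               # measured derivative from the plant

    # identifier update: normalized gradient descent on the prediction error
    e = x_dot - (theta @ sigma(x) + g(x) * u)
    theta += dt * alpha_id * e * sigma(x) / (1.0 + sigma(x) @ sigma(x))

    # critic update: Bellman residual at the current state plus at sampled states
    grad = np.zeros_like(W)
    for xs in np.append(samples, x):
        delta, omega = bellman_residual(xs, W, theta)
        grad += delta * omega / (1.0 + omega @ omega) ** 2
    W -= dt * alpha_c * grad / (samples.size + 1)

    x += dt * x_dot                            # Euler step of the true plant

print("critic weights:", W, "identifier parameters:", theta)
```

The hypothetical `samples` grid is what plays the role of simulated experience here: the identified model lets the critic be trained at states the plant never visits, relaxing the persistence-of-excitation requirement of purely on-trajectory learning.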
Pages: 16093-16110
Page count: 18
Related Papers
50 records in total
  • [1] Reinforcement learning-based optimal control of unknown constrained-input nonlinear systems using simulated experience
    Hamed Jabbari Asl
    Eiji Uchibe
    Nonlinear Dynamics, 2023, 111 : 16093 - 16110
  • [2] Reinforcement Learning-Based Nearly Optimal Control for Constrained-Input Partially Unknown Systems Using Differentiator
    Guo, Xinxin
    Yan, Weisheng
    Cui, Rongxin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (11) : 4713 - 4725
  • [3] Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning
    Modares, Hamidreza
    Lewis, Frank L.
    AUTOMATICA, 2014, 50 (07) : 1780 - 1792
  • [4] Optimal Output Feedback Control of Nonlinear Partially-Unknown Constrained-Input Systems Using Integral Reinforcement Learning
    Ren, Ling
    Zhang, Guoshan
    Mu, Chaoxu
    NEURAL PROCESSING LETTERS, 2019, 50 (03) : 2963 - 2989
  • [6] Integral reinforcement learning based decentralized optimal tracking control of unknown nonlinear large-scale interconnected systems with constrained-input
    Liu, Chong
    Zhang, Huaguang
    Xiao, Geyang
    Sun, Shaoxin
    NEUROCOMPUTING, 2019, 323 : 1 - 11
  • [7] Optimal bounded policy for nonlinear tracking control of unknown constrained-input systems
    Sabahi, Farnaz
    TRANSACTIONS OF THE INSTITUTE OF MEASUREMENT AND CONTROL, 2025, 47 (03) : 585 - 598
  • [8] Event-triggered-based integral reinforcement learning output feedback optimal control for partially unknown constrained-input nonlinear systems
    Zou, Haoming
    Zhang, Guoshan
    ASIAN JOURNAL OF CONTROL, 2023, 25 (05) : 3843 - 3858
  • [9] Model-free Nearly Optimal Control of Constrained-Input Nonlinear Systems Based on Synchronous Reinforcement Learning
    Zhao, Han
    Guo, Lei
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 2162 - 2167
  • [10] Event-trigger-based robust control for nonlinear constrained-input systems using reinforcement learning method
    Yang, Dongsheng
    Li, Ting
    Zhang, Huaguang
    Xie, Xiangpeng
    NEUROCOMPUTING, 2019, 340 : 158 - 170