Optimal synchronized control of nonlinear coupled harmonic oscillators based on actor–critic reinforcement learning

Cited: 0
Authors
Zhiyang Gu
Chengli Fan
Dengxiu Yu
Zhen Wang
Affiliations
[1] Northwestern Polytechnical University,School of Automation
[2] Air Force Engineering University,Air and Missile Defense College
[3] Northwestern Polytechnical University,Unmanned System Research Institute
[4] Northwestern Polytechnical University,Center for Optical Imagery Analysis and Learning
Source
Nonlinear Dynamics | 2023, Vol. 111
Keywords
Coupled harmonic oscillator; Reinforcement learning; Backstepping control; Synchronization; Nonlinear dynamics
DOI: not available
Abstract
A distributed optimal control algorithm based on an adaptive neural network is proposed for the synchronized control of a class of second-order nonlinear coupled harmonic oscillators. First, graph theory is used to model the coupling relationships among the harmonic oscillators. Second, a neural network approximates the unknown nonlinearity in each oscillator model, and the virtual and actual controllers are designed via the backstepping method. Then, a cost function and the associated Hamilton–Jacobi–Bellman (HJB) equation are constructed from the state errors and the controllers. Since the HJB equation cannot be solved analytically, a critic neural network approximates its solution. Together, these two neural networks constitute a simplified actor–critic reinforcement learning scheme that achieves optimal consensus control of the nonlinear coupled harmonic oscillators. Finally, the stability and effectiveness of the scheme are verified by the Lyapunov stability theorem and numerical simulation, respectively.
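To make the graph-coupled setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm, which uses backstepping and actor–critic neural networks): three second-order harmonic oscillators x_i'' = -w^2 x_i + u_i coupled over an undirected graph, driven to synchronization by a simple Laplacian consensus controller u_i = -k * sum_j a_ij[(x_i - x_j) + (v_i - v_j)]. All numerical values (w, k, dt, the adjacency matrix A, initial states) are assumptions chosen for demonstration.

```python
# Consensus synchronization of three coupled harmonic oscillators
# (illustrative sketch; parameter values are assumptions).
import numpy as np

w, k, dt, steps = 1.0, 2.0, 0.01, 5000   # frequency, gain, step, horizon
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # path graph 1-2-3 (adjacency)
deg = A.sum(axis=1)                      # node degrees

x = np.array([1.0, -0.5, 0.3])           # initial positions
v = np.array([0.0, 0.5, -0.2])           # initial velocities

for _ in range(steps):
    # Laplacian coupling on positions and velocities: L z = deg*z - A z
    u = -k * ((deg * x - A @ x) + (deg * v - A @ v))
    a = -w**2 * x + u                    # oscillator dynamics + control
    v = v + dt * a                       # semi-implicit Euler integration
    x = x + dt * v

spread = np.ptp(x)                       # position disagreement after 50 s
print(f"final position spread: {spread:.6f}")
```

The disagreement dynamics decompose along the nonzero Laplacian eigenvalues into damped second-order modes, so the oscillators converge to a common (undamped) oscillation while their mutual differences decay to zero.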
Pages: 21051–21064
Page count: 13
Related papers (50 total)
  • [1] Optimal synchronized control of nonlinear coupled harmonic oscillators based on actor-critic reinforcement learning
    Gu, Zhiyang
    Fan, Chengli
    Yu, Dengxiu
    Wang, Zhen
    NONLINEAR DYNAMICS, 2023, 111 (22) : 21051 - 21064
  • [2] Actor-Critic based Improper Reinforcement Learning
    Zaki, Mohammadi
    Mohan, Avinash
    Gopalan, Aditya
    Mannor, Shie
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [3] Actor-Critic Reinforcement Learning for Tracking Control in Robotics
    Pane, Yudha P.
    Nageshrao, Subramanya P.
    Babuska, Robert
    2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC), 2016, : 5819 - 5826
  • [4] Actor Critic Deep Reinforcement Learning for Neural Malware Control
    Wang, Yu
    Stokes, Jack W.
    Marinescu, Mady
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 1005 - 1012
  • [5] Actor-Critic Reinforcement Learning for Control With Stability Guarantee
    Han, Minghao
    Zhang, Lixian
    Wang, Jun
    Pan, Wei
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (04) : 6217 - 6224
  • [6] Forward Actor-Critic for Nonlinear Function Approximation in Reinforcement Learning
    Veeriah, Vivek
    van Seijen, Harm
    Sutton, Richard S.
    AAMAS'17: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2017, : 556 - 564
  • [7] Optimized Adaptive Nonlinear Tracking Control Using Actor-Critic Reinforcement Learning Strategy
    Wen, Guoxing
    Chen, C. L. Philip
    Ge, Shuzhi Sam
    Yang, Hongli
    Liu, Xiaoguang
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2019, 15 (09) : 4969 - 4977
  • [8] Practical Critic Gradient based Actor Critic for On-Policy Reinforcement Learning
    Gurumurthy, Swaminathan
    Manchester, Zachary
    Kolter, J. Zico
    LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211, 2023, 211
  • [9] Adaptive Assist-as-needed Control Based on Actor-Critic Reinforcement Learning
    Zhang, Yufeng
    Li, Shuai
    Nolan, Karen J.
    Zanotto, Damiano
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 4066 - 4071
  • [10] Actor-Critic reinforcement learning based on prior knowledge
    Yang, Zhenyu