Optimal synchronized control of nonlinear coupled harmonic oscillators based on actor-critic reinforcement learning

Cited by: 3
Authors
Gu, Zhiyang [1 ]
Fan, Chengli [2 ]
Yu, Dengxiu [3 ]
Wang, Zhen [4 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
[2] Air Force Engn Univ, Air & Missile Def Coll, Xian, Shaanxi, Peoples R China
[3] Northwestern Polytech Univ, Unmanned Syst Res Inst, Xian 710072, Shaanxi, Peoples R China
[4] Northwestern Polytech Univ, Ctr Opt Imagery Anal & Learning, Xian 710072, Shaanxi, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Coupled harmonic oscillator; Reinforcement learning; Backstepping control; Synchronization; Nonlinear dynamics; SYSTEMS; TRANSITION;
DOI
10.1007/s11071-023-08957-y
Chinese Library Classification (CLC)
TH [Machinery and Instrument Industry];
Subject Classification Code
0802;
Abstract
A distributed optimal control algorithm based on adaptive neural networks is proposed for the synchronized control problem of a class of second-order nonlinear coupled harmonic oscillators. First, graph theory is used to establish the coupling relationship among the harmonic oscillator models; second, a neural network is used to approximate the unknown nonlinearity in each harmonic oscillator model, and the virtual and actual controllers are designed via the backstepping method; then, from the state errors and the controllers, the cost function and the Hamilton-Jacobi-Bellman (HJB) equation are constructed. Since the HJB equation cannot be solved analytically, a critic neural network is used to approximate its solution. Together, these two neural networks form a simplified actor-critic reinforcement learning scheme that achieves optimal consensus (synchronized) control of the nonlinear coupled harmonic oscillators. Finally, the stability and effectiveness of the scheme are verified by the Lyapunov stability theorem and numerical simulation, respectively.
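For intuition, the sketch below is a minimal, self-contained Python (NumPy) toy of the ingredients the abstract describes: a graph Laplacian that defines the consensus error among second-order coupled oscillators, an unknown nonlinearity, and an actor-critic pair trained on a quadratic cost. It is not the paper's backstepping/HJB design: the ring graph, gains, damping term, feature map, control clipping, and the temporal-difference/Gaussian-exploration updates are all assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Communication graph: an assumed ring of 4 oscillators (illustrative only).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
Lap = np.diag(A.sum(axis=1)) - A          # graph Laplacian defines the consensus errors
N, dt, k, d = 4, 0.01, 1.0, 0.1           # agents, step size, spring constant, assumed damping

def f_unknown(x):                         # stand-in for the unknown nonlinearity
    return 0.5 * np.sin(x)

def features(e, de):                      # simple polynomial features shared by actor and critic
    return np.array([e, de, e * de, e ** 2, de ** 2, 1.0])

n_feat = 6
W_c = np.zeros((N, n_feat))               # critic weights: value of the synchronization error
W_a = np.zeros((N, n_feat))               # actor weights: distributed state-feedback policy
a_c, a_a, gamma, sigma = 0.05, 0.005, 0.98, 0.1

x = rng.normal(0.0, 1.0, N)               # positions
v = np.zeros(N)                           # velocities

for step in range(20000):
    e, de = Lap @ x, Lap @ v              # position / velocity consensus errors
    phi = np.array([features(e[i], de[i]) for i in range(N)])
    noise = sigma * rng.normal(0.0, 1.0, N)
    u = np.clip(np.einsum('ij,ij->i', W_a, phi) + noise, -5.0, 5.0)  # exploring actor, clipped for numerical safety

    # Second-order coupled harmonic oscillator dynamics, explicit Euler step.
    acc = -k * x - d * v + f_unknown(x) - Lap @ x + u
    x_new, v_new = x + dt * v, v + dt * acc

    r = -(e ** 2 + de ** 2 + 0.1 * u ** 2)            # per-agent quadratic running cost
    phi_new = np.array([features((Lap @ x_new)[i], (Lap @ v_new)[i]) for i in range(N)])

    # TD error drives the critic (semi-gradient TD(0)) and the Gaussian policy-gradient actor.
    delta = (r + gamma * np.einsum('ij,ij->i', W_c, phi_new)
             - np.einsum('ij,ij->i', W_c, phi))
    W_c += a_c * delta[:, None] * phi
    W_a += a_a * delta[:, None] * noise[:, None] * phi

    x, v = x_new, v_new

print("max synchronization error:", np.abs(Lap @ x).max())

The printed Laplacian error indicates how closely the oscillators have synchronized under these assumed gains; the paper itself, by contrast, embeds the actor and critic in a backstepping design and establishes stability with the Lyapunov theorem, as the abstract states.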
Pages: 21051 - 21064
Number of pages: 14
Related papers
50 records in total
  • [1] Optimal synchronized control of nonlinear coupled harmonic oscillators based on actor–critic reinforcement learning
    Zhiyang Gu
    Chengli Fan
    Dengxiu Yu
    Zhen Wang
    Nonlinear Dynamics, 2023, 111 : 21051 - 21064
  • [2] Taming chimeras in coupled oscillators using soft actor-critic based reinforcement learning
    Ding, Jianpeng
    Lei, Youming
    Small, Michael
    CHAOS, 2025, 35 (01)
  • [3] Actor-Critic based Improper Reinforcement Learning
    Zaki, Mohammadi
    Mohan, Avinash
    Gopalan, Aditya
    Mannor, Shie
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [4] Actor-Critic Reinforcement Learning for Tracking Control in Robotics
    Pane, Yudha P.
    Nageshrao, Subramanya P.
    Babuska, Robert
    2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC), 2016, : 5819 - 5826
  • [5] Actor-Critic Reinforcement Learning for Control With Stability Guarantee
    Han, Minghao
    Zhang, Lixian
    Wang, Jun
    Pan, Wei
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (04) : 6217 - 6224
  • [6] Forward Actor-Critic for Nonlinear Function Approximation in Reinforcement Learning
    Veeriah, Vivek
    van Seijen, Harm
    Sutton, Richard S.
    AAMAS'17: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2017, : 556 - 564
  • [7] Actor-Critic reinforcement learning based on prior knowledge
    Yang, Zhenyu
    Transport and Telecommunication Institute, Riga, Latvia, (18)
  • [8] Adaptive Assist-as-needed Control Based on Actor-Critic Reinforcement Learning
    Zhang, Yufeng
    Li, Shuai
    Nolan, Karen J.
    Zanotto, Damiano
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 4066 - 4071
  • [9] Optimized Adaptive Nonlinear Tracking Control Using Actor-Critic Reinforcement Learning Strategy
    Wen, Guoxing
    Chen, C. L. Philip
    Ge, Shuzhi Sam
    Yang, Hongli
    Liu, Xiaoguang
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2019, 15 (09) : 4969 - 4977
  • [10] Actor-critic reinforcement learning for the feedback control of a swinging chain
    Dengler, C.
    Lohmann, B.
    IFAC PAPERSONLINE, 2018, 51 (13) : 378 - 383