Optimal synchronized control of nonlinear coupled harmonic oscillators based on actor-critic reinforcement learning

Cited: 3
Authors
Gu, Zhiyang [1 ]
Fan, Chengli [2 ]
Yu, Dengxiu [3 ]
Wang, Zhen [4 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
[2] Air Force Engn Univ, Air & Missile Def Coll, Xian, Shaanxi, Peoples R China
[3] Northwestern Polytech Univ, Unmanned Syst Res Inst, Xian 710072, Shaanxi, Peoples R China
[4] Northwestern Polytech Univ, Ctr Opt Imagery Anal & Learning, Xian 710072, Shaanxi, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Coupled harmonic oscillator; Reinforcement learning; Backstepping control; Synchronization; Nonlinear dynamics; SYSTEMS; TRANSITION;
DOI
10.1007/s11071-023-08957-y
Chinese Library Classification (CLC)
TH [Machinery and Instrument Industry];
Subject classification code
0802;
Abstract
A distributed optimal control algorithm based on adaptive neural networks is proposed for the synchronized control of a class of second-order nonlinear coupled harmonic oscillators. First, graph theory is used to establish the coupling relationships among the harmonic oscillator models. Second, a neural network fits the unknown nonlinearity in each oscillator model, and the virtual and actual controllers are designed via the backstepping method. Then, a cost function and the Hamilton-Jacobi-Bellman (HJB) equation are formulated from the state error and the controller. Because the HJB equation cannot be solved directly, a critic neural network approximates its solution. Together, these two neural networks form a simplified reinforcement-learning scheme that achieves optimal consensus control of the nonlinear coupled harmonic oscillators. Finally, the stability and effectiveness of the scheme are verified by the Lyapunov stability theorem and numerical simulation, respectively.
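The abstract's actor-critic idea can be illustrated with a minimal sketch: a follower oscillator tracks a reference harmonic oscillator using stabilizing feedback (standing in for the backstepping controller), while a critic with quadratic features learns an approximate cost-to-go via temporal-difference updates. All gains, features, and learning rates below are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

# Illustrative sketch only: one follower oscillator synchronizes with a
# reference harmonic oscillator. The "actor" is stabilizing feedback plus
# a small correction from the critic's value gradient; the "critic"
# approximates the cost-to-go with quadratic features of the sync error.

dt, steps = 0.01, 5000
omega = 1.0                        # natural frequency of both oscillators

def features(e, de):
    """Quadratic features for the critic's value approximation."""
    return np.array([e * e, e * de, de * de])

Wc = np.zeros(3)                   # critic weights
x0, v0 = 1.0, 0.0                  # reference oscillator state
x1, v1 = -0.5, 0.5                 # follower oscillator state
lr_c, k1, k2 = 0.5, 2.0, 2.0       # critic step size, feedback gains

errs = [abs(x1 - x0)]
for _ in range(steps):
    e, de = x1 - x0, v1 - v0
    # actor: stabilizing feedback plus a small critic-gradient correction
    u = -k1 * e - k2 * de - 0.1 * (Wc[1] * e + 2.0 * Wc[2] * de)
    r = e * e + de * de + 0.01 * u * u          # running cost
    phi = features(e, de)
    # Euler step: reference is unforced, follower is driven by u
    a0, a1 = -omega**2 * x0, -omega**2 * x1 + u
    x0, v0 = x0 + dt * v0, v0 + dt * a0
    x1, v1 = x1 + dt * v1, v1 + dt * a1
    # critic: temporal-difference update of the value weights
    phi_next = features(x1 - x0, v1 - v0)
    td = r * dt + 0.99 * (Wc @ phi_next) - Wc @ phi
    Wc += lr_c * td * phi
    errs.append(abs(x1 - x0))

print(f"initial |e| = {errs[0]:.3f}, final |e| = {errs[-1]:.4f}")
```

The synchronization error contracts because the feedback gains dominate; the critic term only fine-tunes the control, which mirrors (in a much simpler setting) how the paper's critic network shapes the backstepping controller toward optimality.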
Pages: 21051-21064
Page count: 14
Related papers
50 records
  • [21] Swarm Reinforcement Learning Method Based on an Actor-Critic Method
    Iima, Hitoshi
    Kuroe, Yasuaki
    SIMULATED EVOLUTION AND LEARNING, 2010, 6457 : 279 - 288
  • [22] Manipulator Motion Planning based on Actor-Critic Reinforcement Learning
    Li, Qiang
    Nie, Jun
    Wang, Haixia
    Lu, Xiao
    Song, Shibin
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 4248 - 4254
  • [23] Evaluating Correctness of Reinforcement Learning based on Actor-Critic Algorithm
    Kim, Youngjae
    Hussain, Manzoor
    Suh, Jae-Won
    Hong, Jang-Eui
    2022 THIRTEENTH INTERNATIONAL CONFERENCE ON UBIQUITOUS AND FUTURE NETWORKS (ICUFN), 2022, : 320 - 325
  • [24] Multi-actor mechanism for actor-critic reinforcement learning
    Li, Lin
    Li, Yuze
    Wei, Wei
    Zhang, Yujia
    Liang, Jiye
    INFORMATION SCIENCES, 2023, 647
  • [25] Adaptive Optimal Tracking Control of an Underactuated Surface Vessel Using Actor-Critic Reinforcement Learning
    Chen, Lin
    Dai, Shi-Lu
    Dong, Chao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (06) : 7520 - 7533
  • [26] Disturbance observer based actor-critic learning control for uncertain nonlinear systems
    Liang, Xianglong
    Yao, Zhikai
    Ge, Yaowen
    Yao, Jianyong
    CHINESE JOURNAL OF AERONAUTICS, 2023, 36 (11) : 271 - 280
  • [29] Actor-Critic Traction Control Based on Reinforcement Learning with Open-Loop Training
    Drechsler, M. Funk
    Fiorentin, T. A.
    Goellinger, H.
    MODELLING AND SIMULATION IN ENGINEERING, 2021, 2021
  • [30] USING ACTOR-CRITIC REINFORCEMENT LEARNING FOR CONTROL AND FLIGHT FORMATION OF QUADROTORS
    Torres, Edgar
    Xu, Lei
    Sardarmehni, Tohid
    PROCEEDINGS OF ASME 2022 INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION, IMECE2022, VOL 5, 2022,