Actor-Critic Algorithm for Optimal Synchronization of Kuramoto Oscillator

Cited: 0
Authors
Vrushabh, D. [1 ]
Shalini, K. [1 ]
Sonam, K. [1 ]
Affiliations
[1] Veermata Jijabai Technol Inst, EED, Mumbai, Maharashtra, India
Keywords
Reinforcement learning; Hamilton-Jacobi-Bellman; Approximate dynamic programming; Kuramoto oscillator; Mean-field game; Order parameter; Networks
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
This paper constructs a reinforcement learning (RL) based Actor-Critic (AC) algorithm for the optimal synchronization of the Kuramoto oscillator. The development is carried out within the Ott-Antonsen ansatz framework for the dynamics of large networks of interacting units. This ansatz reduces the infinite-dimensional dynamics to a low-dimensional flow in phase space for certain systems of globally coupled phase oscillators. The resulting Hamilton-Jacobi-Bellman (HJB) equation is very difficult to solve in general, so this paper introduces the AC method to learn approximate optimal control laws for the Kuramoto oscillator model. RL is regarded as one of the effective methods for solving optimal control problems for nonlinear systems. For a collection of non-homogeneous oscillators, the states are described by the phase angles, a modification of the coupled Kuramoto oscillator model. An admissible initial control policy for the Kuramoto oscillator model is designed and improved using RL, yielding an approximate solution of the optimal control problem. Finally, local synchronization of the coupled Kuramoto oscillator model is demonstrated through simulation analysis.
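The following is a minimal sketch, not the authors' implementation, of the ingredients the abstract describes: a small population of coupled Kuramoto oscillators with a shared control input, synchronization measured through the order parameter r = |(1/N) sum_j exp(i theta_j)|, and a temporal-difference actor-critic acting on low-dimensional mean-field features. The population size, coupling strength, learning rates, reward weights, and the linear-in-features Gaussian policy are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# --- Controlled Kuramoto model (illustrative sizes and parameters) ---
N, K, dt = 20, 1.0, 0.01                       # oscillators, coupling, time step
omega = rng.normal(0.0, 0.5, N)                # heterogeneous natural frequencies
theta = rng.uniform(-np.pi, np.pi, N)          # initial phases (states)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r -> 1 indicates synchronization."""
    return np.abs(np.mean(np.exp(1j * theta)))

def step(theta, u):
    """One Euler step of the controlled Kuramoto dynamics with shared input u."""
    coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    return theta + dt * (omega + coupling + u)

# --- Tiny actor-critic on mean-field features (assumed parameterization) ---
def features(theta):
    """Low-dimensional features: order parameter, mean-phase components, bias."""
    z = np.mean(np.exp(1j * theta))
    return np.array([np.abs(z), z.real, z.imag, 1.0])

w = np.zeros(4)        # critic weights, V(x) ~= w . phi(x)
v = np.zeros(4)        # actor weights, u ~ N(v . phi(x), sigma^2)
alpha_c, alpha_a, gamma, sigma = 0.1, 0.01, 0.99, 0.3

for episode in range(200):
    theta = rng.uniform(-np.pi, np.pi, N)
    for t in range(500):
        phi = features(theta)
        u = v @ phi + sigma * rng.normal()                      # stochastic policy
        theta_next = step(theta, u)
        r = order_parameter(theta_next) - 0.01 * u**2           # reward: sync minus control cost
        phi_next = features(theta_next)
        delta = r + gamma * (w @ phi_next) - w @ phi            # TD error
        w += alpha_c * delta * phi                              # critic update
        v += alpha_a * delta * (u - v @ phi) / sigma**2 * phi   # actor (policy-gradient) update
        theta = theta_next

print("final order parameter:", order_parameter(theta))

The critic approximates the value of the mean-field state, while the actor follows a likelihood-ratio policy-gradient step weighted by the TD error; the order-parameter reward loosely mirrors the synchronization objective discussed in the abstract.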
Pages: 391-396
Page count: 6