On Distributed Model-Free Reinforcement Learning Control With Stability Guarantee

Cited by: 2
Authors
Mukherjee, Sayak [1 ]
Vu, Thanh Long [1 ]
Affiliations
[1] Pacific Northwest National Laboratory, Optimization & Control Group, Richland, WA 99354 USA
Source
IEEE CONTROL SYSTEMS LETTERS, 2021, Vol. 5, No. 5
Keywords
Feedback control; Power system stability; Eigenvalues and eigenfunctions; Decision making; Computational modeling; Mathematical model; Dynamical systems; Distributed control; learning control; reinforcement learning; stability guarantee; interconnected systems; TIME LINEAR-SYSTEMS; DESIGN;
DOI
10.1109/LCSYS.2020.3041218
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Distributed learning can enable scalable and effective decision making in numerous complex cyber-physical systems such as smart transportation, robotic swarms, and power systems. However, the stability of the system is usually not guaranteed in most existing learning paradigms, and this limitation can hinder the wide deployment of machine learning in the decision making of safety-critical systems. This letter presents a stability-guaranteed distributed reinforcement learning (SGDRL) framework for interconnected linear subsystems that does not require knowledge of the subsystem models. While the learning process requires data from a peer-to-peer (p2p) communication architecture, the control implementation of each subsystem is based only on its local states. The stability of the interconnected subsystems is ensured by a diagonally dominant eigenvalue condition, which is then used in a model-free RL algorithm to learn the stabilizing control gains. The RL algorithm follows an off-policy iterative framework with interleaved policy evaluation and policy update steps. We numerically validate our theoretical results by performing simulations on four interconnected subsystems.
Pages: 1615-1620
Page count: 6
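The abstract describes an off-policy iterative structure with interleaved policy evaluation and policy update steps. The minimal Python sketch below illustrates that generic structure using a standard Q-learning-style policy iteration (LSPI) on a single discrete-time linear subsystem; it is not the authors' SGDRL algorithm. The system matrices A and B, the cost weights Qc and Rc, the discrete-time setting, and the single-subsystem scope are assumptions made purely for illustration, and the paper's diagonally dominant eigenvalue condition and p2p data exchange across subsystems are not modeled here.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) subsystem and cost data -- not taken from the paper.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])      # "unknown" plant matrix: used only to simulate data
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)                  # state cost weight
Rc = 0.1 * np.eye(1)            # input cost weight
n, m = B.shape

# Collect one batch of off-policy data with an exploratory behaviour policy.
K = np.zeros((m, n))            # initial stabilizing gain (A itself is stable here)
data = []
xk = rng.normal(size=n)
for _ in range(400):
    uk = -K @ xk + 0.5 * rng.normal(size=m)   # exploration noise makes the data off-policy
    xk1 = A @ xk + B @ uk                     # plant step; the learner never uses A, B directly
    data.append((xk, uk, xk1))
    xk = xk1

# Interleaved policy evaluation / policy update, reusing the same data batch.
for it in range(10):
    Phi, y = [], []
    for x, u, x1 in data:
        z = np.concatenate([x, u])
        z1 = np.concatenate([x1, -K @ x1])    # next input drawn from the *target* policy
        Phi.append(np.kron(z, z) - np.kron(z1, z1))
        y.append(x @ Qc @ x + u @ Rc @ u)
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = h.reshape(n + m, n + m)
    H = 0.5 * (H + H.T)                       # policy evaluation: symmetric Q-matrix
    K = np.linalg.solve(H[n:, n:], H[n:, :n]) # policy update: greedy gain from the Q-matrix

print("learned gain K:", K)
print("closed-loop spectral radius:", max(abs(np.linalg.eigvals(A - B @ K))))  # < 1 => stable

Reusing the single exploratory data batch in every iteration is what makes the scheme off-policy: each pass alternates a least-squares policy-evaluation step (fitting the Q-matrix H of the current gain K) with a greedy policy-update step that extracts the next gain from H.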
Related Papers
50 records in total
  • [31] Zhang, Hang; Yan, Dongqi; Zhang, Yanxi; Liu, Jiamu; Yao, Mingwu. Distributed synchronization based on model-free reinforcement learning in wireless ad hoc networks. COMPUTER NETWORKS, 2023, 227.
  • [32] Delaleau, Emmanuel. A Proof of Stability of Model-Free Control. 2014 IEEE CONFERENCE ON NORBERT WIENER IN THE 21ST CENTURY (21CW), 2014.
  • [33] Rosdahl, Christian; Bernhardsson, B. O.; Eisenhower, Bryan. Model-free MIMO control tuning of a chiller process using reinforcement learning. SCIENCE AND TECHNOLOGY FOR THE BUILT ENVIRONMENT, 2023, 29(08): 782-794.
  • [34] Abouheaf, Mohammed; Gueaieb, Wail; Lewis, Frank. Online model-free reinforcement learning for the automatic control of a flexible wing aircraft. IET CONTROL THEORY AND APPLICATIONS, 2020, 14(01): 73-84.
  • [35] Huang, Dingcui; Hu, Jiangping; Peng, Zhinan; Chen, Bo; Hao, Mingrui; Ghosh, Bijoy Kumar. Model-free Based Reinforcement Learning Control Strategy of Aircraft Attitude Systems. 2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020: 743-748.
  • [36] Pou, B.; Ferreira, F.; Quinones, E.; Gratadour, D.; Martin, M. Adaptive optics control with multi-agent model-free reinforcement learning. OPTICS EXPRESS, 2022, 30(02): 2991-3015.
  • [37] Rose, Lowell; Bazzocchi, Michael C. F.; Nejat, Goldie. A model-free deep reinforcement learning approach for control of exoskeleton gait patterns. ROBOTICA, 2022, 40(07): 2189-2214.
  • [38] Chen, Kemeng; Huang, Xuanrui; Lin, Zechuan; Xiao, Xi. Control of a Wave Energy Converter Using Model-free Deep Reinforcement Learning. 2024 UKACC 14TH INTERNATIONAL CONFERENCE ON CONTROL (CONTROL), 2024: 1-6.
  • [39] Sawant, Shambhuraj; Reinhardt, Dirk; Kordabad, Arash Bahari; Gros, Sebastien. Model-free Data-driven Predictive Control Using Reinforcement Learning. 2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2023: 4046-4052.
  • [40] Biemann, Marco; Scheller, Fabian; Liu, Xiufeng; Huang, Lizhen. Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control. APPLIED ENERGY, 2021, 298.