A modular framework for stabilizing deep reinforcement learning control

Cited by: 1
Authors
Lawrence, Nathan P. [1 ]
Loewen, Philip D. [1 ]
Wang, Shuyuan [2 ]
Forbes, Michael G. [3 ]
Gopaluni, R. Bhushan [2 ]
Affiliations
[1] Univ British Columbia, Dept Math, Vancouver, BC V6T 1Z2, Canada
[2] Univ British Columbia, Dept Chem & Biol Engn, Vancouver, BC V6T 1Z3, Canada
[3] Honeywell Proc Solut, N Vancouver, BC V7J 3S4, Canada
Source
IFAC PAPERSONLINE | 2023, Vol. 56, Issue 02
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Reinforcement learning; data-driven control; Youla-Kučera parameterization; neural networks; stability; process control; SYSTEMS;
DOI
10.1016/j.ifacol.2023.10.923
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with the stability guarantees provided by using the Youla-Kučera parameterization to define the search domain. Recent advances in behavioral systems allow us to construct a data-driven internal model; this enables an alternative realization of the Youla-Kučera parameterization based entirely on input-output exploration data. Using a neural network to express a parameterized set of nonlinear stable operators enables seamless integration with standard deep learning libraries. We demonstrate the approach on a realistic simulation of a two-tank system. Copyright (c) 2023 The Authors.
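Note: The sketch below is a minimal, hypothetical illustration of the idea in the abstract, not the authors' implementation: a neural network that, by construction, can only represent stable nonlinear operators, so its weights can be trained freely inside a standard deep learning library while stability is preserved. The names (StableOperator, gamma), the contraction-based construction, and the use of internal-model residuals as inputs are all illustrative assumptions; the paper's actual parameterization and data-driven internal model are not reproduced here.

# Hypothetical sketch only: a contraction-constrained recurrent operator that is
# stable for every choice of its trainable weights, serving as a learnable
# "Youla parameter" Q. Names and constants are illustrative assumptions.
import torch
import torch.nn as nn

class StableOperator(nn.Module):
    """Nonlinear operator u_k = Q(e_k) with hidden state x_k.

    Stability is enforced by rescaling the recurrent matrix A so that
    ||A||_2 <= gamma < 1; with a 1-Lipschitz activation (tanh), the state
    recursion is then a contraction.
    """
    def __init__(self, n_in=1, n_state=16, n_out=1, gamma=0.95):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n_state, n_state) / n_state**0.5)
        self.B = nn.Parameter(torch.randn(n_state, n_in) / n_state**0.5)
        self.C = nn.Parameter(torch.randn(n_out, n_state) / n_state**0.5)
        self.gamma = gamma

    def _stable_A(self):
        # Rescale A so its spectral norm is at most gamma (< 1).
        sigma = torch.linalg.matrix_norm(self.A, ord=2)
        return self.gamma * self.A / torch.clamp(sigma, min=self.gamma)

    def forward(self, e_seq):
        # e_seq: (batch, T, n_in) trajectories of internal-model residuals.
        A = self._stable_A()
        batch, T, _ = e_seq.shape
        x = torch.zeros(batch, self.A.shape[0])
        outputs = []
        for t in range(T):
            x = torch.tanh(x @ A.T + e_seq[:, t] @ self.B.T)
            outputs.append(x @ self.C.T)
        return torch.stack(outputs, dim=1)

# The operator's weights can be updated by any gradient-based RL objective;
# every update stays inside the stable set by construction.
Q = StableOperator()
e = torch.randn(8, 50, 1)   # batch of residual trajectories (illustrative data)
u = Q(e)                    # stabilizing control adjustments
print(u.shape)              # torch.Size([8, 50, 1])

Constraining the spectral norm of the recurrence below one is one simple way to keep every gradient step inside a set of stable operators, which is the property such a framework relies on when handing the operator to a reinforcement learning algorithm.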
Pages: 8006-8011
Number of pages: 6
Related papers
50 records in total
  • [1] Stabilizing reinforcement learning control: A modular framework for optimizing over all stable behavior
    Lawrence, Nathan P.
    Loewen, Philip D.
    Wang, Shuyuan
    Forbes, Michael G.
    Gopaluni, R. Bhushan
    [J]. AUTOMATICA, 2024, 164
  • [2] Framework for Control and Deep Reinforcement Learning in Traffic
    Wu, Cathy
    Parvate, Kanaad
    Kheterpal, Nishant
    Dickstein, Leah
    Mehta, Ankur
    Vinitsky, Eugene
    Bayen, Alexandre M.
    [J]. 2017 IEEE 20TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2017,
  • [3] Enabling adaptable Industry 4.0 automation with a modular deep reinforcement learning framework
    Raziei, Zohreh
    Moghaddam, Mohsen
    [J]. IFAC PAPERSONLINE, 2021, 54 (01): : 546 - 551
  • [4] Modular Reinforcement Learning Framework for Learners and Educators
    Versaw, Rachael
    Schultz, Samantha
    Lu, Kevin
    Zhao, Richard
    [J]. PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON THE FOUNDATIONS OF DIGITAL GAMES, FDG 2021, 2021,
  • [5] A deep reinforcement learning based hyper-heuristic for modular production control
    Panzer, Marcel
    Bender, Benedict
    Gronau, Norbert
    [J]. INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH, 2024, 62 (08) : 2747 - 2768
  • [6] Modular production control using deep reinforcement learning: proximal policy optimization
    Mayer, Sebastian
    Classen, Tobias
    Endisch, Christian
    [J]. JOURNAL OF INTELLIGENT MANUFACTURING, 2021, 32 (08) : 2335 - 2351
  • [7] State Predictive Control of Modular SMES Magnet Based on Deep Reinforcement Learning
    Zhang, Zitong
    Shi, Jing
    Guo, Shuqiang
    Yang, Wangwang
    Lin, Dengquan
    Xu, Ying
    Ren, Li
    [J]. IEEE TRANSACTIONS ON APPLIED SUPERCONDUCTIVITY, 2022, 32 (06)
  • [8] Deep Reinforcement Learning-Based Control Framework for Multilateral Telesurgery
    Bacha, Sarah Chams
    Bai, Weibang
    Wang, Ziwei
    Xiao, Bo
    Yeatman, Eric M.
    [J]. IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2022, 4 (02): : 352 - 355
  • [9] A Deep Reinforcement Learning Framework for Control of Robotic Manipulators in Simulated Environments
    Calderon-Cordova, Carlos
    Sarango, Roger
    Castillo, Darwin
    Lakshminarayanan, Vasudevan
    [J]. IEEE ACCESS, 2024, 12 : 103133 - 103161