Safe control of nonlinear systems in LPV framework using model-based reinforcement learning

Cited by: 7
Authors
Bao, Yajie [1 ]
Velni, Javad Mohammadpour [1 ]
Affiliations
[1] Univ Georgia, Sch Elect & Comp Engn, Athens, GA 30602 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Safe nonlinear control; model-based reinforcement learning; LPV framework; PREDICTIVE CONTROL; IDENTIFICATION;
DOI
10.1080/00207179.2022.2029945
CLC classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
This paper presents a safe model-based reinforcement learning (MBRL) approach to control nonlinear systems described by linear parameter-varying (LPV) models. A variational Bayesian inference Neural Network (BNN) approach is first employed to learn a state-space model with uncertainty quantification from input-output data collected from the system; the model is then utilised for training MBRL to learn control actions for the system with safety guarantees. Specifically, MBRL employs the BNN model to generate simulation environments for training, which avoids safety violations in the exploration stage. To adapt to dynamically varying environments, knowledge of the evolution of LPV model scheduling variables is incorporated in simulation to reduce the discrepancy between the transition distributions of simulation and real environments. Experiments on a parameter-varying double integrator system and a control moment gyroscope (CMG) simulation model demonstrate that the proposed approach can safely achieve desired control performance.
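The pipeline the abstract describes (learn an uncertainty-quantified LPV model from input-output data, then select actions inside the learned simulation so that exploration never violates safety constraints) can be illustrated with a minimal numpy sketch. The bootstrapped ensemble of linear LPV models below is a crude stand-in for the paper's variational BNN (the ensemble spread plays the role of the posterior uncertainty), the double-integrator dynamics, safe set, and all constants are illustrative assumptions, and nothing here reproduces the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_step(x, u, p):
    """Parameter-varying double integrator: x+ = A(p) x + B u (illustrative)."""
    A = np.array([[1.0, 0.1], [0.0, 1.0 - 0.1 * p]])
    B = np.array([0.0, 0.1])
    return A @ x + B * u

# 1) Collect input-output data from the real system along a known
#    scheduling-variable trajectory p(k).
X, U, P, Xn = [], [], [], []
x = np.zeros(2)
for k in range(500):
    p = 0.5 + 0.5 * np.sin(0.05 * k)
    u = rng.uniform(-1.0, 1.0)
    xn = true_step(x, u, p)
    X.append(x); U.append(u); P.append(p); Xn.append(xn)
    x = xn if np.all(np.abs(xn) < 5.0) else np.zeros(2)  # reset if far out

X, U, P, Xn = map(np.asarray, (X, U, P, Xn))
# LPV regression features: [x1, x2, u, p*x2, 1]
feats = np.column_stack([X, U, P * X[:, 1], np.ones(len(U))])

# 2) Fit an ensemble of linear LPV models by bootstrapped least squares --
#    a stand-in for the variational BNN's uncertainty quantification.
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(U), len(U))
    W, *_ = np.linalg.lstsq(feats[idx], Xn[idx], rcond=None)
    ensemble.append(W)

def predict(W, x, u, p):
    f = np.concatenate([x, [u, p * x[1], 1.0]])
    return f @ W

# 3) Safe action selection inside the learned simulation environment:
#    admit an action only if EVERY ensemble member predicts the next
#    state stays in the safe set |x_i| <= 2; among admitted actions,
#    pick the one with lowest mean predicted cost.
def safe_action(x, p, candidates=np.linspace(-1.0, 1.0, 41)):
    best_u, best_cost = 0.0, np.inf
    for u in candidates:
        preds = [predict(W, x, u, p) for W in ensemble]
        if all(np.all(np.abs(xp) <= 2.0) for xp in preds):
            cost = np.mean([xp @ xp for xp in preds])  # drive state to origin
            if cost < best_cost:
                best_u, best_cost = u, cost
    return best_u

x, p = np.array([1.5, 0.5]), 0.8
u = safe_action(x, p)
x_next = true_step(x, u, p)
print(u, x_next)
```

Because the safety check is taken over all ensemble members rather than a single point prediction, an action is only applied when the model is confident the next state remains safe; this mirrors, in a simplified one-step form, how the paper's BNN-generated simulation environments avoid safety violations during exploration.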
Pages: 1078-1089 (12 pages)
Related papers (50 records)
  • [1] Safe model-based reinforcement learning for nonlinear optimal control with state and input constraints
    Kim, Yeonsoo
    Kim, Jong Woo
    AICHE JOURNAL, 2022, 68 (05)
  • [2] Safe exploration in model-based reinforcement learning using control barrier functions
    Cohen, Max H.
    Belta, Calin
    AUTOMATICA, 2023, 147
  • [3] Multiple model-based reinforcement learning for nonlinear control
    Samejima, K
    Katagiri, K
    Doya, K
    Kawato, M
    ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE, 2006, 89 (09): : 54 - 69
  • [4] Safe Model-Based Reinforcement Learning for Systems With Parametric Uncertainties
    Mahmud, S. M. Nahid
    Nivison, Scott A.
    Bell, Zachary I.
    Kamalapurkar, Rushikesh
    FRONTIERS IN ROBOTICS AND AI, 2021, 8
  • [5] Model-based safe reinforcement learning for nonlinear systems under uncertainty with constraints tightening approach
    Kim, Yeonsoo
    Oh, Tae Hoon
    COMPUTERS & CHEMICAL ENGINEERING, 2024, 183
  • [6] Model-based Safe Reinforcement Learning using Variable Horizon Rollouts
    Gupta, Shourya
    Suryaman, Utkarsh
    Narava, Rahul
    Jha, Shashi Shekhar
    PROCEEDINGS OF 7TH JOINT INTERNATIONAL CONFERENCE ON DATA SCIENCE AND MANAGEMENT OF DATA, CODS-COMAD 2024, 2024, : 100 - 108
  • [7] A Configurable Model-Based Reinforcement Learning Framework for Disaggregated Storage Systems
    Jeong, Seunghwan
    Woo, Honguk
    IEEE ACCESS, 2023, 11 : 14876 - 14891
  • [8] A Safety Aware Model-Based Reinforcement Learning Framework for Systems with Uncertainties
    Mahmud, S. M. Nahid
    Hareland, Katrine
    Nivison, Scott A.
Bell, Zachary I.
    Kamalapurkar, Rushikesh
    2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 1979 - 1984
  • [9] Model-based reinforcement learning for output-feedback optimal control of a class of nonlinear systems
    Self, Ryan
    Harlan, Michael
    Kamalapurkar, Rushikesh
    2019 AMERICAN CONTROL CONFERENCE (ACC), 2019, : 2378 - 2383
  • [10] Safe Stabilization Control for Interconnected Virtual-Real Systems via Model-based Reinforcement Learning
    Tan, Junkai
    Xue, Shuangsi
    Li, Huan
    Cao, Hui
    Li, Dongyu
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 605 - 610