Safe control of nonlinear systems in LPV framework using model-based reinforcement learning

Cited by: 7
Authors
Bao, Yajie [1 ]
Velni, Javad Mohammadpour [1 ]
Affiliations
[1] Univ Georgia, Sch Elect & Comp Engn, Athens, GA 30602 USA
Funding
National Science Foundation (USA);
Keywords
Safe nonlinear control; model-based reinforcement learning; LPV framework; PREDICTIVE CONTROL; IDENTIFICATION;
DOI
10.1080/00207179.2022.2029945
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
This paper presents a safe model-based reinforcement learning (MBRL) approach to control nonlinear systems described by linear parameter-varying (LPV) models. A variational Bayesian inference neural network (BNN) approach is first employed to learn a state-space model with uncertainty quantification from input-output data collected from the system; the model is then utilised to train MBRL to learn control actions for the system with safety guarantees. Specifically, MBRL employs the BNN model to generate simulation environments for training, which avoids safety violations in the exploration stage. To adapt to dynamically varying environments, knowledge of the evolution of the LPV model scheduling variables is incorporated into the simulation to reduce the discrepancy between the transition distributions of the simulated and real environments. Experiments on a parameter-varying double integrator system and a control moment gyroscope (CMG) simulation model demonstrate that the proposed approach can safely achieve the desired control performance.
Pages: 1078-1089
Page count: 12
Related Papers
50 records in total
  • [41] A Stochastic Traffic Flow Model-Based Reinforcement Learning Framework For Advanced Traffic Signal Control. Zhu, Yifan; Lv, Yisheng; Lin, Shu; Xu, Jungang. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2025, 26 (01): 714-723.
  • [42] Robust control for affine nonlinear systems under the reinforcement learning framework. Guo, Wenxin; Qin, Weiwei; Lan, Xuguang; Liu, Jieyu; Zhang, Zhaoxiang. NEUROCOMPUTING, 2024, 587.
  • [43] Fault Tolerant Control combining Reinforcement Learning and Model-based Control. Bhan, Luke; Quinones-Grueiro, Marcos; Biswas, Gautam. 5TH CONFERENCE ON CONTROL AND FAULT-TOLERANT SYSTEMS (SYSTOL 2021), 2021: 31-36.
  • [44] Provably Safe Model-Based Meta Reinforcement Learning: An Abstraction-Based Approach. Sun, Xiaowu; Fatnassi, Wael; Santa Cruz, Ulices; Shoukry, Yasser. 2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021: 2963-2968.
  • [45] Safe Model-Based Reinforcement Learning With an Uncertainty-Aware Reachability Certificate. Yu, Dongjie; Zou, Wenjun; Yang, Yujie; Ma, Haitong; Li, Shengbo Eben; Yin, Yuming; Chen, Jianyu; Duan, Jingliang. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2024, 21 (03): 4129-4142.
  • [46] Model-Based Reinforcement Learning Framework of Online Network Resource Allocation. Bakhshi, Bahador; Mangues-Bafalluy, Josep. IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022: 4456-4461.
  • [47] Uncertainty-Aware Contact-Safe Model-Based Reinforcement Learning. Kuo, Cheng-Yu; Schaarschmidt, Andreas; Cui, Yunduan; Asfour, Tamim; Matsubara, Takamitsu. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02): 3918-3925.
  • [48] Cognitive Control Predicts Use of Model-based Reinforcement Learning. Otto, A. Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D. JOURNAL OF COGNITIVE NEUROSCIENCE, 2015, 27 (02): 319-333.
  • [49] Model-based hierarchical reinforcement learning and human action control. Botvinick, Matthew; Weinstein, Ari. PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, 2014, 369 (1655).
  • [50] Model-based Reinforcement Learning for Continuous Control with Posterior Sampling. Fan, Ying; Ming, Yifei. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139.