Adaptive Event-based Reinforcement Learning Control

Cited: 0
Authors
Meng, Fancheng [1 ,2 ,3 ]
An, Aimin [1 ,2 ]
Li, Erchao [2 ,3 ]
Yang, Shuo [1 ]
Affiliations
[1] Lanzhou Univ Technol, Coll Elect & Informat Engn, Lanzhou 730050, Peoples R China
[2] Key Lab Gansu Adv Control Ind Proc, Lanzhou 730050, Peoples R China
[3] Natl Expt Teaching Ctr Elect & Control Engn, Lanzhou 730050, Peoples R China
Keywords
Reinforcement Learning; Event State; ETRL; WNNSR; Sample Reuse; SYSTEMS
DOI
10.1109/ccdc.2019.8832922
CLC Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Reinforcement learning (RL) methods have been used successfully for control and decision-making problems in many engineering domains, such as industrial manufacturing, power management, industrial robots, and rehabilitation robotic systems. However, state-based RL methods run into difficulty when controlling high-dimensional systems because of their computational load and storage requirements. In addition, to achieve good control results, state-based RL typically needs to exploit and explore as many states as possible, which makes it ill-suited to control problems involving unknown or partially known systems. To address these problems, a new adaptive event-based reinforcement learning algorithm (ETRL) is proposed in this paper. In the proposed ETRL approach, an event generator first samples a set of states (event states, abbreviated ES in the paper) from the unknown system's state space using effective event-sampling strategies. A Q-learning controller then uses the ES and the ES-based reinforcement signal (i.e., reward feedback) to guide and adjust the control law. Moreover, an adaptive weighted nearest-neighbor and sample-reuse method (WNNSR) is proposed to sample the most sensitive actions, guaranteeing both the control performance and the stability of ETRL during learning. Finally, a convergence analysis verifies the proposed ETRL approach.
Pages: 3471-3476
Page count: 6
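
As a rough, non-authoritative illustration of the loop described in the abstract, the Python sketch below pairs a send-on-delta event generator with Q-learning over stored event states, using a distance-weighted k-nearest-neighbour Q lookup and a small replay batch to stand in for the WNNSR weighted nearest-neighbour and sample-reuse idea. The toy scalar plant, the trigger threshold, the action set, and all parameter values are assumptions made for this sketch; the paper's actual ETRL and WNNSR procedures are not reproduced here.

```python
"""Minimal sketch of an event-triggered Q-learning loop in the spirit of the
abstract above. All modelling choices (send-on-delta trigger, 1-D toy plant,
3-action control set, k-NN weighting, replay batch size) are illustrative
assumptions, not the authors' ETRL/WNNSR implementation."""
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 3            # assumed control set: {decrease, hold, increase}
ALPHA, GAMMA, EPS = 0.2, 0.95, 0.1
EVENT_THRESHOLD = 0.05   # assumed send-on-delta event trigger
K_NEIGHBOURS = 3

es_memory = []           # stored event states (scalars in this toy example)
q_table = []             # one Q-value row per stored event state
replay = []              # stored (es_index, action, reward, next_state) samples


def q_row(es):
    """Distance-weighted Q estimate from the k nearest stored event states."""
    if not es_memory:
        return np.zeros(N_ACTIONS)
    d = np.abs(np.array(es_memory) - es)
    idx = np.argsort(d)[:K_NEIGHBOURS]
    w = 1.0 / (d[idx] + 1e-6)
    w /= w.sum()
    return np.einsum("i,ij->j", w, np.array(q_table)[idx])


def choose_action(es):
    """Epsilon-greedy action selection on the weighted nearest-neighbour Q row."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_row(es)))


# Toy unknown plant: a stable scalar system regulated toward a set point of 0.
state, last_es, u = rng.uniform(-1.0, 1.0), None, 0.0
for step in range(2000):
    if last_es is None or abs(state - last_es) > EVENT_THRESHOLD:
        # Event generator fires: record an event state and update the control law.
        es = state
        a = choose_action(es)
        u = (a - 1) * 0.1                          # map action index to control input
        next_state = 0.9 * state + u + rng.normal(0.0, 0.01)
        reward = -abs(next_state)                  # reward: closeness to the set point
        new_row = q_row(es)                        # initialise Q row from neighbours
        es_memory.append(es)
        q_table.append(new_row)
        replay.append((len(es_memory) - 1, a, reward, next_state))
        # Sample reuse: replay a small batch of stored transitions at each event.
        batch = rng.choice(len(replay), size=min(8, len(replay)), replace=False)
        for j in batch:
            i_s, a_s, r_s, ns = replay[j]
            target = r_s + GAMMA * np.max(q_row(ns))
            q_table[i_s][a_s] += ALPHA * (target - q_table[i_s][a_s])
        last_es, state = es, next_state
    else:
        # No event: hold the previous control input (zero-order hold assumed).
        state = 0.9 * state + u + rng.normal(0.0, 0.01)

print(f"final |state| = {abs(state):.3f}, stored event states = {len(es_memory)}")
```

The nearest-neighbour weighting lets the controller estimate action values for event states it has never stored exactly, which is the role the abstract assigns to WNNSR's weighted nearest-neighbour component, while the replay batch mirrors its sample-reuse idea at a very coarse level.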