H∞ Tracking Control for Linear Discrete-Time Systems: Model-Free Q-Learning Designs

Cited by: 35
Authors
Yang, Yunjie [1 ]
Wan, Yan [2 ]
Zhu, Jihong [1 ]
Lewis, Frank L. [3 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[2] Univ Texas Arlington, Dept Elect Engn, Arlington, TX 76019 USA
[3] Univ Texas Arlington, UTA Res Inst, Ft Worth, TX 75052 USA
Source
IEEE CONTROL SYSTEMS LETTERS | 2021, Vol. 5, No. 1
Funding
National Natural Science Foundation of China
Keywords
Linear discrete-time systems; H-infinity tracking control; Q-learning; ZERO-SUM GAMES;
DOI
10.1109/LCSYS.2020.3001241
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
In this letter, a novel model-free Q-learning based approach is developed to solve the H-infinity tracking problem for linear discrete-time systems. A new exponentially discounted value function is introduced that penalizes both the entire control input and the tracking error. The tracking Bellman equation and the game algebraic Riccati equation (GARE) are derived, and the solution to the GARE yields the feedback and feedforward parts of the control input. A Q-learning algorithm is then developed to learn the solution of the GARE online without requiring any knowledge of the system dynamics. Convergence of the algorithm is analyzed, and it is also proved that the probing noise added to maintain the persistence of excitation (PE) condition does not introduce any bias. An example based on the F-16 aircraft short-period dynamics validates the proposed algorithm.
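The following is a minimal sketch of the kind of model-free Q-learning design the abstract describes: value iteration on a quadratic Q-function for a discounted zero-sum tracking game, with the saddle-point feedback gains read off from the learned Q-matrix. The plant matrices, reference model, attenuation level, and discount factor below are assumptions chosen for illustration, not the paper's F-16 example; the plant is used only as a black-box simulator to generate transitions, so the learner itself never touches the system matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant x' = A x + B u + E w with tracking error e = x1 - r
# and a constant reference r' = r.  Augmented state z = [x; r].
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[1.0], [0.0]])
T  = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), np.eye(1)]])
B1 = np.vstack([B, [[0.0]]])          # control channel, augmented
B2 = np.vstack([E, [[0.0]]])          # disturbance channel, augmented

Ce    = np.array([[1.0, 0.0, -1.0]])  # tracking error e = x1 - r
gamma = 0.9                           # exponential discount factor (assumed)
ga2   = 25.0                          # attenuation level gamma_a^2 (assumed)

n, m, q = 3, 1, 1
d = n + m + q                         # phi = [z; u; w], Q(z,u,w) = phi^T H phi

def features(phi):
    """Quadratic basis for symmetric H: phi_i^2 and 2*phi_i*phi_j for i < j."""
    return np.array([phi[i] * phi[j] * (1.0 if i == j else 2.0)
                     for i in range(d) for j in range(i, d)])

def unpack(theta):
    """Rebuild the symmetric Q-matrix H from the regressed parameter vector."""
    H = np.zeros((d, d)); k = 0
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = theta[k]; k += 1
    return H

def stage_cost(z, u, w):
    e = Ce @ z
    return float(e @ e + u @ u - ga2 * w @ w)   # Qe = I, R = I

H = np.zeros((d, d))
K = np.zeros((m, n)); L = np.zeros((q, n))      # saddle-point gains
N = 300
for it in range(200):
    Phi = np.zeros((N, d * (d + 1) // 2)); y = np.zeros(N)
    for s in range(N):
        z = rng.standard_normal(n)
        u = rng.standard_normal(m)              # exploratory (probing) inputs
        w = rng.standard_normal(q)
        zn = T @ z + B1 @ u + B2 @ w            # one black-box simulator step
        psi = np.concatenate([zn, -K @ zn, -L @ zn])   # greedy saddle policies
        Phi[s] = features(np.concatenate([z, u, w]))
        y[s] = stage_cost(z, u, w) + gamma * psi @ H @ psi
    H_new = unpack(np.linalg.lstsq(Phi, y, rcond=None)[0])
    diff = np.max(np.abs(H_new - H)); H = H_new
    # Saddle point of the quadratic Q: [u; w] = -G z from the H blocks.
    G = np.linalg.solve(H[n:, n:], H[n:, :n])
    K, L = G[:m], G[m:]

print("final Bellman-iterate change:", diff)
print("feedback gain K =", K, " worst-case disturbance gain L =", L)
```

Because the Q-function of a linear-quadratic game is exactly quadratic in (z, u, w), the least-squares step recovers H exactly from sufficiently exciting data; the probing inputs above play the role of the PE-maintaining noise discussed in the abstract.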
Pages: 175-180 (6 pages)
Related Papers
50 records in total
  • [21] Model-free aperiodic tracking for discrete-time systems using hierarchical reinforcement learning
    Tian, Yingqiang
    Wan, Haiying
    Karimi, Hamid Reza
    Luan, Xiaoli
    Liu, Fei
    NEUROCOMPUTING, 2024, 609
  • [22] Model-free distributed optimal control for general discrete-time linear systems using reinforcement learning
    Feng, Xinjun
    Zhao, Zhiyun
    Yang, Wen
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2024, 34 (09) : 5570 - 5589
  • [23] An ADDHP-based Q-learning algorithm for optimal tracking control of linear discrete-time systems with unknown dynamics
    Mu, Chaoxu
    Zhao, Qian
    Sun, Changyin
    Gao, Zhongke
    APPLIED SOFT COMPUTING, 2019, 82
  • [24] Model-free optimal tracking control for linear discrete-time stochastic systems subject to additive and multiplicative noises
    Yin Y.-B.
    Luo S.-X.
    Wan T.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2023, 40 (06): : 1014 - 1022
  • [25] H∞ tracking control for linear discrete-time systems via reinforcement learning
    Liu, Ying-Ying
    Wang, Zhan-Shan
    Shi, Zhan
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2020, 30 (01) : 282 - 301
  • [26] Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach
    Vamvoudakis, Kyriakos G.
    SYSTEMS & CONTROL LETTERS, 2017, 100 : 14 - 20
  • [27] Optimal trajectory tracking for uncertain linear discrete-time systems using time-varying Q-learning
    Geiger, Maxwell
    Narayanan, Vignesh
    Jagannathan, Sarangapani
    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, 2024, 38 (07) : 2340 - 2368
  • [28] Online Adaptive Optimal Control of Discrete-time Linear Systems via Synchronous Q-learning
    Li, Xinxing
    Wang, Xueyuan
    Zha, Wenzhong
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 2024 - 2029
  • [29] Adjustable Iterative Q-Learning Schemes for Model-Free Optimal Tracking Control
    Qiao, Junfei
    Zhao, Mingming
    Wang, Ding
    Ha, Mingming
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2024, 54 (02): : 1202 - 1213
  • [30] Optimal tracking control for discrete-time modal persistent dwell time switched systems based on Q-learning
    Zhang, Xuewen
    Wang, Yun
    Xia, Jianwei
    Li, Feng
    Shen, Hao
    OPTIMAL CONTROL APPLICATIONS & METHODS, 2023, 44 (06): : 3327 - 3341