Data-Driven Optimal Controller Design for Maglev Train: Q-Learning Method

Times Cited: 1
Authors
Xin, Liang [1 ]
Jiang, Hongwei [2 ]
Wen, Tao [1 ]
Long, Zhiqiang [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha 410073, Peoples R China
[2] CRRC Zhuzhou Locomot Co Ltd, Zhuzhou 412001, Hunan, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Maglev train; Data-Driven Optimal Controller; Q-learning; TRACKING CONTROL; REINFORCEMENT; SYSTEMS
DOI
10.1109/CCDC55256.2022.10033516
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The maglev train is an open-loop unstable, complex nonlinear system. Controllers are generally designed offline around a single operating state. In actual operation, however, the maglev system is influenced by various complex factors, and when the system model changes, a controller designed and tuned offline suffers severe performance degradation that threatens the system's stability. In response to this problem, this paper proposes a Data-Driven Optimal Controller (DDOC) based on Q-learning theory from reinforcement learning. The controller requires no model information about the controlled object; it computes iteratively from the system's real-time input and output data, offering fewer tuning parameters and fast convergence. When the system model changes during operation, the proposed method keeps the system accurately tracking the given reference signal by dynamically and rapidly adjusting the feedback gain matrix from real-time system data, thereby ensuring the stability and reliability of the control system.
Pages: 1289-1294
Number of Pages: 6
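The abstract describes a model-free optimal controller that learns a feedback gain iteratively from real-time input/output data via Q-learning. As a rough illustration of that idea only (not the authors' implementation), the sketch below runs policy-iteration Q-learning for a discrete-time LQR problem: the plant matrices, initial gain, and hyperparameters are illustrative assumptions, and the dynamics are used solely to generate data, standing in for measurements from the real system.

```python
# Minimal sketch of model-free policy-iteration Q-learning for LQR.
# Illustrative toy plant only; NOT the maglev model or the authors' code.
import numpy as np

def phi(z):
    """Quadratic basis of z such that z' H z = phi(z) @ theta for symmetric H."""
    p = len(z)
    feats = []
    for i in range(p):
        for j in range(i, p):
            feats.append(z[i] * z[j] if i == j else 2.0 * z[i] * z[j])
    return np.array(feats)

def theta_to_H(theta, p):
    """Rebuild the symmetric Q-function kernel H from its parameter vector."""
    H = np.zeros((p, p))
    idx = 0
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

def q_learning_lqr(A, B, Q, R, K0, iters=10, samples=200, noise=0.5, seed=0):
    """Learn a feedback gain from input/output data; A, B are used only to
    simulate measurements of the next state."""
    rng = np.random.default_rng(seed)
    n, m = B.shape
    p = n + m
    K = K0.copy()
    for _ in range(iters):
        Phi, c = [], []
        x = rng.standard_normal(n)
        for _ in range(samples):
            u = -K @ x + noise * rng.standard_normal(m)     # exploration noise
            x_next = A @ x + B @ u                          # "measured" next state
            z = np.concatenate([x, u])
            z_next = np.concatenate([x_next, -K @ x_next])  # follow current policy
            Phi.append(phi(z) - phi(z_next))                # Bellman residual regressor
            c.append(x @ Q @ x + u @ R @ u)                 # one-step cost
            x = x_next if np.linalg.norm(x_next) < 1e3 else rng.standard_normal(n)
        # Policy evaluation: least-squares fit of the Q-function kernel H
        theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
        H = theta_to_H(theta, p)
        Huu, Hux = H[n:, n:], H[n:, :n]
        K = np.linalg.solve(Huu, Hux)                       # policy improvement
    return K

if __name__ == "__main__":
    # Hypothetical double-integrator-like plant, purely for illustration.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q, R = np.eye(2), np.eye(1)
    K0 = np.array([[1.0, 1.0]])   # any stabilizing initial gain
    print("learned gain K =", q_learning_lqr(A, B, Q, R, K0))
```

The exploration noise keeps the collected data persistently exciting; without it the least-squares step cannot uniquely identify the Q-function kernel, and the gain update would stall.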