Reinforcement Learning-based control using Q-learning and gravitational search algorithm with experimental validation on a nonlinear servo system

Cited by: 121
Authors:
Zamfirache, Iuliu Alexandru [1 ]
Precup, Radu-Emil [1 ]
Roman, Raul-Cristian [1 ]
Petriu, Emil M. [2 ]
Affiliations:
[1] Politehn Univ Timisoara, Dept Automat & Appl Informat, Bd V Parvan 2, Timisoara 300223, Romania
[2] Univ Ottawa, Sch Elect Engn & Comp Sci, 800 King Edward, Ottawa, ON K1N 6N5, Canada
Funding:
Natural Sciences and Engineering Research Council of Canada;
Keywords:
Gravitational search algorithm; NN training; Optimal reference tracking control; Q-learning; Reinforcement learning; Servo systems; PARTICLE SWARM OPTIMIZATION; FUZZY-LOGIC; STABILITY; DYNAMICS; DESIGN;
DOI:
10.1016/j.ins.2021.10.070
Chinese Library Classification (CLC):
TP [Automation Technology, Computer Technology];
Discipline Classification Code:
0812;
Abstract:
This paper presents a novel Reinforcement Learning (RL)-based control approach that combines a Deep Q-Learning (DQL) algorithm with a metaheuristic Gravitational Search Algorithm (GSA). The GSA is employed to initialize the weights and biases of the Neural Network (NN) involved in DQL in order to avoid the instability that is the main drawback of traditional randomly initialized NNs. The quality of a particular set of weights and biases is measured at each iteration of the GSA-based initialization using a fitness function aimed at achieving the predefined optimal control or learning objective. The data generated during the RL process is used to train an NN-based controller that can autonomously achieve the optimal reference tracking control objective. The proposed approach is compared with similar techniques that use different algorithms in the initialization step, namely the traditional random algorithm, the Grey Wolf Optimizer algorithm, and the Particle Swarm Optimization algorithm. The NN-based controllers obtained with each of these techniques are compared using performance indices specific to optimal control, such as settling time, rise time, peak time, overshoot, and minimum cost function value. Real-time experiments are conducted to validate and test the proposed approach in the framework of the optimal reference tracking control of a nonlinear position servo system. The experimental results show the superiority of this approach over the other three competing approaches. (c) 2021 Elsevier Inc. All rights reserved.
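The core idea described in the abstract, using a GSA to select the Q-network's initial weights and biases by minimizing a tracking-oriented fitness function before DQL training starts, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the authors' implementation: the network size, the toy servo surrogate, the fitness definition, and the GSA hyper-parameters are not taken from the paper.

```python
# Minimal sketch (not the authors' code): GSA-based initialization of a small
# Q-network, as an alternative to random initialization. All shapes, dynamics,
# and hyper-parameters below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_ACTIONS, HIDDEN = 3, 5, 16          # assumed Q-network shape
N_WEIGHTS = STATE_DIM * HIDDEN + HIDDEN + HIDDEN * N_ACTIONS + N_ACTIONS

def unpack(theta):
    """Split a flat parameter vector into the two layers of the Q-network."""
    i = 0
    W1 = theta[i:i + STATE_DIM * HIDDEN].reshape(STATE_DIM, HIDDEN); i += STATE_DIM * HIDDEN
    b1 = theta[i:i + HIDDEN]; i += HIDDEN
    W2 = theta[i:i + HIDDEN * N_ACTIONS].reshape(HIDDEN, N_ACTIONS); i += HIDDEN * N_ACTIONS
    b2 = theta[i:i + N_ACTIONS]
    return W1, b1, W2, b2

def q_values(theta, s):
    W1, b1, W2, b2 = unpack(theta)
    return np.tanh(s @ W1 + b1) @ W2 + b2

def fitness(theta, episodes=5, steps=50):
    """Assumed fitness: accumulated squared tracking error of a toy first-order
    servo surrogate driven greedily by the Q-network (lower is better)."""
    cost = 0.0
    for _ in range(episodes):
        pos, vel, ref = 0.0, 0.0, rng.uniform(-1, 1)
        for _ in range(steps):
            s = np.array([pos, vel, ref - pos])
            u = np.linspace(-1, 1, N_ACTIONS)[np.argmax(q_values(theta, s))]
            vel += 0.1 * (u - 0.5 * vel)          # crude servo dynamics surrogate
            pos += 0.1 * vel
            cost += (ref - pos) ** 2
    return cost

def gsa_init(n_agents=20, n_iter=30, g0=100.0, alpha=20.0):
    """Plain GSA over flat weight vectors; returns the best initial parameters found."""
    X = rng.uniform(-1, 1, (n_agents, N_WEIGHTS))  # agent positions = candidate weight sets
    V = np.zeros_like(X)
    for t in range(n_iter):
        f = np.array([fitness(x) for x in X])
        best, worst = f.min(), f.max()
        m = (worst - f) / (worst - best + 1e-12)   # masses: better fitness -> heavier agent
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / n_iter)       # decaying gravitational constant
        acc = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                d = np.linalg.norm(X[i] - X[j]) + 1e-12
                acc[i] += rng.random() * G * M[j] * (X[j] - X[i]) / d
        V = rng.random((n_agents, 1)) * V + acc    # stochastic velocity update
        X = X + V
    f = np.array([fitness(x) for x in X])
    return X[np.argmin(f)]

theta0 = gsa_init()
print("Fitness of GSA-selected initial weights:", fitness(theta0))
```

In this sketch the vector `theta0` would then be used as the starting point of ordinary DQL training, which is the role the abstract assigns to the GSA-based initialization step; the competing variants mentioned in the abstract would simply swap `gsa_init` for random, GWO-based, or PSO-based initialization.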
Pages: 99-120
Number of pages: 22
Related Papers
50 records in total
  • [21] A Hand Gesture Recognition System Using EMG and Reinforcement Learning: A Q-Learning Approach
    Vasconez, Juan Pablo
    Barona Lopez, Lorena Isabel
    Valdivieso Caraguay, Angel Leonardo
    Cruz, Patricio J.
    Alvarez, Robin
    Benalcazar, Marco E.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV, 2021, 12894 : 580 - 591
  • [22] Adaptive Reinforcement Q-Learning Algorithm for Swarm-Robot System using Pheromone Mechanism
    Shi, Zhiguo
    Tu, Jun
    Li, Yuankai
    Wang, Zeying
    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO), 2013, : 952 - 957
  • [23] Reinforcement Learning Control of Hydraulic Servo System Based on TD3 Algorithm
    Yuan, Xiaoming
    Wang, Yu
    Zhang, Ruicong
    Gao, Qiang
    Zhou, Zhuangding
    Zhou, Rulin
    Yin, Fengyuan
    MACHINES, 2022, 10 (12)
  • [24] Output Feedback Optimal Tracking Control Using Reinforcement Q-Learning
    Rizvi, Syed Ali Asad
    Lin, Zongli
    2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 2018, : 3423 - 3428
  • [25] Reinforcement learning inspired forwarding strategy for information centric networks using Q-learning algorithm
    Delvadia, Krishna
    Dutta, Nitul
    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2024, 37 (06)
  • [26] Q-LEARNING BASED CONTROL ALGORITHM FOR HTTP ADAPTIVE STREAMING
    Martin, Virginia
    Cabrera, Julian
    Garcia, Narciso
    2015 VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2015,
  • [27] Behavior Control Algorithm for Mobile Robot Based on Q-Learning
    Yang, Shiqiang
    Li, Congxiao
    2017 INTERNATIONAL CONFERENCE ON COMPUTER NETWORK, ELECTRONIC AND AUTOMATION (ICCNEA), 2017, : 45 - 48
  • [28] Control the population of free viruses in nonlinear uncertain HIV system using Q-learning
    Hossein Gholizade-Narm
    Amin Noori
    International Journal of Machine Learning and Cybernetics, 2018, 9 : 1169 - 1179
  • [29] Control the population of free viruses in nonlinear uncertain HIV system using Q-learning
    Gholizade-Narm, Hossein
    Noori, Amin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2018, 9 (07) : 1169 - 1179
  • [30] Path planning for autonomous mobile robot using transfer learning-based Q-learning
    Wu, Shengshuai
    Hu, Jinwen
    Zhao, Chunhui
    Pan, Quan
    PROCEEDINGS OF 2020 3RD INTERNATIONAL CONFERENCE ON UNMANNED SYSTEMS (ICUS), 2020, : 88 - 93