Bayesian Optimization for Efficient Tuning of Visual Servo and Computed Torque Controllers in a Reinforcement Learning Scenario

Cited by: 0
Authors
Ribeiro, Eduardo G. [1 ]
Mendes, Raul Q. [1 ]
Terra, Marco H. [1 ]
Grassi Jr, Valdir [1 ]
Affiliations
[1] Univ Sao Paulo, Sao Carlos Sch Engn, Dept Elect & Comp Engn, Sao Carlos, Brazil
Funding
São Paulo Research Foundation (FAPESP);
Keywords
GLOBAL OPTIMIZATION;
DOI
10.1109/ICAR53236.2021.9659363
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Although the search for optimal parameters is a central concern in the design of control systems, this adjustment is generally not optimized when designing visual servo controllers. However, for a classic position-based visual servo controller, the choice of the proportional gain that multiplies the computed error may directly affect the system's performance, and may even lead to instability. On the other hand, adjusting such a parameter can be a time-consuming and laborious task. Thus, in this work, we propose to automate the search for the linear and angular gains of a visual servo controller through Bayesian optimization. We simulate the environment in MATLAB with a 7-DOF Kinova Gen3 robot in a reinforcement learning scenario, in which the designed cost function is evaluated directly on the robot. We demonstrate that Bayesian optimization is capable of finding the visual servo controller gains, as well as the robot's internal controller gains, with up to 13 and 14 times fewer iterations than an on-policy actor-critic model-free algorithm and a genetic algorithm, respectively. Furthermore, we show that the obtained controller performs better on several control performance metrics and in qualitative evaluations of the Cartesian and image spaces.
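The gain-tuning loop the abstract describes can be sketched as a minimal one-dimensional Bayesian optimization in Python: a Gaussian-process surrogate with an RBF kernel and a lower-confidence-bound acquisition searches for a proportional gain that minimizes a closed-loop cost. Note the `servo_cost` plant model, gain bounds, kernel length scale, and LCB acquisition below are all illustrative assumptions; the paper evaluates its cost directly on the simulated Kinova robot and does not specify these details in this record.

```python
import numpy as np

def servo_cost(lam, dt=0.05, steps=100):
    """Toy stand-in for the closed-loop cost: quadratic tracking error plus
    control effort for x[k+1] = x[k] + dt*u[k], with u[k] = -lam*x[k]."""
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -lam * x
        x = x + dt * u
        cost += x * x + 5e-3 * u * u
    return cost

def rbf(a, b, length=2.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def bayes_opt(cost, lo, hi, n_init=4, n_iter=12, kappa=2.0, seed=0):
    """Minimize `cost` on [lo, hi] with a GP surrogate and an LCB acquisition."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)              # initial random evaluations
    y = np.array([cost(x) for x in X])
    grid = np.linspace(lo, hi, 400)              # candidate gains for the acquisition
    for _ in range(n_iter):
        yn = (y - y.mean()) / (y.std() + 1e-9)   # normalize targets for the GP
        K = rbf(X, X) + 1e-6 * np.eye(len(X))    # jitter keeps K invertible
        Ks = rbf(X, grid)
        mu = Ks.T @ np.linalg.solve(K, yn)       # GP posterior mean on the grid
        v = np.linalg.solve(K, Ks)
        var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
        acq = mu - kappa * np.sqrt(var)          # lower confidence bound
        x_next = grid[np.argmin(acq)]            # most promising gain to try next
        X = np.append(X, x_next)
        y = np.append(y, cost(x_next))
    best = int(np.argmin(y))
    return X[best], y[best]

gain, best_cost = bayes_opt(servo_cost, 0.1, 20.0)
print(f"best gain {gain:.2f} with cost {best_cost:.3f}")
```

With only 16 cost evaluations in total, the surrogate concentrates its queries around the low-cost region, which is the sample-efficiency advantage the abstract claims over the actor-critic and genetic-algorithm baselines.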
Pages: 282-289
Page count: 8
Related Papers (50 in total)
  • [1] Tuning Legged Locomotion Controllers via Safe Bayesian Optimization
    Widmer, Daniel
    Kang, Dongho
    Sukhija, Bhavya
    Hubotter, Jonas
    Krause, Andreas
    Coros, Stelian
    CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229
  • [2] Tuning fuzzy PD and PI controllers using reinforcement learning
    Boubertakh, Hamid
    Tadjine, Mohamed
    Glorennec, Pierre-Yves
    Labiod, Salim
    ISA TRANSACTIONS, 2010, 49 (04) : 543 - 551
  • [3] REINFORCEMENT LEARNING FOR TUNING PARAMETERS OF CLOSED-LOOP CONTROLLERS
    Serafini, M. C.
    Rosales, N.
    Garelli, F.
    DIABETES TECHNOLOGY & THERAPEUTICS, 2021, 23 : A84 - A85
  • [4] Efficient tuning of Individual Pitch Control: A Bayesian Optimization Machine Learning approach
    Mulders, S. P.
    Pamososuryo, A. K.
    van Wingerden, J. W.
    SCIENCE OF MAKING TORQUE FROM WIND (TORQUE 2020), PTS 1-5, 2020, 1618
  • [5] Reinforcement Learning for Image-Based Visual Servo Control
    Dani, Ashwin P.
    Bhasin, Shubhendu
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 4358 - 4363
  • [6] Meta-reinforcement learning for the tuning of PI controllers: An offline approach
    McClement, Daniel G.
    Lawrence, Nathan P.
    Backstroem, Johan U.
    Loewen, Philip D.
    Forbes, Michael G.
    Gopaluni, R. Bhushan
    JOURNAL OF PROCESS CONTROL, 2022, 118 : 139 - 152
  • [7] Deep reinforcement learning with shallow controllers: An experimental application to PID tuning
    Lawrence, Nathan P.
    Forbes, Michael G.
    Loewen, Philip D.
    McClement, Daniel G.
    Backstrom, Johan U.
    Gopaluni, R. Bhushan
    CONTROL ENGINEERING PRACTICE, 2022, 121
  • [8] Tuning path tracking controllers for autonomous cars using reinforcement learning
    Carrasco, Ana Vilaca
    Sequeira, Joao Silva
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [9] Tuning of PID Controllers Using Reinforcement Learning for Nonlinear System Control
    Bujgoi, Gheorghe
    Sendrescu, Dorin
    PROCESSES, 2025, 13 (03)