Interpretable Control by Reinforcement Learning

Cited by: 3
Authors
Hein, Daniel [1 ]
Limmer, Steffen [1 ]
Runkler, Thomas A. [1 ]
Affiliation
[1] Siemens AG, Corp Technol, Otto Hahn Ring 6, D-81739 Munich, Germany
Source
IFAC PAPERSONLINE | 2020, Vol. 53, No. 2
Keywords
Human supervised control; learning control; LQR; PID; fuzzy control; PARTICLE SWARM OPTIMIZATION;
DOI
10.1016/j.ifacol.2020.12.2277
CLC classification number
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
In this paper, three recently introduced reinforcement learning (RL) methods are used to generate human-interpretable policies for the cart-pole balancing benchmark. The novel RL methods learn human-interpretable policies in the form of compact fuzzy controllers and simple algebraic equations. The representations as well as the achieved control performances are compared with two classical controller design methods and three non-interpretable RL methods. All eight methods utilize the same previously generated data batch and produce their controllers offline, without interaction with the real benchmark dynamics. The experiments show that the novel RL methods are able to automatically generate well-performing policies that are at the same time human-interpretable. Furthermore, one of the methods is applied to automatically learn an equation-based policy for a hardware cart-pole demonstrator using only human-player-generated batch data. The solution generated on the first attempt already represents a successful balancing policy, which demonstrates the method's applicability to real-world problems. Copyright (C) 2020 The Authors.
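To make concrete what an "interpretable policy in the form of a simple algebraic equation" can look like, the sketch below shows a linear state-feedback law for cart-pole balancing, in the spirit of the classical LQR/PID baselines the abstract mentions. The gains, saturation limit, and function name are illustrative assumptions for this sketch, not coefficients from the paper.

```python
def balancing_force(x, x_dot, theta, theta_dot):
    """Hypothetical equation-based cart-pole policy: linear state feedback.

    x, x_dot:         cart position (m) and velocity (m/s)
    theta, theta_dot: pole angle (rad, 0 = upright) and angular velocity (rad/s)
    Returns a horizontal force on the cart, saturated to +/- 10 N.
    """
    # Illustrative gains only; a learned policy would supply its own values.
    k_x, k_xdot, k_theta, k_thetadot = 1.0, 2.0, 20.0, 3.0
    u = k_x * x + k_xdot * x_dot + k_theta * theta + k_thetadot * theta_dot
    return max(-10.0, min(10.0, u))  # actuator saturation
```

Because the whole policy is one short closed-form expression, a human can read off how each state variable influences the control force, which is the sense of "human-interpretable" used in the abstract.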
Pages: 8082-8089
Number of pages: 8