A Reinforcement Learning Scheme of Fuzzy Rules with Reduced Conditions

Cited by: 0
Authors
Kawakami, Hiroshi [1 ]
Katai, Osamu [1 ]
Konishi, Tadataka [2 ]
Affiliations
[1] Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Kyoto 606-8501, Japan
[2] Department of Information Technology, Faculty of Engineering, Okayama University, 3-1-1 Tsushima-Naka, Okayama 700-8530, Japan
Keywords
Fuzzy control; Fuzzy inference; Fuzzy rules
DOI
10.20965/jaciii.2000.p0146
Abstract
This paper proposes a new method of Q-learning for the case where the states (conditions) and actions of systems are assumed to be continuous. The components of Q-tables are interpolated by fuzzy inference. The initial set of fuzzy rules is made of all combinations of conditions and actions relevant to the problem. Each rule is then associated with a value by which the Q-values of condition/action pairs are estimated. These values are revised by the Q-learning algorithm so as to make the fuzzy rule system effective. Although this framework may require a huge number of initial fuzzy rules, we show that their number can be reduced considerably by adopting what we call Condition-Reduced Fuzzy Rules (CRFR). The antecedent part of a CRFR consists of all actions and a selected subset of conditions, and its consequent is set to its Q-value. Finally, experimental results show that controllers with CRFRs perform as well as the system with the most detailed fuzzy control rules, while the total number of parameters that have to be revised through the whole learning process is considerably reduced, and the number of parameters revised at each step of learning is increased. © Fuji Technology Press Ltd. Creative Commons CC BY-ND: This is an Open Access article distributed under the terms of the Creative Commons Attribution-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nd/4.0/).
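The abstract's core idea, fuzzy interpolation of Q-table components with rule values updated by Q-learning, can be illustrated with a generic fuzzy Q-learning sketch. This is not the paper's exact formulation (and does not implement the CRFR reduction); the triangular memberships, rule centers, and learning parameters below are illustrative assumptions only.

```python
import numpy as np

def triangular(x, center, width):
    """Triangular membership function centered at `center` (assumed shape)."""
    return max(0.0, 1.0 - abs(x - center) / width)

centers = np.array([0.0, 0.5, 1.0])      # rule condition centers (assumed)
width = 0.5                              # shared membership width (assumed)
n_actions = 2
q = np.zeros((len(centers), n_actions))  # one value per (rule, action) pair

def memberships(s):
    """Normalized firing strengths of all rules at continuous state s."""
    mu = np.array([triangular(s, c, width) for c in centers])
    return mu / mu.sum()

def q_value(s, a):
    """Q(s, a) interpolated by fuzzy inference over the rule values."""
    return memberships(s) @ q[:, a]

def update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Distribute the temporal-difference error over all firing rules."""
    td = r + gamma * max(q_value(s_next, b) for b in range(n_actions)) - q_value(s, a)
    q[:, a] += alpha * memberships(s) * td

# One learning step from state 0.3, action 0, reward 1.0, next state 0.6:
update(0.3, 0, 1.0, 0.6)
# q[:, 0] is now [0.04, 0.06, 0.0]
```

Each rule stores a scalar value per action, and the continuous-state Q-value is the membership-weighted sum of those values, so the TD update touches every rule that fires at the current state, in proportion to its firing strength.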
Pages: 146-151
Related Papers
50 items total
  • [41] Reinforcement learning in the fuzzy classifier system
    Valenzuela-Rendon, M
    EXPERT SYSTEMS WITH APPLICATIONS, 1998, 14 (1-2) : 237 - 247
  • [42] Incorporating fuzzy logic to reinforcement learning
    Faria, G
    Romero, RAF
    NINTH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE 2000), VOLS 1 AND 2, 2000, : 847 - 852
  • [43] Fuzzy Rule Interpolation and Reinforcement Learning
    Vincze, David
    2017 IEEE 15TH INTERNATIONAL SYMPOSIUM ON APPLIED MACHINE INTELLIGENCE AND INFORMATICS (SAMI), 2017, : 173 - 178
  • [44] Policy gradient fuzzy reinforcement learning
    Wang, XN
    Xu, X
    He, HG
    PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2004, : 992 - 995
  • [45] Comparing Reinforcement Learning and Human Learning With the Game of Hidden Rules
    Pulick, Eric M.
    Menkov, Vladimir
    Mintz, Yonatan D.
    Kantor, Paul B.
    Bier, Vicki M.
    IEEE ACCESS, 2024, 12 : 65362 - 65372
  • [46] Learning to Control Two-Wheeled Self-Balancing Robot Using Reinforcement Learning Rules and Fuzzy Neural Networks
    Ruan, Xiaogang
    Cai, Jianxian
    Chen, Jing
    ICNC 2008: FOURTH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, VOL 4, PROCEEDINGS, 2008, : 395 - 398
  • [47] Fuzzy Q-Learning for generalization of reinforcement learning
    Berenji, HR
    FUZZ-IEEE '96 - PROCEEDINGS OF THE FIFTH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-3, 1996, : 2208 - 2214
  • [48] Fuzzy inductive logic programming: Learning fuzzy rules with their implication
    Serrurier, M
    Sudkamp, T
    Dubois, D
    Prade, H
    FUZZ-IEEE 2005: PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS: BIGGEST LITTLE CONFERENCE IN THE WORLD, 2005, : 613 - 618
  • [49] A slow reinforcement learning scheme for selective predation
    Tsoularis, A.
    JOURNAL OF BIOLOGICAL SYSTEMS, 2007, 15 (02) : 109 - 121
  • [50] Automating the Configuration of MapReduce: A Reinforcement Learning Scheme
    Mu, Ting-Yu
    Al-Fuqaha, Ala
    Salah, Khaled
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2020, 50 (11): : 4183 - 4196