A Reinforcement Learning Scheme of Fuzzy Rules with Reduced Conditions

Cited: 0
Authors
Kawakami, Hiroshi [1 ]
Katai, Osamu [1 ]
Konishi, Tadataka [2 ]
Affiliations
[1] Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Kyoto 606-8501, Japan
[2] Department of Information Technology, Faculty of Engineering, Okayama University, 3-1-1 Tsushima-Naka, Okayama,700-8530, Japan
Keywords
Fuzzy control - Fuzzy inference - Fuzzy rules;
DOI
10.20965/jaciii.2000.p0146
Abstract
This paper proposes a new Q-learning method for the case where the states (conditions) and actions of systems are continuous. The components of Q-tables are interpolated by fuzzy inference. The initial set of fuzzy rules is made of all combinations of conditions and actions relevant to the problem. Each rule is then associated with a value by which the Q-values of condition/action pairs are estimated. The values are revised by the Q-learning algorithm so as to make the fuzzy rule system effective. Although this framework may require a huge number of initial fuzzy rules, we show that their number can be reduced considerably by adopting what we call Condition Reduced Fuzzy Rules (CRFR). The antecedent part of a CRFR consists of all actions and the selected conditions, and its consequent is set to its Q-value. Finally, experimental results show that controllers with CRFRs perform as well as the system with the most detailed fuzzy control rules, while the total number of parameters that must be revised over the whole learning process is considerably reduced and the number of parameters revised at each learning step is increased. © Fuji Technology Press Ltd. Creative Commons CC BY-ND: This is an Open Access article distributed under the terms of the Creative Commons Attribution-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nd/4.0/).
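The abstract's core idea (a Q-table whose entries are interpolated by fuzzy inference, with one revisable value per rule, updated by Q-learning) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual method: the class and function names, the triangular membership functions, and the parameter values are all assumptions introduced here.

```python
import numpy as np

def memberships(x, centres, width=1.0):
    """Normalized triangular membership degrees of state x to each rule centre."""
    mu = np.maximum(0.0, 1.0 - np.abs(x - centres) / width)
    s = mu.sum()
    return mu / s if s > 0 else np.ones_like(mu) / len(mu)

class FuzzyQ:
    """Hypothetical fuzzy-interpolated Q-learner over a 1-D continuous state."""

    def __init__(self, centres, n_actions, alpha=0.1, gamma=0.9):
        self.centres = np.asarray(centres, dtype=float)
        # One revisable value per (rule, action) pair, as in the abstract.
        self.q = np.zeros((len(self.centres), n_actions))
        self.alpha, self.gamma = alpha, gamma

    def q_value(self, x, a):
        # Q(x, a) is the membership-weighted mix of the firing rules' values.
        return memberships(x, self.centres) @ self.q[:, a]

    def update(self, x, a, reward, x_next):
        mu = memberships(x, self.centres)
        best_next = max(self.q_value(x_next, b) for b in range(self.q.shape[1]))
        td_error = reward + self.gamma * best_next - self.q_value(x, a)
        # Every firing rule is revised in proportion to its membership degree,
        # so a single learning step touches several rule values at once.
        self.q[:, a] += self.alpha * mu * td_error
```

Because each update spreads the temporal-difference error across all firing rules, coarser rule sets (fewer centres, in the spirit of condition reduction) revise more of their parameters per step while holding fewer parameters overall, matching the trade-off described at the end of the abstract.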
Pages: 146-151
Related Papers
50 records
  • [21] On convergence of fuzzy reinforcement learning
    Berenji, HR
    Vengerov, D
    10TH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-3: MEETING THE GRAND CHALLENGE: MACHINES THAT SERVE PEOPLE, 2001, : 618 - 621
  • [22] Construction of dynamic fuzzy if-then rules through genetic reinforcement learning for temporal problems solving
    Juang, CF
    JOINT 9TH IFSA WORLD CONGRESS AND 20TH NAFIPS INTERNATIONAL CONFERENCE, PROCEEDINGS, VOLS. 1-5, 2001, : 2341 - 2346
  • [23] Fuzzy logic for reservoir operation with reduced rules
    Sivapragasam, C.
    Sugendran, P.
    Marimuthu, M.
    Seenivasakan, S.
    Vasudevan, G.
    ENVIRONMENTAL PROGRESS, 2008, 27 (01): : 98 - 103
  • [24] Learning fuzzy association rules and associative classification rules
    Han, Jianchao
    2006 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-5, 2006, : 1454 - 1459
  • [25] Fuzzy neural networks for learning fuzzy IF-THEN rules
    Kuo, RJ
    Wu, PC
    Wang, CP
    APPLIED ARTIFICIAL INTELLIGENCE, 2000, 14 (06) : 539 - 563
  • [26] Fuzzy Formation Control for Nonlinear Multiagent Systems With Two Time Scales: A Reinforcement Learning Scheme
    Yang, Qing
    Wang, Jing
    Shen, Hao
    Park, Ju H.
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2024, 32 (12) : 7190 - 7195
  • [27] Fuzzy OLAP association rules mining-based modular reinforcement learning approach for multiagent systems
    Kaya, M
    Alhajj, R
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2005, 35 (02): : 326 - 338
  • [28] Interactive multiagent reinforcement learning with motivation rules
    Yamaguchi, T
    Marukawa, R
    ICCIMA 2001: FOURTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND MULTIMEDIA APPLICATIONS, PROCEEDINGS, 2001, : 128 - 132
  • [29] Classifying Continuous Classes with Reinforcement Learning RULES
    ElGibreen, Hebah
    Aksoy, Mehmet Sabih
    Intelligent Information and Database Systems, Pt II, 2015, 9012 : 116 - 127
  • [30] LEARNING RULES FOR A FUZZY INFERENCE MODEL
    DECAMPOS, LM
    MORAL, S
    FUZZY SETS AND SYSTEMS, 1993, 59 (03) : 247 - 257