A reinforcement learning with condition reduced fuzzy rules

Cited: 0
Authors
Kawakami, H
Katai, O
Konishi, T
Affiliations
[1] Kyoto Univ, Kyoto 6068501, Japan
[2] Okayama Univ, Okayama 7008530, Japan
Source
Keywords
Q-learning; fuzzy rule; interpolation; reduced condition;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a new Q-learning method for the case where the states (conditions) and actions of the system are assumed to be continuous. The entries of the Q-table are interpolated by fuzzy inference. The initial set of fuzzy rules consists of all combinations of conditions and actions relevant to the problem. Each rule is then associated with a value from which the Q-value of a condition/action pair is estimated. These values are revised by the Q-learning algorithm so as to make the fuzzy rule system effective. Although this framework may require a huge number of initial fuzzy rules, we show that a considerable reduction can be achieved by using what we call "Condition Reduced Fuzzy Rules (CRFR)". The antecedent part of a CRFR consists of all the actions and a selected subset of the conditions, and its consequent is set to its Q-value. Finally, experimental results show that controllers with CRFRs perform equivalently to the system with the most detailed fuzzy control rules, while the total number of parameters that must be revised over the whole learning process is reduced and the number of parameters revised at each learning step is increased.
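The abstract describes the interpolation and update scheme only in outline. As a rough illustration, the following is a minimal sketch of fuzzy-interpolated Q-learning under assumed details (triangular membership functions, one continuous state and one continuous action dimension, and a TD update distributed over rules by firing strength). It covers only the full, unreduced rule table, since the CRFR selection of conditions is not specified in this record.

```python
import numpy as np

# Illustrative sketch only: triangular membership functions, a single
# continuous state and action dimension, and the membership-weighted TD
# update below are assumptions, not details taken from the paper. The rule
# table here is the full (unreduced) condition/action grid; the CRFR
# condition-reduction step is not implemented.

def triangular(x, centre, width):
    """Membership degree of x in a triangular fuzzy set centred at `centre`."""
    return max(0.0, 1.0 - abs(x - centre) / width)

class FuzzyQLearner:
    def __init__(self, state_centres, action_centres, width, alpha=0.1, gamma=0.9):
        self.state_centres = state_centres    # centres of the condition fuzzy sets
        self.action_centres = action_centres  # centres of the action fuzzy sets
        self.width = width
        self.alpha, self.gamma = alpha, gamma
        # One adjustable value per (condition, action) rule.
        self.q = np.zeros((len(state_centres), len(action_centres)))

    def firing_strengths(self, s, a):
        """Normalised firing strength of every rule for the pair (s, a)."""
        mu_s = np.array([triangular(s, c, self.width) for c in self.state_centres])
        mu_a = np.array([triangular(a, c, self.width) for c in self.action_centres])
        w = np.outer(mu_s, mu_a)
        return w / (w.sum() + 1e-12)

    def q_value(self, s, a):
        """Q(s, a) interpolated as the firing-strength-weighted sum of rule values."""
        return float((self.firing_strengths(s, a) * self.q).sum())

    def update(self, s, a, r, s_next):
        """Spread the TD error over the rules in proportion to their firing strength."""
        best_next = max(self.q_value(s_next, b) for b in self.action_centres)
        td_error = r + self.gamma * best_next - self.q_value(s, a)
        self.q += self.alpha * td_error * self.firing_strengths(s, a)

# Minimal usage with made-up centres and a single transition:
agent = FuzzyQLearner(state_centres=[0.0, 0.5, 1.0],
                      action_centres=[-1.0, 0.0, 1.0], width=0.5)
agent.update(s=0.3, a=0.5, r=1.0, s_next=0.4)
print(agent.q_value(0.3, 0.5))
```

In this reading, the rule values play the role of the Q-table components mentioned in the abstract, and the fuzzy interpolation lets a finite rule set cover continuous conditions and actions; how the authors actually select the reduced condition sets for CRFRs is not recoverable from this record.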
Pages: 198-205
Page count: 8
Related papers
50 records in total
  • [1] A Reinforcement Learning Scheme of Fuzzy Rules with Reduced Conditions
    Kawakami, Hiroshi
    Katai, Osamu
    Konishi, Tadataka
    Journal of Advanced Computational Intelligence and Intelligent Informatics, 2000, 4 (02) : 146 - 151
  • [2] An application of decision rules in reinforcement learning
    Michalski, A
    CONTROL AND CYBERNETICS, 2000, 29 (04): : 989 - 996
  • [3] Reinforcement learning rules in a repeated game
    Bell A.M.
    Computational Economics, 2001, 18 (01) : 89 - 110
  • [4] Refining linear fuzzy rules by reinforcement learning
    Berenji, HR
    Khedkar, PS
    Malkani, A
    FUZZ-IEEE '96 - PROCEEDINGS OF THE FIFTH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-3, 1996, : 1750 - 1756
  • [5] Interactive multiagent reinforcement learning with motivation rules
    Yamaguchi, T
    Marukawa, R
    ICCIMA 2001: FOURTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND MULTIMEDIA APPLICATIONS, PROCEEDINGS, 2001, : 128 - 132
  • [6] Classifying Continuous Classes with Reinforcement Learning RULES
    ElGibreen, Hebah
    Aksoy, Mehmet Sabih
    Intelligent Information and Database Systems, Pt II, 2015, 9012 : 116 - 127
  • [7] Comparing Reinforcement Learning and Human Learning With the Game of Hidden Rules
    Pulick, Eric M.
    Menkov, Vladimir
    Mintz, Yonatan D.
    Kantor, Paul B.
    Bier, Vicki M.
    IEEE ACCESS, 2024, 12 : 65362 - 65372
  • [8] Reinforcement learning of simplex pivot rules: a proof of concept
    Suriyanarayana, Varun
    Tavaslioglu, Onur
    Patel, Ankit B.
    Schaefer, Andrew J.
    OPTIMIZATION LETTERS, 2022, 16 (08) : 2513 - 2525
  • [9] BASED ON THE REINFORCEMENT LEARNING ASSOCIATION RULES RECOMMENDATION STUDY
    Wang, Jinqiao
    Yang, Qing
    Sun, JunLi
    Zhu, Li
    PROCEEDINGS OF THE 2010 INTERNATIONAL CONFERENCE ON MECHANICAL, INDUSTRIAL, AND MANUFACTURING TECHNOLOGIES (MIMT 2010), 2010, : 125 - 130
  • [10] Based on the Reinforcement learning Association Rules Recommendation study
    Wang, Jinqiao
    Yang, Qing
    Zhu, Li
    Sun, JunLi
    2009 FIFTH INTERNATIONAL CONFERENCE ON SEMANTICS, KNOWLEDGE AND GRID (SKG 2009), 2009, : 392 - 395