Learning classifier systems from a reinforcement learning perspective

Cited by: 31
Authors
P. L. Lanzi
Affiliation
[1] Dipartimento di Elettronica ed Informazione, Politecnico di Milano, Piazza Leonardo da Vinci n. 32, I-20133 Milano; e-mail: pierluca.lanzi@polimi.it
Keywords
Genetic algorithms, Reinforcement learning, XCS, Q-learning
DOI
10.1007/s005000100113
Abstract
We analyze learning classifier systems in the light of tabular reinforcement learning. We note that although genetic algorithms are the most distinctive feature of learning classifier systems, it is not clear whether genetic algorithms are important to learning classifier systems. In fact, there are models which are strongly based on evolutionary computation (e.g., Wilson's XCS) and others which do not exploit evolutionary computation at all (e.g., Stolzmann's ACS). To find some clarifications, we try to develop learning classifier systems "from scratch", i.e., starting from one of the best-known reinforcement learning techniques, Q-learning. We first consider the basics of reinforcement learning: a problem modeled as a Markov decision process and tabular Q-learning. We introduce a formal framework to define a general-purpose rule-based representation which we use to implement tabular Q-learning. We formally define generalization within rules and discuss the possible approaches to extend our rule-based Q-learning with generalization capabilities. We suggest that genetic algorithms are probably the most general approach for adding generalization, although they might not be the only solution.
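The tabular Q-learning that the abstract takes as its starting point can be sketched in a few lines. The toy corridor MDP, the function name, and all parameter values below are illustrative assumptions, not from the paper; only the update rule Q(s,a) ← Q(s,a) + α·(r + γ·maxₐ′ Q(s′,a′) − Q(s,a)) is the standard algorithm being referenced.

```python
import random

def q_learning_corridor(n_states=5, episodes=500, alpha=0.1,
                        gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor MDP (illustrative sketch,
    not the paper's rule-based formulation). States 0..n_states-1;
    actions 0 (left) and 1 (right); reward 1.0 on reaching the right end."""
    rng = random.Random(seed)
    # The Q-table: one entry per (state, action) pair, initialized to zero.
    q = {(s, a): 0.0 for s in range(n_states) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q
```

After training, the greedy policy read off the table moves right everywhere, e.g. `q[(0, 1)] > q[(0, 0)]`. The paper's contribution is to replace this explicit table with a rule-based representation that can later be generalized.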
Pages: 162-170 (8 pages)
Related papers (50 in total)
  • [1] Anticipatory Learning Classifier Systems and Factored Reinforcement Learning
    Sigaud, Olivier
    Butz, Martin V.
    Kozlova, Olga
    Meyer, Christophe
    [J]. ANTICIPATORY BEHAVIOR IN ADAPTIVE LEARNING SYSTEMS: FROM PSYCHOLOGICAL THEORIES TO ARTIFICIAL COGNITIVE SYSTEMS, 2009, 5499 : 321 - +
  • [2] Pittsburgh Learning Classifier Systems for Explainable Reinforcement Learning: Comparing with XCS
    Bishop, Jordan T.
    Gallagher, Marcus
    Browne, Will N.
    [J]. PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO'22), 2022, : 323 - 331
  • [3] Reinforcement Learning in Continuous Spaces by Using Learning Fuzzy Classifier Systems
    Chen, Gang
    Douch, Colin
    Zhang, Mengjie
    Pang, Shaoning
    [J]. NEURAL INFORMATION PROCESSING, PT II, 2015, 9490 : 320 - 328
  • [4] Analyzing Strength-Based Classifier System from Reinforcement Learning Perspective
    Wada, Atsushi
    Takadama, Keiki
    [J]. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2009, 13 (06) : 631 - 639
  • [5] Two classifier systems for reinforcement learning of motion patterns
    Yamada, K
    Svinin, M
    Ohkura, K
    Hosoe, S
    Ueda, K
    [J]. MOBILE ROBOT TECHNOLOGY, PROCEEDINGS, 2001, : 1 - 6
  • [6] Comparing Reinforcement Learning algorithms applied to crisp and fuzzy Learning Classifier Systems
    Bonarini, A
    [J]. GECCO-99: PROCEEDINGS OF THE GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, 1999, : 52 - 59
  • [7] REINFORCEMENT LEARNING WITH CLASSIFIER SYSTEMS - ADAPTIVE DEFAULT HIERARCHY FORMATION
    SMITH, RE
    GOLDBERG, DE
    [J]. APPLIED ARTIFICIAL INTELLIGENCE, 1992, 6 (01) : 79 - 102
  • [8] An action-oriented perspective of learning in classifier systems
    Weiss, G
    [J]. JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE, 1996, 8 (01) : 43 - 62
  • [9] Learning cooperation from classifier systems
    Tran, TH
    Sanza, C
    Duthen, Y
    [J]. COMPUTATIONAL INTELLIGENCE AND SECURITY, PT 1, PROCEEDINGS, 2005, 3801 : 329 - 336
  • [10] Robot Reinforcement Learning Based on Learning Classifier System
    Shao, Jie
    Yang, Jing-yu
    [J]. ADVANCED INTELLIGENT COMPUTING THEORIES AND APPLICATIONS, 2010, 93 : 200 - 207