Granular computing in actor-critic learning

Cited by: 0
Authors
Peters, James F. [1 ]
Affiliation
[1] Univ Manitoba, Dept Elect & Comp Engn, Winnipeg, MB R3T 5V6, Canada
Keywords
DOI
10.1109/FOCI.2007.372148
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
The problem considered in this paper is how to guide actor-critic learning based on information granules that reflect knowledge about acceptable behavior patterns. The solution to this problem stems from approximation spaces, which were introduced by Zdzisław Pawlak starting in the early 1980s and which provide a basis for the perception of objects that are imperfectly known. It was also observed by Ewa Orlowska in 1982 that approximation spaces serve as a formal counterpart of perception, or observation. In our case, approximation spaces provide a ground for deriving pattern-based behaviors as well as information granules that can be used to influence the policy structure of an actor in a beneficial way. This paper includes the results of a recent study of swarm behavior by collections of biologically inspired bots carried out in the context of an artificial ecosystem. This ecosystem has an ethological basis that makes it possible to observe and explain the behavior of biological organisms in a way that carries over into the study of actor-critic learning by interacting robotic devices. The contribution of this article is a framework for actor-critic learning defined in the context of approximation spaces and information granulation.
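To make the abstract's idea concrete, the following is a minimal, hypothetical sketch (not the paper's exact algorithm): a standard one-step actor-critic on a toy chain task in which the actor update is scaled by a "rough coverage" value computed over a window of recently observed (state, action) behavior patterns. The acceptable-behavior set, the chain environment, and the coverage-weighting rule are all illustrative assumptions standing in for the information granules the paper derives from approximation spaces.

```python
import math
import random
from collections import deque, defaultdict

# Hypothetical sketch: one-step actor-critic on a 1-D chain, with the actor
# update scaled by a rough-coverage weight over recent behaviour patterns.
# ACCEPTABLE stands in for an information granule of acceptable behaviours.

N_STATES = 6                                # states 0..5, reward at the right end
GAMMA, ALPHA_V, ALPHA_PI = 0.95, 0.10, 0.05  # discount, critic step, actor step

V = defaultdict(float)                      # critic: state-value estimates
H = defaultdict(float)                      # actor: action preferences h(s, a)
window = deque(maxlen=20)                   # recent behaviour patterns (s, a)
ACCEPTABLE = {(s, 1) for s in range(N_STATES)}  # assumed granule: "move right"

def policy(s):
    """Softmax over action preferences; returns sampled action and probabilities."""
    prefs = [H[(s, a)] for a in (0, 1)]
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    probs = [e / z for e in exps]
    return (0 if random.random() < probs[0] else 1), probs

def rough_coverage():
    """Fraction of recent behaviour patterns inside the acceptable granule;
    a crude stand-in for coverage of a behaviour set by its lower approximation."""
    if not window:
        return 1.0
    return sum(1 for b in window if b in ACCEPTABLE) / len(window)

def step(s, a):
    """Chain dynamics: action 1 moves right, action 0 moves left; reward 1 at the end."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    for _ in range(50):
        a, probs = policy(s)
        s2, r = step(s, a)
        window.append((s, a))
        td = r + GAMMA * V[s2] - V[s]       # critic's TD error
        V[s] += ALPHA_V * td                # critic update
        w = rough_coverage()                # granule-based weight in [0, 1]
        # Simplified actor update (chosen action only), scaled by the coverage
        # weight: behaviour consistent with the acceptable granule reinforces
        # the policy more strongly.
        H[(s, a)] += ALPHA_PI * w * td * (1.0 - probs[a])
        s = s2
        if r > 0:
            break

print("V along the chain:", [round(V[s], 2) for s in range(N_STATES)])
```

The coverage weight here simply damps policy updates produced by behaviour that falls outside the acceptable set; the paper's approximation-space machinery is considerably richer, so this should be read only as an orientation to the actor-critic side of the framework.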
Pages: 59-64
Page count: 6