Toward Generalization of Automated Temporal Abstraction to Partially Observable Reinforcement Learning

Cited by: 8
Authors
Cilden, Erkin [1 ]
Polat, Faruk [1 ]
Affiliation
[1] Middle East Technical University, Department of Computer Engineering, TR-06531 Ankara, Turkey
Keywords
Learning abstractions; partially observable Markov decision process (POMDP); reinforcement learning (RL)
DOI
10.1109/TCYB.2014.2352038
CLC Number (Chinese Library Classification)
TP [automation and computer technology]
Discipline Classification Code
0812
Abstract
Temporal abstraction for reinforcement learning (RL) aims to decrease learning time by exploiting repeated sub-policy patterns in the learning task. Automatic extraction of abstractions during the RL process is difficult, posing challenges such as dealing with the curse of dimensionality. Various studies have explored the subject under the assumption that the problem domain is fully observable by the learning agent. Learning abstractions for partially observable RL is a comparatively less explored area. In this paper, we adapt an existing automatic abstraction method, namely the extended sequence tree, originally designed for fully observable problems. The modified method covers a certain family of model-based partially observable RL settings. We also introduce belief state discretization methods that can be used with this new abstraction mechanism. The effectiveness of the proposed abstraction method is demonstrated empirically on well-known benchmark problems.
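As a rough illustration of the belief state discretization idea mentioned in the abstract, the sketch below is not the authors' extended sequence tree method or their discretization scheme; it only shows, under simplifying assumptions (a small discrete POMDP with a known transition model T and observation model O), a standard Bayes-filter belief update followed by snapping the belief vector to a uniform grid of resolution 1/k, so that the result can serve as a discrete state key for tabular RL. All function and variable names (belief_update, discretize_belief, T, O, k) are illustrative, not from the paper.

```python
# Minimal sketch (not the paper's implementation): a Bayes-filter belief update
# for a small discrete POMDP, followed by grid-based belief discretization so
# that tabular RL machinery can operate on the resulting discrete keys.
import numpy as np


def belief_update(belief, action, observation, T, O):
    """Standard Bayes-filter update of a POMDP belief vector.

    T[a][s, s'] is the transition probability P(s' | s, a);
    O[a][s', o] is the observation probability P(o | s', a).
    """
    predicted = belief @ T[action]                    # predict next-state distribution
    updated = predicted * O[action][:, observation]   # weight by observation likelihood
    norm = updated.sum()
    # Fall back to a uniform belief if the observation had zero probability.
    return updated / norm if norm > 0 else np.full_like(belief, 1.0 / belief.size)


def discretize_belief(belief, k=10):
    """Map a continuous belief vector to a hashable grid cell.

    Each probability is snapped to the nearest multiple of 1/k; the resulting
    integer tuple acts as a discrete pseudo-state for tabular learning.
    """
    return tuple(np.round(np.asarray(belief) * k).astype(int))


if __name__ == "__main__":
    # Toy 2-state, 2-action, 2-observation POMDP with made-up dynamics.
    T = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
         1: np.array([[0.5, 0.5], [0.5, 0.5]])}
    O = {0: np.array([[0.8, 0.2], [0.3, 0.7]]),
         1: np.array([[0.6, 0.4], [0.4, 0.6]])}

    b = np.array([0.5, 0.5])                     # uniform initial belief
    b = belief_update(b, action=0, observation=1, T=T, O=O)
    print("updated belief:", b)
    print("discrete key  :", discretize_belief(b, k=10))
```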
Pages: 1414 - 1425
Number of pages: 12
Related Papers
50 records in total (10 shown below)
  • [1] Abstraction in Model Based Partially Observable Reinforcement Learning using Extended Sequence Trees
    Cilden, Erkin
    Polat, Faruk
    [J]. 2012 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY (WI-IAT 2012), VOL 2, 2012, : 348 - 355
  • [2] Learning Reward Machines for Partially Observable Reinforcement Learning
    Icarte, Rodrigo Toro
    Waldie, Ethan
    Klassen, Toryn Q.
    Valenzano, Richard
    Castro, Margarita P.
    McIlraith, Sheila A.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] Inverse Reinforcement Learning in Partially Observable Environments
    Choi, Jaedeug
    Kim, Kee-Eung
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 691 - 730
  • [4] Inverse Reinforcement Learning in Partially Observable Environments
    Choi, Jaedeug
    Kim, Kee-Eung
    [J]. 21ST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI-09), PROCEEDINGS, 2009, : 1028 - 1033
  • [5] Abstraction and Generalization in Reinforcement Learning: A Summary and Framework
    Ponsen, Marc
    Taylor, Matthew E.
    Tuyls, Karl
    [J]. ADAPTIVE AND LEARNING AGENTS, 2010, 5924 : 1 - +
  • [6] Blockwise Sequential Model Learning for Partially Observable Reinforcement Learning
    Park, Giseung
    Choi, Sungho
    Sung, Youngchul
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 7941 - 7948
  • [7] Learning reward machines: A study in partially observable reinforcement learning 
    Icarte, Rodrigo Toro
    Klassen, Toryn Q.
    Valenzano, Richard
    Castro, Margarita P.
    Waldie, Ethan
    McIlraith, Sheila A.
    [J]. ARTIFICIAL INTELLIGENCE, 2023, 323
  • [8] Regret Minimization for Partially Observable Deep Reinforcement Learning
    Jin, Peter
    Keutzer, Kurt
    Levine, Sergey
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [9] Partially Observable Reinforcement Learning for Sustainable Active Surveillance
    Chen, Hechang
    Yang, Bo
    Liu, Jiming
    [J]. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2018, PT II, 2018, 11062 : 425 - 437
  • [10] Learning of deterministic exploration and temporal abstraction in reinforcement learning
    Shibata, Katsunari
    [J]. 2006 SICE-ICASE International Joint Conference, Vols 1-13, 2006, : 2212 - 2217