Temporal Abstraction in Reinforcement Learning with the Successor Representation

Times Cited: 0
Authors
Machado, Marlos C. [1 ]
Barreto, Andre [2 ]
Precup, Doina [3 ]
Bowling, Michael [1 ]
Affiliations
[1] Univ Alberta, Alberta Machine Intelligence Inst Amii, Dept Comp Sci, DeepMind, Edmonton, AB, Canada
[2] DeepMind, London, England
[3] McGill Univ, Quebec AI Inst Mila, Sch Comp Sci, DeepMind, Montreal, PQ, Canada
Keywords
Reinforcement learning; Options; Successor representation; Eigenoptions; Covering options; Option keyboard; Temporally-extended exploration; SLOW FEATURE ANALYSIS; EXPLORATION; FRAMEWORK; LEVEL; MDPS
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Reasoning at multiple levels of temporal abstraction is one of the key attributes of intelligence. In reinforcement learning, this is often modeled through temporally extended courses of action called options. Options allow agents to make predictions and to operate at different levels of abstraction within an environment. Nevertheless, approaches based on the options framework often start from the assumption that a reasonable set of options is known beforehand. When this is not the case, there is no definitive answer as to which options one should consider. In this paper, we argue that the successor representation, which encodes states based on the pattern of state visitation that follows them, can be seen as a natural substrate for the discovery and use of temporal abstractions. To support our claim, we take a big-picture view of recent results, showing how the successor representation can be used to discover options that facilitate either temporally-extended exploration or planning. We cast these results as instantiations of a general framework for option discovery in which the agent's representation is used to identify useful options, which are then used to further improve its representation. This results in a virtuous, never-ending cycle in which both the representation and the options are constantly refined based on each other. Beyond option discovery itself, we also discuss how the successor representation allows us to augment a set of options into a combinatorially large counterpart without additional learning. This is achieved through the combination of previously learned options. Our empirical evaluation focuses on options discovered for temporally-extended exploration and on the use of the successor representation to combine them. Our results shed light on important design decisions involved in the definition of options and demonstrate the synergy of different methods based on the successor representation, such as eigenoptions and the option keyboard.
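As a concrete illustration of the abstract's central object, the sketch below computes the successor representation (SR) of a small tabular MDP in closed form and extracts its leading eigenvector, the kind of quantity from which eigenoptions are derived. This is a minimal sketch under stated assumptions, not the paper's implementation: the 4-state ring MDP, the uniform-random policy, and names such as n_states and gamma are illustrative choices.

    import numpy as np

    # Minimal sketch (not the paper's code): the 4-state ring MDP, the
    # uniform-random policy, and the names `n_states`/`gamma` are
    # illustrative assumptions.
    n_states, gamma = 4, 0.9

    # Policy-induced transition matrix T: from state i, step to either
    # neighbour on the ring with probability 0.5.
    T = np.zeros((n_states, n_states))
    for s in range(n_states):
        T[s, (s + 1) % n_states] = 0.5
        T[s, (s - 1) % n_states] = 0.5

    # Successor representation in closed form: Psi = (I - gamma * T)^(-1).
    # Psi[s, s'] is the expected discounted number of visits to s' when
    # starting from s and following the policy.
    Psi = np.linalg.inv(np.eye(n_states) - gamma * T)

    # Eigenoptions are built from eigenvectors of the SR: each eigenvector
    # induces an intrinsic reward over states, and the policy maximizing
    # that reward defines one option.
    eigvals, eigvecs = np.linalg.eig(Psi)
    leading = eigvecs[:, np.argsort(-eigvals.real)[0]].real
    print(np.round(Psi, 2))
    print(np.round(leading, 2))

In larger or continuous settings the SR is learned by temporal-difference updates rather than matrix inversion, but the eigenvector-based construction of intrinsic rewards carries over.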
Pages: 69
Related Papers
50 records in total
  • [1] The successor representation in human reinforcement learning
    Momennejad, I.
    Russek, E. M.
    Cheong, J. H.
    Botvinick, M. M.
    Daw, N. D.
    Gershman, S. J.
    [J]. NATURE HUMAN BEHAVIOUR, 2017, 1 (09): 680-692
  • [3] Multi-Agent Reinforcement Learning via Adaptive Kalman Temporal Difference and Successor Representation
    Salimibeni, Mohammad
    Mohammadi, Arash
    Malekzadeh, Parvin
    Plataniotis, Konstantinos N.
    [J]. SENSORS, 2022, 22 (04)
  • [4] Learning of deterministic exploration and temporal abstraction in reinforcement learning
    Shibata, Katsunari
    [J]. 2006 SICE-ICASE International Joint Conference, Vols 1-13, 2006: 2212-2217
  • [6] The Successor Representation and Temporal Context
    Gershman, Samuel J.
    Moore, Christopher D.
    Todd, Michael T.
    Norman, Kenneth A.
    Sederberg, Per B.
    [J]. NEURAL COMPUTATION, 2012, 24 (06): 1553-1568
  • [7] A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation
    Fujimoto, Scott
    Meger, David
    Precup, Doina
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [8] Temporal Alignment for History Representation in Reinforcement Learning
    Ermolov, Aleksandr
    Sangineto, Enver
    Sebe, Nicu
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022: 2172-2178
  • [9] PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning
    Filos, Angelos
    Lyle, Clare
    Gal, Yarin
    Levine, Sergey
    Jaques, Natasha
    Farquhar, Gregory
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [10] Successor Features for Transfer in Reinforcement Learning
    Barreto, Andre
    Dabney, Will
    Munos, Remi
    Hunt, Jonathan J.
    Schaul, Tom
    van Hasselt, Hado
    Silver, David
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30