Combined Reinforcement Learning via Abstract Representations

Citations: 0
Authors
François-Lavet, Vincent [1,2]
Bengio, Yoshua [2,3]
Precup, Doina [1,2,4]
Pineau, Joelle [1,2,5]
Affiliations
[1] McGill Univ, Montreal, PQ, Canada
[2] Mila, Montreal, PQ, Canada
[3] Univ Montreal, Montreal, PQ, Canada
[4] DeepMind, Montreal, PQ, Canada
[5] Facebook AI Res, Montreal, PQ, Canada
Keywords: none listed
DOI: not available
CLC classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In the quest for efficient and robust reinforcement learning methods, both model-free and model-based approaches offer advantages. In this paper we propose a new way of explicitly bridging both approaches via a shared low-dimensional learned encoding of the environment, meant to capture summarizing abstractions. We show that the modularity brought by this approach leads to good generalization while being computationally efficient, with planning happening in a smaller latent state space. In addition, this approach recovers a sufficient low-dimensional representation of the environment, which opens up new strategies for interpretable AI, exploration and transfer learning.
Pages: 3582-3589
Page count: 8
Related papers
50 entries in total (first 10 shown)
  • [1] Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning
    Chaudhury, Subhajit
    Swaminathan, Sarathkrishna
    Kimura, Daiki
    Sen, Prithviraj
    Murugesan, Keerthiram
    Uceda-Sosa, Rosario
    Tatsubori, Michiaki
    Fokoue, Achille
    Kapanipathi, Pavan
    Munawar, Asim
    Gray, Alexander
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 6764 - 6776
  • [2] Reinforcement Learning Explainability via Model Transforms (Student Abstract)
    Finkelstein, Mira
    Liu, Lucy
    Kolumbus, Yoav
    Parkes, David C.
    Rosenschein, Jeffrey S.
    Keren, Sarah
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 12943 - 12944
  • [3] Learning Abstract Task Representations
    Meskhi, Mikhail M.
    Rivolli, Adriano
    Mantovani, Rafael G.
    Vilalta, Ricardo
    [J]. AAAI WORKSHOP ON META-LEARNING AND METADL CHALLENGE, VOL 140, 2021, 140 : 127 - 137
  • [4] Learning Robust Rule Representations for Abstract Reasoning via Internal Inferences
    Zhang, Wenbo
    Tang, Likai
    Mo, Site
    Song, Sen
    Liu, Xianggen
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [5] How abstract is more abstract? Learning abstract underlying representations
    O'Hara, Charlie
    [J]. PHONOLOGY, 2017, 34 (02) : 325 - 345
  • [6] Learning Action Representations for Reinforcement Learning
    Chandak, Yash
    Theocharous, Georgios
    Kostas, James E.
    Jordan, Scott M.
    Thomas, Philip S.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [7] Learning task-relevant representations via rewards and real actions for reinforcement learning
    Yuan, Linghui
    Lu, Xiaowei
    Liu, Yunlong
    [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 294
  • [8] Reinforcement Learning with Prototypical Representations
    Yarats, Denis
    Fergus, Rob
    Lazaric, Alessandro
    Pinto, Lerrel
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [9] Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning
    Yuan, Haoqi
    Lu, Zongqing
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [10] Graph Representations for Reinforcement Learning
    Schab, Esteban
    Casanova, Carlos
    Piccoli, Fabiana
    [J]. JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY, 2024, 24 (01) : 29 - 38