Learning Generalizable and Composable Abstractions for Transfer in Reinforcement Learning

Cited: 0
Authors: Nayyar, Rashmeet Kaur [1]
Affiliations: [1] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
DOI: not available
CLC Classification Number: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Reinforcement Learning (RL) in complex environments presents many challenges: agents must learn concise representations of both environments and behaviors in order to reason efficiently and to generalize their experience to new, unseen situations. However, RL approaches can be sample-inefficient and difficult to scale, especially in long-horizon, sparse-reward settings. To address these issues, the goal of my doctoral research is to develop methods that automatically construct semantically meaningful state and temporal abstractions for efficient transfer and generalization. In my work, I develop hierarchical approaches for learning transferable, generalizable knowledge in the form of symbolically represented options, as well as for integrating search techniques with RL to solve new problems by efficiently composing the learned options. Empirical results show that the resulting approaches effectively learn and transfer knowledge, achieving superior sample efficiency compared to state-of-the-art methods while also enhancing interpretability.
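
The abstract's core idea, composing symbolically represented options with search, can be made concrete with a short sketch. The Python below is a minimal illustration under assumed interfaces, not the dissertation's actual method: every name in it (SymbolicOption, plan_with_options, the door/key predicates) is hypothetical. Each option carries a symbolic initiation precondition and symbolic effects, and a breadth-first search composes options at the abstract level, standing in for the learned low-level RL policies that would execute each option.

# Minimal sketch (illustrative only, not the paper's implementation):
# symbolic options composed by forward search over abstract states.
from dataclasses import dataclass
from collections import deque

@dataclass(frozen=True)
class SymbolicOption:
    name: str
    precondition: frozenset  # symbolic facts required to initiate the option
    add_effects: frozenset   # facts made true when the option terminates
    del_effects: frozenset   # facts made false when the option terminates

def apply(state: frozenset, opt: SymbolicOption) -> frozenset:
    # Abstract transition: assumes the option's low-level policy succeeds.
    return (state - opt.del_effects) | opt.add_effects

def plan_with_options(start, goal, options):
    # Breadth-first search over abstract states; returns an option sequence.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:  # goal facts are a subset of the current state
            return plan
        for opt in options:
            if opt.precondition <= state:
                nxt = apply(state, opt)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [opt.name]))
    return None  # no composition of the learned options reaches the goal

# Hypothetical domain: fetch a key, unlock a door, reach the goal room.
get_key = SymbolicOption("get-key", frozenset({"at-start"}),
                         frozenset({"has-key"}), frozenset())
open_door = SymbolicOption("open-door", frozenset({"has-key"}),
                           frozenset({"door-open"}), frozenset())
go_goal = SymbolicOption("go-goal", frozenset({"door-open"}),
                         frozenset({"at-goal"}), frozenset({"at-start"}))

print(plan_with_options(frozenset({"at-start"}), frozenset({"at-goal"}),
                        [get_key, open_door, go_goal]))

Running the sketch prints the plan ['get-key', 'open-door', 'go-goal']; in the actual approach, each step would invoke a learned option policy in the environment rather than an abstract state update.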
Pages: 23403-23404
Page count: 2