Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis

Cited by: 0
Authors
Zadem, Mehdi [1 ]
Mover, Sergio [1 ]
Nguyen, Sao Mai [2 ,3 ]
Affiliations
[1] Ecole Polytech, Inst Polytech Paris, CNRS, LIX, Paris, France
[2] ENSTA Paris, Inst Polytech Paris, U2IS, Flowers Team, Paris, France
[3] IMT Atlantique, Lab STICC, Brest, France
DOI
10.1109/ICDL55364.2023.10364473
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology];
Discipline Classification Codes
03 ; 0303 ; 030303 ; 04 ; 0402 ;
Abstract
Open-ended learning benefits immensely from the use of symbolic methods for goal representation, as they offer ways to structure knowledge for efficient and transferable learning. However, existing Hierarchical Reinforcement Learning (HRL) approaches that rely on symbolic reasoning are often limited because they require a manual goal representation. The challenge in autonomously discovering a symbolic goal representation is that it must preserve critical information, such as the environment dynamics. In this paper, we propose a developmental mechanism for goal discovery via an emergent representation that abstracts (i.e., groups together) sets of environment states that have similar roles in the task. We introduce a Feudal HRL algorithm that concurrently learns both the goal representation and a hierarchical policy. The algorithm uses symbolic reachability analysis for neural networks to approximate the transition relation among sets of states and to refine the goal representation. We evaluate our approach on complex navigation tasks, showing that the learned representation is interpretable and transferable, and results in data-efficient learning.
Pages: 423 - 428
Page count: 6
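The abstraction mechanism the abstract describes (refining an emergent goal partition when set-based reachability shows an abstract goal maps into more than one successor region) can be illustrated in miniature. The sketch below is a hypothetical toy, not the authors' implementation: it uses interval arithmetic over 1-D states and a hand-written monotone transition in place of the paper's neural-network reachability analysis, and all names (`reach_interval`, `refine`) are invented for illustration.

```python
# Toy sketch (assumed, not from the paper): refine a 1-D partition of the
# state space so that each abstract goal has an unambiguous successor
# under a simple dynamics, in the spirit of reachability-guided refinement.

def reach_interval(lo, hi, step):
    """Over-approximate the one-step reachable set of states in [lo, hi]
    under the toy monotone dynamics x' = x + step (a stand-in for the
    learned transition relation analyzed by neural-network reachability)."""
    return (lo + step, hi + step)

def refine(partition, step):
    """Split any abstract goal (interval) whose reachable image straddles
    a partition boundary, so each piece maps into a single region."""
    boundaries = sorted({b for itv in partition for b in itv})
    refined = []
    for lo, hi in partition:
        rlo, rhi = reach_interval(lo, hi, step)
        # A boundary strictly inside the image means two distinct
        # successor regions; cut the source interval at its preimage.
        cuts = [b - step for b in boundaries if rlo < b < rhi]
        points = [lo] + cuts + [hi]
        refined.extend(zip(points, points[1:]))
    return refined

partition = [(0.0, 4.0), (4.0, 8.0)]
refined = refine(partition, 1.0)
# (0, 4) reaches (1, 5), straddling the boundary at 4, so it is cut at 3:
# refined == [(0.0, 3.0), (3.0, 4.0), (4.0, 7.0), (7.0, 8.0)]
```

In the paper's setting the interval image would instead come from a sound reachability tool applied to the learned networks, but the refinement loop (propagate sets, detect ambiguous successors, split) follows the same shape.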