Data-Efficient Hierarchical Reinforcement Learning

Cited by: 0
Authors
Nachum, Ofir [1 ]
Gu, Shixiang [1 ,2 ,3 ]
Lee, Honglak [1 ]
Levine, Sergey [1 ,4 ]
Affiliations
[1] Google Brain, Mountain View, CA 94043 USA
[2] Univ Cambridge, Cambridge, England
[3] Max Planck Inst Intelligent Syst, Stuttgart, Germany
[4] Univ Calif Berkeley, Berkeley, CA USA
Keywords
FRAMEWORK
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higher- and lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. This allows us to take advantage of recent advances in off-policy model-free RL to learn both higher- and lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.
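The goal-conditioned scheme and off-policy correction described in the abstract can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: it assumes goals live in state space, uses HIRO's fixed goal-transition function h(s, g, s') = s + g − s' and the negative-distance intrinsic reward, and approximates the correction's action log-likelihood with a negative squared error under a deterministic lower-level policy (the hypothetical `policy(s, g)` callable is an assumption supplied by the caller).

```python
import numpy as np

def goal_transition(s, g, s_next):
    # Fixed goal-transition function h(s, g, s') = s + g - s':
    # keeps the absolute target state (s + g) invariant as time advances.
    return s + g - s_next

def intrinsic_reward(s, g, s_next):
    # Lower-level reward: negative L2 distance between the reached state
    # and the goal-directed target state.
    return -float(np.linalg.norm(s + g - s_next))

def relabel_goal(states, actions, candidate_goals, policy):
    # Off-policy correction (sketch): re-label a stored high-level
    # transition with the candidate goal under which the *current*
    # lower-level policy would most likely have produced the observed
    # actions; log-likelihood is approximated (up to a constant) by the
    # negative squared error between observed and predicted actions.
    best_g, best_lp = None, -np.inf
    for g0 in candidate_goals:
        lp, g = 0.0, np.asarray(g0, dtype=float)
        for s, a, s_next in zip(states, actions, states[1:]):
            lp -= float(np.sum((np.asarray(a) - policy(s, g)) ** 2))
            g = goal_transition(s, g, s_next)  # advance goal within the segment
        if lp > best_lp:
            best_g, best_lp = np.asarray(g0, dtype=float), lp
    return best_g
```

In the paper the candidate set includes the originally stored goal, the displacement s_{t+c} − s_t, and random perturbations of it; here any list of candidates can be passed in.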
Pages: 11