Data-Efficient Hierarchical Reinforcement Learning

Cited: 0
Authors
Nachum, Ofir [1 ]
Gu, Shixiang [1 ,2 ,3 ]
Lee, Honglak [1 ]
Levine, Sergey [1 ,4 ]
Affiliations
[1] Google Brain, Mountain View, CA 94043 USA
[2] Univ Cambridge, Cambridge, England
[3] Max Planck Inst Intelligent Syst, Stuttgart, Germany
[4] Univ Calif Berkeley, Berkeley, CA USA
Keywords
FRAMEWORK;
DOI
none
CLC Number
TP18 [Artificial intelligence theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higher- and lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. This allows us to take advantage of recent advances in off-policy model-free RL to learn both higher- and lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.
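The off-policy correction mentioned in the abstract works by relabeling the high-level goal stored in a replay transition: the stored goal is replaced with a candidate goal under which the (now-changed) lower-level policy is most likely to have produced the observed low-level actions. The sketch below is an illustrative simplification, not the paper's exact implementation: it assumes a deterministic lower-level policy `lower_policy(state, goal)`, uses a squared-error surrogate for action log-likelihood, and holds the goal fixed within the segment (the paper additionally applies a per-step goal transition function). The candidate set (original goal, the observed state change, plus Gaussian samples around it) follows the scheme described in the paper.

```python
import numpy as np

def relabel_goal(states, actions, lower_policy, orig_goal,
                 n_candidates=8, sigma=1.0):
    """Pick the goal that best explains the observed low-level actions.

    states:  sequence of length T+1 of state vectors (np arrays)
    actions: sequence of length T of low-level actions (np arrays)
    lower_policy(state, goal) -> predicted action (deterministic sketch)
    """
    # Candidate goals: the originally stored goal, the actual state
    # change over the segment, and Gaussian samples centred on it.
    delta = states[-1] - states[0]
    candidates = [orig_goal, delta]
    candidates += [delta + sigma * np.random.randn(*delta.shape)
                   for _ in range(n_candidates)]

    # Score each candidate by how well the current lower-level policy,
    # conditioned on it, reproduces the logged actions (negative
    # squared error stands in for action log-probability).
    def score(g):
        return -sum(np.sum((a - lower_policy(s, g)) ** 2)
                    for s, a in zip(states[:-1], actions))

    return max(candidates, key=score)
```

The relabeled goal is then stored in the high-level transition in place of the original, so the high-level critic is trained on actions (goals) consistent with the current lower-level behavior.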
Pages: 11