Learning from Demonstration without Demonstrations

Cited by: 1
Authors
Blau, Tom [1 ,2 ]
Morere, Philippe [1 ]
Francis, Gilad [1 ]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Sydney, NSW, Australia
[2] CSIRO, Canberra, ACT, Australia
DOI
10.1109/ICRA48506.2021.9561119
CLC classification: TP [Automation Technology; Computer Technology]
Discipline code: 0812
Abstract
State-of-the-art reinforcement learning (RL) algorithms suffer from high sample complexity, particularly in the sparse reward case. A popular strategy for mitigating this problem is to learn control policies by imitating a set of expert demonstrations. The drawback of such approaches is that an expert needs to produce demonstrations, which may be costly in practice. To address this shortcoming, we propose Probabilistic Planning for Demonstration Discovery (P2D2), a technique for automatically discovering demonstrations without access to an expert. We formulate discovering demonstrations as a search problem and leverage widely-used planning algorithms such as Rapidly-exploring Random Tree to find demonstration trajectories. These demonstrations are used to initialize a policy, then refined by a generic RL algorithm. We provide theoretical guarantees of P2D2 finding successful trajectories, as well as bounds for its sampling complexity. We experimentally demonstrate the method outperforms classic and intrinsic exploration RL techniques in a range of classic control and robotics tasks, requiring only a fraction of exploration samples and achieving better asymptotic performance.
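The core idea in the abstract — treating demonstration discovery as a search problem solved with a planner such as RRT — can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic goal-biased RRT in a 2D state space (function name, state bounds, and parameters are all illustrative assumptions) that returns a start-to-goal trajectory, the kind of trajectory that could then seed a policy before RL fine-tuning:

```python
import math
import random

def rrt_demonstration(start, goal, goal_radius=0.5, step=0.3,
                      bounds=(0.0, 10.0), max_iters=5000, seed=0):
    """Grow a Rapidly-exploring Random Tree from `start` toward `goal`
    and return the node trajectory as a candidate demonstration."""
    rng = random.Random(seed)
    nodes = [start]        # tree vertices (2D states)
    parent = {0: None}     # index of each vertex's parent in the tree

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for _ in range(max_iters):
        # Sample a random state; occasionally bias toward the goal.
        if rng.random() < 0.1:
            sample = goal
        else:
            sample = (rng.uniform(*bounds), rng.uniform(*bounds))
        # Steer a fixed step from the nearest tree vertex toward the sample.
        i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], sample))
        near = nodes[i_near]
        d = dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        # On reaching the goal region, backtrack to recover the trajectory.
        if dist(new, goal) <= goal_radius:
            traj, i = [], len(nodes) - 1
            while i is not None:
                traj.append(nodes[i])
                i = parent[i]
            return list(reversed(traj))  # start -> goal demonstration
    return None  # search budget exhausted, no demonstration found
```

In the paper's pipeline, trajectories found this way would play the role of expert demonstrations: used to initialize a policy (e.g. by behavior cloning) that a generic RL algorithm then refines.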
Pages: 4116-4122 (7 pages)
Related Papers (50 total)
  • [1] Learning to Manipulate Deformable Objects without Demonstrations
    Wu, Yilin
    Yan, Wilson
    Kurutach, Thanard
    Pinto, Lerrel
    Abbeel, Pieter
    [J]. ROBOTICS: SCIENCE AND SYSTEMS XVI, 2020,
  • [2] Learning Latent Actions without Human Demonstrations
    Mehta, Shaunak A.
    Parekh, Sagar
    Losey, Dylan P.
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 7437 - 7443
  • [3] Learning From Sparse Demonstrations
    Jin, Wanxin
    Murphey, Todd D.
    Kulic, Dana
    Ezer, Neta
    Mou, Shaoshuai
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (01) : 645 - 664
  • [4] Learning to Generalize from Demonstrations
    Browne, Katie
    Nicolescu, Monica
    [J]. CYBERNETICS AND INFORMATION TECHNOLOGIES, 2012, 12 (03) : 27 - 38
  • [5] Learning from Corrective Demonstrations
    Gutierrez, Reymundo A.
    Short, Elaine Schaertl
    Niekum, Scott
    Thomaz, Andrea L.
    [J]. HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2019, : 712 - 714
  • [6] Balsa: Learning a Query Optimizer Without Expert Demonstrations
    Yang, Zongheng
    Chiang, Wei-Lin
    Luan, Sifei
    Mittal, Gautam
    Luo, Michael
    Stoica, Ion
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22), 2022, : 931 - 944
  • [7] Learning Multimodal Contact-Rich Skills from Demonstrations Without Reward Engineering
    Balakuntala, Mythra V.
    Kaur, Upinder
    Ma, Xin
    Wachs, Juan
    Voyles, Richard M.
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 4679 - 4685
  • [8] Learning strategies in sustainable energy demonstration projects: What organizations learn from sustainable energy demonstrations
    Bossink, Bart
    [J]. RENEWABLE & SUSTAINABLE ENERGY REVIEWS, 2020, 131
  • [9] Learning Options for an MDP from Demonstrations
    Tamassia, Marco
    Zambetta, Fabio
    Raffe, William
    Li, Xiaodong
    [J]. ARTIFICIAL LIFE AND COMPUTATIONAL INTELLIGENCE, 2015, 8955 : 226 - 242
  • [10] FabricFolding: learning efficient fabric folding without expert demonstrations
    He, Can
    Meng, Lingxiao
    Sun, Zhirui
    Wang, Jiankun
    Meng, Max Q.-H.
    [J]. ROBOTICA, 2024, 42 (04) : 1281 - 1296