Reward Learning from Narrated Demonstrations

Cited by: 3
Authors
Tung, Hsiao-Yu [1 ]
Harley, Adam W. [1 ]
Huang, Liang-Kang [1 ]
Fragkiadaki, Katerina [1 ]
Affiliations
[1] Carnegie Mellon Univ, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Keywords
DOI
10.1109/CVPR.2018.00732
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Humans effortlessly "program" one another by communicating goals and desires in natural language. In contrast, humans program robotic behaviours by indicating desired object locations and poses to be achieved, by providing RGB images of goal configurations, or by supplying a demonstration to be imitated. None of these methods generalize across environment variations, and they convey the goal in awkward technical terms. This work proposes joint learning of natural language grounding and instructable behavioural policies reinforced by perceptual detectors of natural language expressions, grounded to the sensory inputs of the robotic agent. Our supervision is narrated visual demonstrations (NVD), which are visual demonstrations paired with verbal narration (as opposed to being silent). We introduce a dataset of NVD where teachers perform activities while describing them in detail. We map the teachers' descriptions to perceptual reward detectors, and use them to train corresponding behavioural policies in simulation. We empirically show that our instructable agents (i) learn visual reward detectors using a small number of examples by exploiting hard negative mined configurations from demonstration dynamics, (ii) develop pick-and-place policies using learned visual reward detectors, (iii) benefit from object-factorized state representations that mimic the syntactic structure of natural language goal expressions, and (iv) can execute behaviours that involve novel objects in novel locations at test time, instructed by natural language.
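The pipeline described in the abstract (learn a visual reward detector from a few narrated demonstration frames, mining hard negatives from the demonstration dynamics, then use the detector as the reward for a pick-and-place policy) can be illustrated with a minimal sketch. The code below is not the authors' implementation: RewardDetector, train_detector, the feature dimension, and the labelling convention (late demonstration frames as positives, earlier frames as mined negatives) are illustrative assumptions, written against PyTorch.

```python
# Minimal sketch (assumptions, not the paper's code) of training a binary
# visual reward detector for one narrated goal expression,
# e.g. "the apple is inside the bowl".
import torch
import torch.nn as nn

class RewardDetector(nn.Module):
    """Scores whether an (object-factorized) frame feature satisfies the goal."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))  # single logit: goal reached vs. not

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_detector(detector, feats, labels, epochs=50, hard_k=32):
    """feats: (N, D) precomputed frame features from demonstrations.
    labels: (N,) with 1 for frames where the narrated goal holds (end of a
    demo) and 0 for earlier frames, which serve as hard negatives mined from
    the demonstration dynamics. Assumes both classes are present."""
    opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss(reduction="none")
    for _ in range(epochs):
        loss_per_ex = bce(detector(feats), labels.float())
        # Hard negative mining: keep only the highest-loss negatives each step.
        neg_losses = loss_per_ex[labels == 0]
        hard_neg = neg_losses.topk(min(hard_k, neg_losses.numel())).values
        loss = loss_per_ex[labels == 1].mean() + hard_neg.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return detector
```

A policy trained in simulation would then query torch.sigmoid(detector(frame_features)) as its per-step reward, in place of a hand-specified goal pose or goal image.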
Pages: 7004-7013
Page count: 10
Related Papers
50 records in total
  • [1] Reward Learning From Very Few Demonstrations
    Eteke, Cem
    Kebude, Dogancan
    Akgun, Baris
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2021, 37 (03) : 893 - 904
  • [2] Reward learning from human preferences and demonstrations in Atari
    Ibarz, Borja
    Leike, Jan
    Pohlen, Tobias
    Irving, Geoffrey
    Legg, Shane
    Amodei, Dario
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [3] Reward Learning from Suboptimal Demonstrations with Applications in Surgical Electrocautery
    Karimi, Zohre
    Ho, Shing-Hei
    Thach, Bao
    Kuntz, Alan
    Brown, Daniel S.
    [J]. 2024 INTERNATIONAL SYMPOSIUM ON MEDICAL ROBOTICS, ISMR 2024, 2024,
  • [4] Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
    Patil, Vihang
    Hofmarcher, Markus
    Dinu, Marius-Constantin
    Dorfer, Matthias
    Blies, Patrick
    Brandstetter, Johannes
    Arjona-Medina, Jose
    Hochreiter, Sepp
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [5] Identifying Reusable Primitives in Narrated Demonstrations
    Mohseni-Kabir, Anahita
    Chernova, Sonia
    Rich, Charles
    [J]. ELEVENTH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN ROBOT INTERACTION (HRI'16), 2016, : 479 - 480
  • [6] Deep Reward Shaping from Demonstrations
    Hussein, Ahmed
    Elyan, Eyad
    Gaber, Mohamed Medhat
    Jayne, Chrisina
    [J]. 2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 510 - 517
  • [7] Model-based Adversarial Imitation Learning from Demonstrations and Human Reward
    Huang, Jie
    Hao, Jiangshan
    Juan, Rongshun
    Gomez, Randy
    Nakamura, Keisuke
    Li, Guangliang
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 1683 - 1690
  • [8] Learning Reward Functions by Integrating Human Demonstrations and Preferences
    Palan, Malayandi
    Shevchuk, Gleb
    Landolfi, Nicholas C.
    Sadigh, Dorsa
    [J]. ROBOTICS: SCIENCE AND SYSTEMS XV, 2019,
  • [9] Learning to Run with Potential-Based Reward Shaping and Demonstrations from Video Data
    Malysheva, Aleksandra
    Kudenko, Daniel
    Shpilman, Aleksei
    [J]. 2018 15TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2018, : 286 - 291
  • [10] DROID: Learning from Offline Heterogeneous Demonstrations via Reward-Policy Distillation
    Jayanthi, Sravan
    Chen, Letian
    Balabanska, Nadya
    Duong, Van
    Scarlatescu, Erik
    Ameperosa, Ezra
    Zaidi, Zulfiqar
    Martin, Daniel
    Del Matto, Taylor
    Ono, Masahiro
    Gombolay, Matthew
    [J]. CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229