Reward Learning from Narrated Demonstrations

Cited by: 4
Authors
Tung, Hsiao-Yu [1 ]
Harley, Adam W. [1 ]
Huang, Liang-Kang [1 ]
Fragkiadaki, Katerina [1 ]
Affiliations
[1] Carnegie Mellon Univ, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
DOI
10.1109/CVPR.2018.00732
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Humans effortlessly "program" one another by communicating goals and desires in natural language. In contrast, humans program robotic behaviours by indicating desired object locations and poses to be achieved, by providing RGB images of goal configurations, or by supplying a demonstration to be imitated. None of these methods generalize across environment variations, and they convey the goal in awkward technical terms. This work proposes joint learning of natural language grounding and instructable behavioural policies, reinforced by perceptual detectors of natural language expressions grounded to the sensory inputs of the robotic agent. Our supervision is narrated visual demonstrations (NVD): visual demonstrations paired with verbal narration (as opposed to being silent). We introduce a dataset of NVD in which teachers perform activities while describing them in detail. We map the teachers' descriptions to perceptual reward detectors, and use them to train corresponding behavioural policies in simulation. We empirically show that our instructable agents (i) learn visual reward detectors from a small number of examples by exploiting hard-negative configurations mined from demonstration dynamics, (ii) develop pick-and-place policies using the learned visual reward detectors, (iii) benefit from object-factorized state representations that mimic the syntactic structure of natural language goal expressions, and (iv) can execute behaviours that involve novel objects in novel locations at test time, instructed by natural language.
Pages: 7004-7013
Page count: 10
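
As a rough illustration of the reward-learning idea in the abstract, the toy Python sketch below fits a linear "reward detector" that separates goal frames from hard negatives mined near the goal transition, then exposes the detector's confidence as a dense reward for a downstream policy learner. This is a minimal sketch under stated assumptions, not the paper's method: the feature dimension, the synthetic data, and all names (train_reward_detector, reward, etc.) are hypothetical, and the paper itself learns visual detectors over object-factorized representations rather than this linear stand-in.

```python
import numpy as np

# Toy sketch (not the paper's implementation): a logistic-regression
# "reward detector" trained to score how goal-like a state looks, fit on
# goal frames versus hard negatives mined from just before the goal is
# achieved. DIM, the synthetic data, and all names are assumptions.

rng = np.random.default_rng(0)
DIM = 16  # stand-in for an object-factorized visual feature vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_reward_detector(pos, neg, epochs=300, lr=0.5):
    """Fit w, b so sigmoid(x @ w + b) -> 1 on goal frames, 0 on negatives."""
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w, b = np.zeros(DIM), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)  # gradient of the logistic loss
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic "demonstration": goal frames cluster around a goal feature;
# hard negatives are pre-goal frames displaced slightly away from it,
# which is what makes them hard rather than arbitrary negatives.
goal_center = rng.normal(size=DIM)
offset = rng.normal(size=DIM)  # pre-goal displacement
goal_frames = goal_center + 0.1 * rng.normal(size=(50, DIM))
hard_negatives = goal_center + offset + 0.1 * rng.normal(size=(50, DIM))

w, b = train_reward_detector(goal_frames, hard_negatives)

def reward(state_feature):
    """Dense reward for a policy learner: confidence that the goal holds."""
    return float(sigmoid(state_feature @ w + b))

print(reward(goal_center))           # near 1: looks like the narrated goal
print(reward(goal_center + offset))  # near 0: a mined hard negative
```

In this reading, hard negatives matter because frames immediately preceding goal achievement are visually similar to goal frames; forcing the detector to separate them yields a reward signal sharp enough to reinforce a policy from few examples.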