Learning task goals interactively with visual demonstrations

Cited by: 7
Authors
Kirk, James [1 ]
Mininger, Aaron [1 ]
Laird, John [1 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
Funding
US National Science Foundation
Keywords
Interactive task learning; Goal learning; Human-robot interaction;
DOI
10.1016/j.bica.2016.08.001
CLC classification number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Humans are extremely good at quickly teaching and learning new tasks through situated instruction, such as learning a novel game or household chore. From studying such instructional interactions, we have observed that humans excel at communicating information through multiple modalities, including visual, linguistic, and physical ones. Rosie is a tabletop robot implemented in the Soar architecture that learns new tasks from online interactive language instruction. In the past, the features of each task's goal were explicitly described by a human instructor through language. In this work, we develop and study additional techniques for learning representations of goals. For game tasks, the agent can be given visual demonstrations of goal states, refined by human instructions. For procedural tasks, the agent uses information derived from task execution to determine which state features must be included in its goal representations. Using both approaches, Rosie learns correct goal representations from a single goal example or task execution across multiple games, puzzles, and procedural tasks. As expected, in most cases, the number of words required to teach the task is reduced when visual goal demonstrations are used. We also identify shortcomings of our approach and outline future research. (C) 2016 Elsevier B.V. All rights reserved.
Pages: 1-8
Page count: 8
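
The abstract's central idea, deriving a goal representation from a single demonstrated goal state and then refining it with instructor feedback about which features actually matter, can be sketched in a few lines. The following is a minimal, illustrative Python sketch assuming a simple fact-set state representation; the class and method names (GoalLearner, demonstrate_goal, refine) are hypothetical and this is not the Soar/Rosie implementation described in the paper.

```python
# Illustrative sketch of goal learning from a single visual demonstration.
# Hypothetical names; the actual Rosie agent is implemented in Soar and
# operates over perceptual/symbolic structures, not Python sets.

from typing import Set, Tuple

# A state is a set of (object, property, value) facts,
# e.g. ("block1", "color", "red").
Fact = Tuple[str, str, str]
State = Set[Fact]


class GoalLearner:
    def __init__(self) -> None:
        self.goal: Set[Fact] = set()

    def demonstrate_goal(self, goal_state: State) -> None:
        """Capture every fact in the demonstrated goal state as a candidate goal feature."""
        self.goal = set(goal_state)

    def refine(self, relevant_properties: Set[str]) -> None:
        """Keep only facts about properties the instructor identifies as relevant
        (analogous to refining a visual demonstration with language)."""
        self.goal = {fact for fact in self.goal if fact[1] in relevant_properties}

    def is_goal(self, state: State) -> bool:
        """The goal is reached when every retained goal fact holds in the state."""
        return self.goal.issubset(state)


if __name__ == "__main__":
    learner = GoalLearner()
    demo: State = {("block1", "on", "block2"), ("block1", "color", "red")}
    learner.demonstrate_goal(demo)
    learner.refine({"on"})  # instructor: only the spatial relation matters, not color
    # A differently colored block in the same configuration still satisfies the goal.
    print(learner.is_goal({("block1", "on", "block2"), ("block1", "color", "blue")}))  # True
```

The refinement step stands in for the paper's language-based feedback: a raw demonstration over-specifies the goal (it includes incidental features such as color), and pruning to instructor-identified properties yields a generalizable goal representation from a single example.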