Learning Task Constraints in Visual-Action Planning from Demonstrations

Cited by: 0
Authors
Esposito, Francesco [1]
Pek, Christian [1]
Welle, Michael C. [1]
Kragic, Danica [1]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, Div Robot Percept & Learning, S-11428 Stockholm, Sweden
Funding
European Research Council; Swedish Research Council
DOI
10.1109/RO-MAN50785.2021.9515548
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Visual planning approaches have shown great success for decision-making tasks with no explicit model of the state space. Learning a suitable representation and constructing a latent space in which planning can be performed allows non-experts to set up and plan motions by simply providing images. However, learned latent spaces are usually not semantically interpretable, which makes it difficult to integrate task constraints. We propose a novel framework to determine whether plans satisfy constraints, given demonstrations of policies that satisfy or violate the constraints. The demonstrations are realizations of Linear Temporal Logic (LTL) formulas and are used to train Long Short-Term Memory (LSTM) networks directly in the latent space representation. We demonstrate that our architecture enables designers to easily specify, compose, and integrate task constraints, and that it achieves high classification accuracy. Furthermore, the visual planning framework supports human interaction, coping with environmental changes that a human worker may introduce. We show the flexibility of the method on a box pushing task in a simulated warehouse setting with different task constraints.
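The constraint checker summarized in the abstract is, at its core, a sequence classifier over latent-space trajectories. The sketch below is not the authors' implementation; the dimensions, module names, and the synthetic training loop are illustrative assumptions. It shows how an LSTM can be trained to label a latent plan as satisfying or violating a constraint; in the paper's setting, the labels would come from demonstrations that realize or violate an LTL formula rather than from random placeholders.

import torch
import torch.nn as nn

class ConstraintClassifier(nn.Module):
    """Labels a latent trajectory z_1..z_T as constraint-satisfying or not.
    Hypothetical sketch: latent_dim and hidden_dim are assumed values."""
    def __init__(self, latent_dim=64, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one logit per trajectory

    def forward(self, z_seq):
        # z_seq: (batch, T, latent_dim) -- a candidate plan in latent space
        _, (h_n, _) = self.lstm(z_seq)
        return self.head(h_n[-1]).squeeze(-1)  # (batch,) logits

# Toy training loop on synthetic data; real training would use latent
# encodings of demonstrated policies, labeled 1 if the demonstration
# satisfies the LTL formula and 0 if it violates it.
model = ConstraintClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(100):
    z = torch.randn(32, 20, 64)              # 32 plans of 20 latent states
    y = torch.randint(0, 2, (32,)).float()   # placeholder labels
    loss = loss_fn(model(z), y)
    opt.zero_grad(); loss.backward(); opt.step()

One plausible way to realize the composition the abstract mentions is to train one such classifier per constraint and conjoin their predictions on a candidate plan, so that a plan is accepted only if every constraint classifier accepts it.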
Pages: 131-138
Page count: 8
Related Papers
50 records in total
  • [1] Learning Geometric Constraints of Actions from Demonstrations for Manipulation Task Planning
    Yuan, Jinqiang
    Chew, Chee-Meng
    Subramaniam, Velusamy
    2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2018: 636-641
  • [2] Learning Shared Safety Constraints from Multi-task Demonstrations
    Kim, Konwoo
    Swamy, Gokul
    Liu, Zuxin
    Zhao, Ding
    Choudhury, Sanjiban
    Wu, Zhiwei Steven
    Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
  • [3] Learning task goals interactively with visual demonstrations
    Kirk, James
    Mininger, Aaron
    Laird, John
    Biologically Inspired Cognitive Architectures, 2016, 18: 1-8
  • [4] Learning Task Priorities From Demonstrations
    Silverio, Joao
    Calinon, Sylvain
    Rozo, Leonel
    Caldwell, Darwin G.
    IEEE Transactions on Robotics, 2019, 35(1): 78-94
  • [5] Learning Task Specifications from Demonstrations
    Vazquez-Chanlatte, Marcell
    Jha, Susmit
    Tiwari, Ashish
    Ho, Mark K.
    Seshia, Sanjit A.
    Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018
  • [6] Combined Task and Action Learning from Human Demonstrations for Mobile Manipulation Applications
    Welschehold, Tim
    Abdo, Nichola
    Dornhege, Christian
    Burgard, Wolfram
    2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 4317-4324
  • [7] Specificity of task constraints and effects of visual demonstrations and verbal instructions on skill acquisition
    Al-Abood, SA
    Davis, K
    Bennett, SJ
    Journal of Sport & Exercise Psychology, 2000, 22: S13-S14
  • [8] Visual-Action Code Processing by Deaf and Hearing Children
    Todman, J.
    Seedhouse, E.
    Language and Cognitive Processes, 1994, 9(2): 129-141
  • [9] Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image
    Driess, Danny
    Ha, Jung-Su
    Toussaint, Marc
    Robotics: Science and Systems XVI, 2020
  • [10] Learning constraints from demonstrations with grid and parametric representations
    Chou, Glen
    Berenson, Dmitry
    Ozay, Necmiye
    International Journal of Robotics Research, 2021, 40(10-11): 1255-1283