Combined Task and Action Learning from Human Demonstrations for Mobile Manipulation Applications

Cited: 0
Authors
Welschehold, Tim [1 ]
Abdo, Nichola [1 ]
Dornhege, Christian [1 ]
Burgard, Wolfram [1 ]
Affiliations
[1] Univ Freiburg, Inst Comp Sci, Freiburg, Germany
Keywords
DOI
10.1109/iros40897.2019.8968091
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from demonstrations is a promising paradigm for transferring knowledge to robots. However, learning mobile manipulation tasks directly from a human teacher is a complex problem, as it requires learning models of both the overall task goal and the underlying actions. Additionally, learning from a small number of demonstrations often introduces ambiguity with respect to the teacher's intention, making it challenging to commit to a single model when generalizing the task to new settings. In this paper, we present an approach to learning flexible mobile manipulation action models and task goal representations from teacher demonstrations. Our action models enable the robot to consider different likely outcomes of each action and to generate feasible trajectories for achieving them. Accordingly, we leverage a probabilistic framework based on Monte Carlo tree search to compute sequences of feasible actions that imitate the teacher's intention in new settings, without requiring the teacher to specify an explicit goal state. We demonstrate the effectiveness of our approach on complex tasks carried out in real-world settings.
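The abstract names Monte Carlo tree search (MCTS) as the mechanism for sequencing learned actions without an explicit goal state. The sketch below is a minimal, generic MCTS over action sequences with sampled outcomes, intended only to illustrate that idea; it is not the authors' implementation, and all names (Node, transition, score, horizon, etc.) are illustrative assumptions standing in for the learned action models and task goal representation.

```python
# Minimal sketch: Monte Carlo tree search over short action sequences.
# `transition` samples one of several likely outcomes of an action (a stand-in
# for the learned action models); `score` rates how well a reached state
# matches the demonstrated task (a stand-in for the learned goal representation).
import math
import random


class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state        # world state reached after executing `action`
        self.parent = parent
        self.action = action      # action that led to this node (None at the root)
        self.children = []
        self.visits = 0
        self.value = 0.0          # accumulated imitation score


def ucb(child, parent_visits, c=1.4):
    # Upper confidence bound: trade off exploiting good actions vs. exploring rare ones.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)


def mcts(root_state, actions, transition, score, horizon=4, iterations=500):
    """Return the first action of the sequence that best imitates the demonstration."""
    root = Node(root_state)
    for _ in range(iterations):
        # 1) Selection: descend through fully expanded nodes using UCB.
        node, depth = root, 0
        while depth < horizon and node.children and len(node.children) == len(actions):
            node = max(node.children, key=lambda c: ucb(c, node.visits))
            depth += 1
        # 2) Expansion: try one untried action, sampling one of its outcomes.
        if depth < horizon:
            tried = [c.action for c in node.children]
            a = random.choice([a for a in actions if a not in tried])
            node = Node(transition(node.state, a), parent=node, action=a)
            node.parent.children.append(node)
            depth += 1
        # 3) Rollout: random actions until the horizon, then score the reached state.
        state = node.state
        for _ in range(horizon - depth):
            state = transition(state, random.choice(actions))
        reward = score(state)
        # 4) Backpropagation: propagate the imitation score back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Act greedily on visit counts; re-run the search after executing the action.
    return max(root.children, key=lambda c: c.visits).action
```

In this reading, the score function plays the role of the learned task goal representation (no explicit goal state is given, only similarity to demonstrated outcomes), while the stochastic transition function reflects that each learned action can have several likely outcomes; executing the returned action and repeating the search is one common way to use such a planner.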
Pages: 4317 - 4324
Page count: 8
Related Papers
50 records in total
  • [1] Welschehold, Tim; Dornhege, Christian; Burgard, Wolfram. Learning Mobile Manipulation Actions from Human Demonstrations. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017: 3196-3201.
  • [2] Welschehold, Tim; Dornhege, Christian; Burgard, Wolfram. Learning Manipulation Actions from Human Demonstrations. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016: 3772-3777.
  • [3] Yuan, Jinqiang; Chew, Chee-Meng; Subramaniam, Velusamy. Learning Geometric Constraints of Actions from Demonstrations for Manipulation Task Planning. 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2018: 636-641.
  • [4] Esposito, Francesco; Pek, Christian; Welle, Michael C.; Kragic, Danica. Learning Task Constraints in Visual-Action Planning from Demonstrations. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2021: 131-138.
  • [5] Dreher, Christian R. G.; Asfour, Tamim. Learning Temporal Task Models from Human Bimanual Demonstrations. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022: 7664-7671.
  • [6] Gupta, Abhishek; Eppner, Clemens; Levine, Sergey; Abbeel, Pieter. Learning Dexterous Manipulation for a Soft Robotic Hand from Human Demonstrations. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016: 3786-3793.
  • [7] Zhang, Yan; Zhao, Fei; Liao, Zhiwei. Learning and Generalizing Variable Impedance Manipulation Skills from Human Demonstrations. 2022 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2022: 810-815.
  • [8] Silverio, Joao; Calinon, Sylvain; Rozo, Leonel; Caldwell, Darwin G. Learning Task Priorities From Demonstrations. IEEE Transactions on Robotics, 2019, 35(1): 78-94.
  • [9] Vazquez-Chanlatte, Marcell; Jha, Susmit; Tiwari, Ashish; Ho, Mark K.; Seshia, Sanjit A. Learning Task Specifications from Demonstrations. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018.
  • [10] Abdo, Nichola; Kretzschmar, Henrik; Spinello, Luciano; Stachniss, Cyrill. Learning Manipulation Actions from a Few Demonstrations. 2013 IEEE International Conference on Robotics and Automation (ICRA), 2013: 1268-1275.