Learning Sensorimotor Primitives of Sequential Manipulation Tasks from Visual Demonstrations

Cited by: 4
Authors
Liang, Junchi [1 ]
Wen, Bowen [1 ]
Bekris, Kostas [1 ]
Boularias, Abdeslam [1 ]
Affiliations
[1] Rutgers State Univ, Dept Comp Sci, New Brunswick, NJ 08901 USA
Keywords
DOI
10.1109/ICRA46639.2022.9811703
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
This work aims to learn how to perform complex robot manipulation tasks that are composed of several consecutively executed low-level sub-tasks, given as input a few visual demonstrations of the tasks performed by a person. Each sub-task consists of moving the robot's end-effector until it reaches a sub-goal region in the task space, performing an action, and triggering the next sub-task when a pre-condition is met. Most prior work in this domain has been concerned with learning only low-level tasks, such as hitting a ball or reaching an object and grasping it. This paper describes a new neural network-based framework for simultaneously learning both low-level policies and high-level policies, such as deciding which object to pick next or where to place it relative to other objects in the scene. A key feature of the proposed approach is that the policies are learned directly from raw videos of task demonstrations, without any manual annotation or post-processing of the data. Empirical results on object manipulation tasks with a robotic arm show that the proposed network can efficiently learn from real visual demonstrations to perform the tasks, and outperforms popular imitation learning algorithms.
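The sub-task structure described in the abstract (reach a sub-goal region, perform an action, advance when a pre-condition holds) can be sketched as a minimal execution loop. This is a hypothetical illustration only: the `SubTask` class, `run_task` function, and the simple proportional reaching controller are assumptions for exposition, not the paper's actual learned policies.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Vec = Tuple[float, float, float]  # end-effector position in task space

@dataclass
class SubTask:
    """Hypothetical sub-task: sub-goal region, action, and pre-condition."""
    sub_goal: Vec                        # center of the sub-goal region
    action: Callable[[], str]            # primitive executed at the sub-goal
    precondition: Callable[[Vec], bool]  # gate for triggering the next sub-task

def step_toward(pos: Vec, goal: Vec, gain: float = 0.5) -> Vec:
    """Stand-in low-level policy: move a fraction of the way to the goal."""
    return tuple(p + gain * (g - p) for p, g in zip(pos, goal))

def run_task(subtasks: List[SubTask], start: Vec,
             tol: float = 1e-3, max_steps: int = 100):
    """Execute sub-tasks in sequence, as the abstract describes."""
    pos, log = start, []
    for st in subtasks:
        # Low-level phase: drive the end-effector into the sub-goal region.
        for _ in range(max_steps):
            if all(abs(p - g) <= tol for p, g in zip(pos, st.sub_goal)):
                break
            pos = step_toward(pos, st.sub_goal)
        log.append(st.action())           # perform the sub-task's action
        if not st.precondition(pos):      # high-level gate: stop if not met
            break
    return pos, log
```

In the paper this decomposition is learned end-to-end from video; the sketch only makes the control flow of the sequential primitives concrete.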
Pages: 8591-8597
Number of pages: 7
Related Papers (50 total)
  • [1] Duan, Jianghua; Ou, Yongsheng; Xu, Sheng; Wang, Zhiyang; Peng, Ansi; Wu, Xinyu; Feng, Wei. Learning Compliant Manipulation Tasks from Force Demonstrations. 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), 2018: 449-454.
  • [2] Pardowitz, M.; Zöllner, R.; Dillmann, R. Learning Sequential Constraints of Tasks from User Demonstrations. 2005 5th IEEE-RAS International Conference on Humanoid Robots, 2005: 424-429.
  • [3] Nunes, Afonso; Figueiredo, Rui; Moreno, Plinio. Learning to Perform Visual Tasks from Human Demonstrations. Pattern Recognition and Image Analysis, IbPRIA 2019, Pt. II, 2019, 11868: 346-358.
  • [4] Graeve, Kathrin; Behnke, Sven. Learning Sequential Tasks Interactively from Demonstrations and Own Experience. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013: 3237-3243.
  • [5] Englert, Peter; Ngo Anh Vien; Toussaint, Marc. Inverse KKT: Learning Cost Functions of Manipulation Tasks from Demonstrations. International Journal of Robotics Research, 2017, 36(13-14): 1474-1488.
  • [6] Englert, Peter; Toussaint, Marc. Inverse KKT: Learning Cost Functions of Manipulation Tasks from Demonstrations. Robotics Research, Vol. 2, 2018, 3: 57-72.
  • [7] Thota, Pavan Kumar; Ravichandar, Harish Chaandar; Dani, Ashwin P. Learning and Synchronization of Movement Primitives for Bimanual Manipulation Tasks. 2016 IEEE 55th Conference on Decision and Control (CDC), 2016: 945-950.
  • [8] Dai, Jiahua; Lin, Xiangbo; Li, Jianwen. Learning Multiple Robot Manipulation Tasks with Imperfect Demonstrations. 2023 7th International Conference on Robotics and Automation Sciences (ICRAS), 2023: 6-11.
  • [9] Manschitz, Simon; Kober, Jens; Gienger, Michael; Peters, Jan. Learning to Sequence Movement Primitives from Demonstrations. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), 2014: 4414-4421.
  • [10] Nasiriany, Soroush; Liu, Huihan; Zhu, Yuke. Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks. 2022 IEEE International Conference on Robotics and Automation (ICRA 2022), 2022: 7477-7484.