Discovering Synergies for Robot Manipulation with Multi-Task Reinforcement Learning

Cited by: 3
Authors
He, Zhanpeng [1 ]
Ciocarlie, Matei [2 ]
Affiliations
[1] Columbia Univ, Dept Comp Sci, New York, NY 10027 USA
[2] Columbia Univ, Dept Mech Engn, New York, NY USA
Keywords
DOI
10.1109/ICRA46639.2022.9812170
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Controlling robotic manipulators with high-dimensional action spaces for dexterous tasks is a challenging problem. Inspired by human manipulation, researchers have studied generating and using postural synergies for robot hands to accomplish manipulation tasks, leveraging the lower dimensional nature of synergistic action spaces. However, many of these works require pre-collected data from an existing controller in order to derive such a subspace by means of dimensionality reduction. In this paper, we present a framework that simultaneously discovers both a synergy space and a multi-task policy that operates on this low-dimensional action space to accomplish diverse manipulation tasks. We demonstrate that our end-to-end method is able to perform multiple tasks using few synergies, and outperforms sequential methods that apply dimensionality reduction to independently collected data. We also show that deriving synergies using multiple tasks can lead to a subspace that enables robots to efficiently learn new manipulation tasks and interactions with new objects.
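The framework described in the abstract can be pictured as a policy that acts in a learned low-dimensional synergy space, with a learnable linear map decoding synergy activations into full joint commands, so that both the policy and the synergy subspace are shaped end-to-end across tasks. The following Python/PyTorch code is a minimal sketch of that idea under stated assumptions, not the authors' implementation; the class name SynergyPolicy, the network sizes, the dimensions, and the one-hot task conditioning are illustrative choices.

# A minimal sketch (assumption: not the paper's actual code) of acting in a
# learned synergy space. A task-conditioned policy outputs a low-dimensional
# synergy action z; a learnable linear map A projects z into the robot's
# full joint-command space. Both are trained together by the RL objective.
import torch
import torch.nn as nn

class SynergyPolicy(nn.Module):
    def __init__(self, obs_dim: int, num_tasks: int,
                 num_synergies: int = 3, num_joints: int = 16):
        super().__init__()
        # Task-conditioned policy over the low-dimensional synergy space.
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim + num_tasks, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, num_synergies),
        )
        # Learned synergy matrix: maps synergy activations to joint commands.
        # Shared across all tasks, so multi-task training shapes the subspace.
        self.synergy_map = nn.Linear(num_synergies, num_joints, bias=True)

    def forward(self, obs: torch.Tensor, task_onehot: torch.Tensor) -> torch.Tensor:
        z = self.backbone(torch.cat([obs, task_onehot], dim=-1))  # synergy action
        return self.synergy_map(z)                                # joint command

# Usage: joint commands for a batch of observations drawn from two tasks.
policy = SynergyPolicy(obs_dim=32, num_tasks=2)
obs = torch.randn(4, 32)
task = torch.eye(2)[torch.tensor([0, 1, 0, 1])]
joint_cmd = policy(obs, task)   # shape (4, 16), produced through 3 synergies
print(joint_cmd.shape)

In this sketch the "few synergies" claim corresponds to keeping num_synergies much smaller than num_joints; the linear decoder plays the role of the discovered synergy basis.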
Pages: 2714-2721
Page count: 8