Leveraging the efficiency of multi-task robot manipulation via task-evoked planner and reinforcement learning

Cited by: 1
Authors
Qi, Haofu [1 ,2 ]
Zheng, Haoyang [1 ,2 ]
Shao, Jun [1 ,2 ]
Zhang, Jiatao [1 ,2 ]
Gu, Jason [2 ]
Song, Wei [1 ,2 ]
Zhu, Shiqiang [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310030, Peoples R China
[2] Zhejiang Lab, Res Inst Interdisciplinary Innovat, Res Ctr Intelligent Robot, Hangzhou 311100, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/ICRA57147.2024.10611076
CLC classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Multi-task learning has expanded the boundaries of robotic manipulation, enabling the execution of increasingly complex tasks. However, policies learned through reinforcement learning exhibit limited generalization and narrow distributions, which restricts their effectiveness in multi-task training. Obtaining policies that are both generalizable and stable is a non-trivial problem. To tackle this issue, we propose a planning-guided reinforcement learning method that combines a task-evoked planner (TEP) with a reinforcement learning approach guided by the planner. TEP uses reusable samples as its source, with the aim of learning reachability information across different task scenarios. Then, during reinforcement learning, TEP assesses and guides the Actor toward better outputs, smoothly improving performance on multi-task benchmarks. We evaluate this approach within the Meta-World framework and compare it with prior works in terms of learning efficiency and effectiveness. Experimental results show that our method is more efficient, achieves higher success rates, and demonstrates more realistic behavior.
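The abstract's core idea, a planner that assesses candidate actions and guides the Actor toward better outputs, can be caricatured as follows. This is a minimal toy sketch, not the paper's method: `planner_score`, `guided_actor_step`, the Euclidean reachability proxy, the candidate set, and the blending rule are all illustrative assumptions standing in for the learned TEP and the actual training objective.

```python
import numpy as np

def planner_score(state, action, goal):
    """Toy reachability score: higher when the action moves the state toward the goal.

    A stand-in for the task-evoked planner's learned reachability estimate.
    """
    next_state = state + action
    return -np.linalg.norm(next_state - goal)

def guided_actor_step(state, actor_action, goal, candidates, alpha=0.5):
    """Blend the Actor's proposal toward the best planner-scored candidate.

    `alpha` controls how strongly the planner's assessment pulls the Actor's
    output; the blending rule is an illustrative assumption.
    """
    scores = [planner_score(state, a, goal) for a in candidates]
    best = candidates[int(np.argmax(scores))]
    return (1 - alpha) * actor_action + alpha * best
```

For example, with the state at the origin, the goal at `[1, 0]`, and candidates `[1, 0]` and `[-1, 0]`, the planner prefers the first candidate, so an Actor proposal of `[0, 1]` is pulled halfway toward it, yielding `[0.5, 0.5]`.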
Pages: 9220 - 9226
Page count: 7
Related papers
(showing 10 of 50)
  • [1] Discovering Synergies for Robot Manipulation with Multi-Task Reinforcement Learning
    He, Zhanpeng
    Ciocarlie, Matei
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 2714 - 2721
  • [2] Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning
    He, Haoran
    Bai, Chenjia
    Xu, Kang
    Yang, Zhuoran
    Zhang, Weinan
    Wang, Dong
    Zhao, Bin
    Li, Xuelong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] Decision making on robot with multi-task using deep reinforcement learning for each task
    Shimoguchi, Yuya
    Kurashige, Kentarou
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 3460 - 3465
  • [4] Unsupervised Task Clustering for Multi-task Reinforcement Learning
    Ackermann, Johannes
    Richter, Oliver
    Wattenhofer, Roger
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, 2021, 12975 : 222 - 237
  • [5] AMP: Multi-Task Transfer Learning via Leveraging Attention Mechanism on Task Embeddings
    Yu, Yangyang
    Wang, Keru
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2025, 39 (02)
  • [6] Multi-task reinforcement learning in humans
    Tomov, Momchil S.
    Schulz, Eric
    Gershman, Samuel J.
    NATURE HUMAN BEHAVIOUR, 2021, 5 (06) : 764 - 773
  • [7] Multi-Task Reinforcement Learning for Quadrotors
    Xing, Jiaxu
    Geles, Ismail
    Song, Yunlong
    Aljalbout, Elie
    Scaramuzza, Davide
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (03) : 2112 - 2119
  • [8] Sparse Multi-Task Reinforcement Learning
    Calandriello, Daniele
    Lazaric, Alessandro
    Restelli, Marcello
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [9] Sparse multi-task reinforcement learning
    Calandriello, Daniele
    Lazaric, Alessandro
    Restelli, Marcello
    INTELLIGENZA ARTIFICIALE, 2015, 9 (01) : 5 - 20