Leveraging the efficiency of multi-task robot manipulation via task-evoked planner and reinforcement learning

Cited by: 1
Authors
Qi, Haofu [1 ,2 ]
Zheng, Haoyang [1 ,2 ]
Shao, Jun [1 ,2 ]
Zhang, Jiatao [1 ,2 ]
Gu, Jason [2 ]
Song, Wei [1 ,2 ]
Zhu, Shiqiang [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310030, Peoples R China
[2] Zhejiang Lab, Res Inst Interdisciplinary Innovat, Res Ctr Intelligent Robot, Hangzhou 311100, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/ICRA57147.2024.10611076
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Multi-task learning has expanded the boundaries of robotic manipulation, enabling the execution of increasingly complex tasks. However, policies learned through reinforcement learning exhibit limited generalization and narrow distributions, which restricts their effectiveness in multi-task training. Obtaining policies that are both generalizable and stable is a non-trivial problem. To tackle this issue, we propose a planning-guided reinforcement learning method that combines a task-evoked planner (TEP) with a reinforcement learning approach guided by the planner. TEP learns from reusable samples, with the aim of capturing reachability information across different task scenarios. During reinforcement learning, TEP then assesses and guides the Actor toward better outputs, smoothly improving performance on multi-task benchmarks. We evaluate this approach within the Meta-World framework and compare it with prior works in terms of learning efficiency and effectiveness. Experimental results show that our method is more efficient, achieves higher success rates, and exhibits more realistic behavior.
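The abstract describes the planner assessing and guiding the Actor toward better outputs. The following is a minimal toy sketch of that general idea, not the paper's actual algorithm: it assumes a planner exposes a reachability score for (state, action) pairs and uses it to nudge the actor's proposed action toward a better-scored candidate. All names (`tep_score`, `guided_update`), the trivial dynamics, and the blending scheme are illustrative assumptions.

```python
import numpy as np

def tep_score(state, action, goal):
    """Toy reachability score: the closer the predicted next state is
    to the goal, the higher the score. Dynamics are trivially additive
    here purely for illustration."""
    next_state = state + action
    return -np.linalg.norm(next_state - goal)

def guided_update(state, actor_action, goal, n_candidates=32, noise=0.1, alpha=0.5):
    """Sample perturbations of the actor's action, let the planner score
    them, and blend the actor's proposal with the best-scored candidate."""
    rng = np.random.default_rng(0)
    candidates = actor_action + noise * rng.standard_normal(
        (n_candidates, actor_action.size))
    scores = np.array([tep_score(state, a, goal) for a in candidates])
    best = candidates[scores.argmax()]
    # Partial blend: the planner guides rather than overrides the actor.
    return (1 - alpha) * actor_action + alpha * best

state = np.zeros(2)
goal = np.array([1.0, 1.0])
raw = np.zeros(2)            # actor's unguided proposal
guided = guided_update(state, raw, goal)
```

In this sketch the guided action scores at least as well as the raw one under the planner's metric; in the actual method the guidance signal would instead shape the Actor's learning updates.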
Pages: 9220 - 9226 (7 pages)