A Hierarchical Robot Learning Framework for Manipulator Reactive Motion Generation via Multi-Agent Reinforcement Learning and Riemannian Motion Policies

Cited by: 1
Authors
Wang, Yuliu [1 ,2 ]
Sagawa, Ryusuke [1 ,2 ]
Yoshiyasu, Yusuke [2 ]
Affiliations
[1] Univ Tsukuba, Intelligent & Mech Interact Syst Program, Tsukuba, Ibaraki 3058577, Japan
[2] Natl Inst Adv Ind Sci & Technol, Artificial Intelligence Res Ctr, Comp Vis Res Team, Tsukuba, Ibaraki 3058560, Japan
Funding
Japan Society for the Promotion of Science
Keywords
Riemannian motion policies; motion generation; motion planning; robot learning; multi-agent reinforcement learning; hierarchical reinforcement learning;
DOI
10.1109/ACCESS.2023.3324039
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Number
0812
Abstract
Manipulator motion planning faces new challenges as robots are increasingly deployed in dense, cluttered, and dynamic environments. The recently proposed technique of Riemannian motion policies (RMPs) provides an elegant solution with a clear mathematical interpretation for such challenging scenarios. It is based on differential-geometry policies that generate reactive motions in dynamic environments in real time. However, designing and combining RMPs remains a difficult task involving extensive parameter tuning: typically seven or more RMPs must be combined using RMPflow to realize the motions of a robot manipulator with more than six degrees of freedom, and the RMP parameters have to be set empirically each time. In this paper, we decompose such complex policies into multiple learning modules based on reinforcement learning. Specifically, we propose a three-layer robot learning framework consisting of basic-level, middle-level, and top-level layers. At the basic level, only two base RMPs, target attraction and collision avoidance, are used to output reactive actions. At the middle level, a hierarchical reinforcement learning approach trains an agent, deployed at each joint, that automatically selects these RMPs and their parameters based on environmental changes. At the top level, a multi-agent reinforcement learning approach trains all joints with high-level collaborative policies to accomplish actions such as tracking a target and avoiding obstacles. In simulation experiments, we compare the proposed method with a baseline method and find that our method effectively produces superior actions and is better at avoiding obstacles, handling self-collisions, and avoiding singularities in dynamic environments.
In addition, the proposed framework offers higher training efficiency while leveraging the generalization ability of reinforcement learning in dynamic environments and improving safety and interpretability.
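The basic level described in the abstract combines just two base RMPs: target attraction and collision avoidance. As a rough illustration of the underlying mechanism (not the authors' implementation), the sketch below shows the standard metric-weighted RMP combination, a = (Σᵢ Aᵢ)⁺ Σᵢ Aᵢ fᵢ, in NumPy. All gains, function names, and the simple point-obstacle model here are illustrative assumptions.

```python
import numpy as np

def combine_rmps(policies):
    """Resolve a list of (f, A) RMPs into one acceleration:
    a = (sum_i A_i)^+ (sum_i A_i f_i)."""
    A_sum = sum(A for _, A in policies)
    fa_sum = sum(A @ f for f, A in policies)
    return np.linalg.pinv(A_sum) @ fa_sum

def target_rmp(x, x_goal, v=None, gain=1.0, damping=2.0):
    """Damped attractor toward a goal point, with an isotropic metric."""
    v = np.zeros_like(x) if v is None else v
    f = gain * (x_goal - x) - damping * v
    A = np.eye(len(x))
    return f, A

def collision_rmp(x, x_obs, margin=0.2, eta=4.0):
    """Repulsion from a point obstacle; the metric weights only the
    obstacle direction and grows as the obstacle gets closer."""
    d_vec = x - x_obs
    d = np.linalg.norm(d_vec)
    n = d_vec / d                                # unit vector away from obstacle
    f = eta / max(d - margin, 1e-3) ** 2 * n     # repulsive acceleration
    w = min(1.0, margin / d) ** 2                # importance weight near obstacle
    A = w * np.outer(n, n)                       # directionally weighted metric
    return f, A
```

With the obstacle far away, its metric weight is tiny and the combined acceleration is dominated by the attractor; as the obstacle nears, the repulsive term takes over in the obstacle direction. The learned middle- and top-level layers in the paper would, in this picture, select which RMPs are active and set their parameters (`gain`, `margin`, `eta`, ...) per joint.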
Pages: 126979-126994 (16 pages)
Related Papers (50 records)
  • [41] Hierarchical Consensus-Based Multi-Agent Reinforcement Learning for Multi-Robot Cooperation Tasks. Feng, Pu; Liang, Junkang; Wang, Size; Yu, Xin; Ji, Xin; Chen, Yiting; Zhang, Kui; Shi, Rongye; Wu, Wenjun. 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024, 2024: 642-649.
  • [42] Multi-Agent Reinforcement Learning. Stankovic, Milos. 2016 13th Symposium on Neural Networks and Applications (NEUREL), 2016: 43-43.
  • [43] Hierarchical Architecture for Multi-Agent Reinforcement Learning in Intelligent Game. Li, Bin. 2022 International Joint Conference on Neural Networks (IJCNN), 2022.
  • [44] Constructing a hierarchical ontology for reinforcement learning multi-agent system. Yu, XL; Wang, L; Cui, DH. ISTM/2003: 5th International Symposium on Test and Measurement, Vols 1-6, Conference Proceedings, 2003: 1249-1252.
  • [45] Hierarchical Multi-Agent Reinforcement Learning for Air Combat Maneuvering. Selmonaj, Ardian; Szehr, Oleg; Del Rio, Giacomo; Antonucci, Alessandro; Schneider, Adrian; Ruegsegger, Michael. 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023, 2023: 1031-1038.
  • [46] Multi-agent hierarchical reinforcement learning by integrating options into MAXQ. Shen, Jing; Gu, Guochang; Liu, Haibo. First International Multi-Symposiums on Computer and Computational Sciences (IMSCCS 2006), Proceedings, Vol 1, 2006: 676-+.
  • [47] Multi-agent event triggered hierarchical security reinforcement learning. Sun, Hui-Hui; Hu, Chun-He; Zhang, Jun-Guo. Kongzhi yu Juece/Control and Decision, 2024, 39 (11): 3755-3762.
  • [48] Deep Hierarchical Communication Graph in Multi-Agent Reinforcement Learning. Liu, Zeyang; Wan, Lipeng; Sui, Xue; Chen, Zhuoran; Sun, Kewu; Lan, Xuguang. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 2023: 208-216.
  • [49] Distributed hierarchical reinforcement learning in multi-agent adversarial environments. Naderializadeh, Navid; Soleyman, Sean; Hung, Fan; Khosla, Deepak; Chen, Yang; Fadaie, Joshua G. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications IV, 2022, 12113.
  • [50] Trajectory planning of space manipulator based on multi-agent reinforcement learning. Zhao, Y.; Guan, G.; Guo, J.; Yu, X.; Yan, P. Hangkong Xuebao/Acta Aeronautica et Astronautica Sinica, 2021, 42 (01).