Hybrid Policy Learning for Multi-Agent Pathfinding

Cited by: 9
Authors
Skrynnik, Alexey [1 ]
Yakovleva, Alexandra [2 ]
Davydov, Vasilii [2 ]
Yakovlev, Konstantin [1 ,2 ]
Panov, Aleksandr I. [1 ,2 ]
Affiliations
[1] Russian Acad Sci, Fed Res Ctr Comp Sci & Control, Moscow 119333, Russia
[2] Moscow Inst Phys & Technol, Dolgoprudnyi 141700, Moscow Region, Russia
Source
IEEE ACCESS | 2021, Vol. 9
Keywords
Reinforcement learning; Planning; Task analysis; Autonomous vehicles; Navigation; Costs; Monte Carlo methods; Multiagent systems; path planning; machine learning; intelligent transportation systems; reinforcement learning; Monte-Carlo Tree Search; GO; NETWORKS;
DOI
10.1109/ACCESS.2021.3111321
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In this work we study the behavior of groups of autonomous vehicles that are part of Internet of Vehicles systems. One of the challenging modes of operation of such systems is the case when the observability of each vehicle is limited and global/local communication is unstable, e.g., on crowded parking lots. In such scenarios the vehicles have to rely on local observations and exhibit cooperative behavior to ensure safe and efficient trips. This type of problem can be abstracted to so-called multi-agent pathfinding, in which a group of agents confined to a graph has to find collision-free paths to their goals (ideally minimizing an objective function, e.g., travel time). Widely used algorithms for solving this problem rely on the assumption that a central controller exists that knows the full state of the environment (i.e., the agents' current positions, their targets, the configuration of static obstacles, etc.), and they cannot be straightforwardly adapted to partially observable setups. To this end, we suggest a novel approach based on decomposing the problem into two sub-tasks: reaching the goal and avoiding collisions. To accomplish each of these tasks we utilize reinforcement learning methods, such as Deep Monte Carlo Tree Search, Q-mixing networks, and policy gradient methods, to design policies that map the agents' observations to actions. Next, we introduce a policy-mixing mechanism that yields a single hybrid policy allowing each agent to exhibit both types of behavior: the individual one (reaching the goal) and the cooperative one (avoiding collisions with other agents). An extensive empirical evaluation shows that the suggested hybrid policy outperforms standalone state-of-the-art reinforcement learning methods for this class of problems by a notable margin.
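The sketch below illustrates the policy-mixing idea from the abstract: two sub-policies (goal-reaching and collision-avoidance) are blended into a single hybrid policy per agent. The sub-policy stubs, the observation fields `goal_logits`/`avoid_logits`, and the visibility-based weighting rule are illustrative assumptions, not the authors' actual formulation (the paper's sub-policies are learned with Deep Monte Carlo Tree Search, Q-mixing networks, and policy gradients).

```python
import numpy as np

# Illustrative sketch of a policy-mixing mechanism (assumed details,
# not the exact method of Skrynnik et al.).

ACTIONS = ["up", "down", "left", "right", "wait"]

def goal_policy_logits(obs):
    """Individual sub-policy: logits for reaching the goal.
    In the paper this role is played by a learned policy; stubbed here."""
    return obs["goal_logits"]

def collision_policy_logits(obs):
    """Cooperative sub-policy: logits for avoiding other agents
    (e.g., trained with Q-mixing networks); stubbed here."""
    return obs["avoid_logits"]

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def hybrid_policy(obs, other_agent_visible):
    """Mix the two sub-policies into one action distribution and act greedily.
    The mixing weight w is a hypothetical rule favouring collision avoidance
    when another agent appears in the local observation."""
    w = 0.8 if other_agent_visible else 0.1  # assumed mixing weights
    p_goal = softmax(goal_policy_logits(obs))
    p_avoid = softmax(collision_policy_logits(obs))
    mixed = (1.0 - w) * p_goal + w * p_avoid
    return ACTIONS[int(np.argmax(mixed))]

# Dummy observation: the goal-reaching policy prefers "up",
# the collision-avoidance policy prefers "wait".
obs = {"goal_logits": np.array([2.0, 0.1, 0.1, 0.1, 0.0]),
       "avoid_logits": np.array([0.0, 0.1, 0.1, 0.1, 2.0])}
print(hybrid_policy(obs, other_agent_visible=True))   # -> "wait"
print(hybrid_policy(obs, other_agent_visible=False))  # -> "up"
```

Under these assumptions, the agent behaves individually (heads toward its goal) when no other agent is observed, and cooperatively (yields or waits) when one is, which mirrors the two behavior types described in the abstract.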
Pages: 126034 - 126047
Page count: 14
Related Papers
50 items in total
  • [41] A Systematic Literature Review of Multi-agent Pathfinding for Maze Research
    Tjiharjadi, Semuil
    Razali, Sazalinsyah
    Sulaiman, Hamzah Asyrani
    JOURNAL OF ADVANCES IN INFORMATION TECHNOLOGY, 2022, 13 (04) : 358 - 367
  • [42] Multi-agent Pathfinding Based on Improved Cooperative A* in Kiva System
    Liu, Yiming
    Chen, Mengxia
    Huang, Hejiao
    CONFERENCE PROCEEDINGS OF 2019 5TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND ROBOTICS (ICCAR), 2019, : 633 - 638
  • [43] Continuous optimisation problem and game theory for multi-agent pathfinding
    Kuznetsov, Alexander V.
    Schumann, Andrew
    Rataj, Małgorzata
    INTERNATIONAL JOURNAL OF GAME THEORY, 2024, 53 (01) : 1 - 41
  • [44] Conflict-based search for optimal multi-agent pathfinding
    Sharon, Guni
    Stern, Roni
    Felner, Ariel
    Sturtevant, Nathan R.
    ARTIFICIAL INTELLIGENCE, 2015, 219 : 40 - 66
  • [45] The increasing cost tree search for optimal multi-agent pathfinding
    Sharon, Guni
    Stern, Roni
    Goldenberg, Meir
    Felner, Ariel
    ARTIFICIAL INTELLIGENCE, 2013, 195 : 470 - 495
  • [46] SCRIMP: Scalable Communication for Reinforcement- and Imitation-Learning-Based Multi-Agent Pathfinding
    Wang, Yutong
    Xiang, Bairan
    Huang, Shinan
    Sartoretti, Guillaume
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 9301 - 9308
  • [47] PRIMAL2: Pathfinding Via Reinforcement and Imitation Multi-Agent Learning-Lifelong
    Damani, Mehul
    Luo, Zhiyao
    Wenzel, Emerson
    Sartoretti, Guillaume
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02) : 2666 - 2673
  • [49] Improving LaCAM for Scalable Eventually Optimal Multi-Agent Pathfinding
    Okumura, Keisuke
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 243 - 251
  • [50] Using Hierarchical Constraints to Avoid Conflicts in Multi-Agent Pathfinding
    Walker, Thayne T.
    Chan, David M.
    Sturtevant, Nathan R.
    TWENTY-SEVENTH INTERNATIONAL CONFERENCE ON AUTOMATED PLANNING AND SCHEDULING, 2017, : 316 - 324