Generative subgoal oriented multi-agent reinforcement learning through potential field

Times Cited: 0
Authors
Li, Shengze [1 ]
Jiang, Hao [1 ]
Liu, Yuntao [1 ]
Zhang, Jieyuan [1 ]
Xu, Xinhai [1 ]
Liu, Donghong [1 ]
Affiliations
[1] Academy of Military Sciences, Beijing 100000, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Multi-agent reinforcement learning; Subgoal generation; Potential field;
DOI
10.1016/j.neunet.2024.106552
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multi-agent reinforcement learning (MARL) effectively improves the learning speed of agents in sparse-reward tasks with the guidance of subgoals. However, existing works sever the consistency between the learning objectives of the subgoal-generation and subgoal-reaching stages, thereby significantly inhibiting the effectiveness of subgoal learning. To address this problem, we propose a novel Potential field Subgoal-based Multi-Agent reinforcement learning (PSMA) method, which introduces the potential field (PF) to unify the learning objectives of the two stages. Specifically, we design a state-to-PF representation model that describes agents' states as potential fields, allowing easy measurement of the interaction effect of both allied and enemy agents. With the PF representation, a subgoal selector is designed to automatically generate multiple subgoals for each agent, drawn from an experience replay buffer that contains both individual and total PF values. Based on the determined subgoals, we define an intrinsic reward function to guide each agent to reach its respective subgoal while maximizing the joint action-value. Experimental results show that our method outperforms state-of-the-art MARL methods on both StarCraft II micromanagement (SMAC) and Google Research Football (GRF) tasks with sparse-reward settings.
Pages: 11
Related Papers
50 records in total
  • [1] Multi-Agent Reinforcement Learning with Multi-Step Generative Models
    Krupnik, Orr
    Mordatch, Igor
    Tamar, Aviv
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019
  • [2] FTPSG: Feature mixture transformer and potential-based subgoal generation for hierarchical multi-agent reinforcement learning
    Nicholaus, Isack Thomas
    Kang, Dae-Ki
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 270
  • [3] Mean Field Multi-Agent Reinforcement Learning
    Yang, Yaodong
    Luo, Rui
    Li, Minne
    Zhou, Ming
    Zhang, Weinan
    Wang, Jun
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018
  • [4] Adaptive mean field multi-agent reinforcement learning
    Wang, Xiaoqiang
    Ke, Liangjun
    Zhang, Gewei
    Zhu, Dapeng
    INFORMATION SCIENCES, 2024, 669
  • [5] Causal Mean Field Multi-Agent Reinforcement Learning
    Ma, Hao
    Pu, Zhiqiang
    Pan, Yi
    Liu, Boyin
    Gao, Junlong
    Guo, Zhenyu
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2023
  • [6] Drone mapping through multi-agent reinforcement learning
    Zanol, Riccardo
    Chiariotti, Federico
    Zanella, Andrea
    2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2019
  • [7] Multi-Agent Reinforcement Learning
    Stankovic, Milos
    2016 13TH SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS (NEUREL), 2016: 43
  • [8] Target-Oriented Multi-Agent Coordination with Hierarchical Reinforcement Learning
    Yu, Yuekang
    Zhai, Zhongyi
    Li, Weikun
    Ma, Jianyu
    APPLIED SCIENCES-BASEL, 2024, 14 (16)
  • [9] Interpersonal trust modelling through multi-agent Reinforcement Learning
    Frey, Vincent
    Martinez, Julian
    COGNITIVE SYSTEMS RESEARCH, 2024, 83
  • [10] Multi-Agent Cognition Difference Reinforcement Learning for Multi-Agent Cooperation
    Wang, Huimu
    Qiu, Tenghai
    Liu, Zhen
    Pu, Zhiqiang
    Yi, Jianqiang
    Yuan, Wanmai
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021