FTPSG: Feature mixture transformer and potential-based subgoal generation for hierarchical multi-agent reinforcement learning

Cited by: 0
Authors
Nicholaus, Isack Thomas [1 ]
Kang, Dae-Ki [1 ]
Affiliations
[1] Dongseo Univ, Dept Comp Engn, Busan 47011, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Hierarchical reinforcement learning; Subgoal generation; Multi-agent reinforcement learning;
DOI
10.1016/j.eswa.2025.126540
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hierarchical multi-agent reinforcement learning (HMAR) is a promising approach for addressing complex multi-agent tasks. However, HMAR faces the challenge of identifying the states or skills (subgoals) that agents can efficiently solve. This paper introduces a novel approach to subgoal generation within HMAR that provides learning signals in sparse, delayed-reward environments. We propose the Feature Mixture Transformer and Potential-based Subgoal Generation (FTPSG), an efficient method for automatically generating promising subgoals by extracting and combining relevant features across past observations within a trajectory. FTPSG also utilizes a potential function to assess the probability of each subgoal leading agents to the ultimate goal. We design the potential function to rank subgoals by how well they advance the actual goal and to provide meaningful learning signals. Subgoals are then grouped by their potential, with high-potential subgoals treated as more crucial. This grouping enables agents to concentrate on the most important subgoals first. We evaluate the proposed method across various multi-agent tasks, and the results consistently show that FTPSG outperforms state-of-the-art methods on all evaluated tasks. These findings affirm FTPSG's promising role in subgoal generation within the HMAR framework.
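The abstract describes ranking candidate subgoals with a potential function and grouping them into priority tiers so that agents focus on high-potential subgoals first. A minimal illustrative sketch of that ranking-and-grouping step, assuming a hypothetical hand-written scalar `potential(s)` (the paper learns its potential function from trajectories; this is not the authors' implementation):

```python
# Illustrative sketch only: rank candidate subgoals by a scalar
# "potential" score and split them into priority tiers, so that
# tier 0 holds the highest-potential subgoals.

def rank_and_group_subgoals(subgoals, potential, n_groups=3):
    """Sort subgoals by descending potential and split them into
    n_groups priority tiers (tier 0 = highest potential)."""
    ranked = sorted(subgoals, key=potential, reverse=True)
    size = -(-len(ranked) // n_groups)  # ceiling division
    return [ranked[i * size:(i + 1) * size] for i in range(n_groups)]

# Toy example: subgoals are grid positions, and the stand-in
# potential is the negative Manhattan distance to a goal at (5, 5).
goal = (5, 5)

def potential(s):
    return -abs(s[0] - goal[0]) - abs(s[1] - goal[1])

subgoals = [(0, 0), (4, 5), (2, 3), (5, 4), (1, 1), (3, 4)]
tiers = rank_and_group_subgoals(subgoals, potential)
# tiers[0] contains the subgoals closest to the goal.
```

In the paper, the potential function additionally shapes the learning signal; here it is reduced to a ranking criterion purely to make the grouping step concrete.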
Pages: 14