Diversifying behaviors for learning in Asymmetric Multiagent Systems

Cited: 2
Authors
Dixit, Gaurav [1]
Gonzalez, Everardo [1]
Tumer, Kagan [1]
Affiliations
[1] Oregon State Univ, Corvallis, OR 97331 USA
Funding
National Science Foundation (USA);
Keywords
Adaptive Team Balancing; Quality Diversity; Multiagent learning; Evolution;
DOI
10.1145/3512290.3528860
CLC Classification
TP3 [Computing technology, computer technology];
Subject Classification Code
0812;
Abstract
To achieve coordination in multiagent systems such as air traffic control or search and rescue, agents must not only evolve their policies, but also adapt to the behaviors of other agents. However, extending coevolutionary algorithms to complex domains is difficult because agents evolve in the dynamic environment created by the changing policies of other agents. This problem is exacerbated when the teams consist of diverse asymmetric agents (agents with different capabilities and objectives), making it difficult for agents to evolve complementary policies. Quality-Diversity methods solve part of the problem by allowing agents to discover not just optimal, but diverse behaviors, but are computationally intractable in multiagent settings. This paper introduces a multiagent learning framework to allow asymmetric agents to specialize and explore diverse behaviors needed for coordination in a shared environment. The key insight of this work is that a hierarchical decomposition of diversity search, fitness optimization, and team composition modeling allows the fitness on the team-wide objective to direct the diversity search in a dynamic environment. Experimental results in multiagent environments with temporal and spatial coupling requirements demonstrate the diversity of acquired agent synergies in response to a changing environment and team compositions.
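The abstract describes the approach only at a high level, so the following is a hypothetical minimal sketch of the core idea, not the paper's actual algorithm: each asymmetric agent type keeps a MAP-Elites-style archive binned by a behavior descriptor, teams are composed by sampling one policy per type, and the team-wide fitness of the joint rollout decides which elites occupy each niche, so fitness directs the diversity search. All names, the toy objective, and the descriptor are assumptions for illustration.

```python
import random

random.seed(0)

N_BINS = 5  # behavior-descriptor resolution per agent-type archive

def random_policy():
    # A "policy" is just a 3-parameter vector in this toy sketch.
    return [random.uniform(0.0, 1.0) for _ in range(3)]

def mutate(policy):
    # Gaussian perturbation, clipped to the parameter range.
    return [min(1.0, max(0.0, x + random.gauss(0.0, 0.1))) for x in policy]

def behavior_bin(policy):
    # Descriptor: mean parameter value, discretized into N_BINS niches.
    return min(int(sum(policy) / len(policy) * N_BINS), N_BINS - 1)

def team_fitness(team):
    # Toy coupled objective: the agents' first parameters must sum to a
    # target, so the asymmetric agents need complementary policies.
    return -abs(sum(p[0] for p in team) - 1.5)

def evolve(n_types=2, iterations=500):
    # One MAP-Elites-style archive per agent type: bin -> (policy, fitness).
    archives = [{} for _ in range(n_types)]
    for _ in range(iterations):
        # Compose a team: usually mutate a stored elite, sometimes explore.
        team = []
        for archive in archives:
            if archive and random.random() < 0.8:
                elite, _ = random.choice(list(archive.values()))
                team.append(mutate(elite))
            else:
                team.append(random_policy())
        f = team_fitness(team)
        # The team-wide fitness decides whether each member claims its niche.
        for archive, policy in zip(archives, team):
            b = behavior_bin(policy)
            if b not in archive or f > archive[b][1]:
                archive[b] = (policy, f)
    return archives

archives = evolve()
print("occupied niches per agent type:", [sorted(a) for a in archives])
```

The per-agent archives keep the diversity search tractable (each archive is low-dimensional), while crediting the shared team objective back to every member's niche is one simple way to let a changing team composition reshape which diverse behaviors survive.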
Pages: 350-358
Page count: 9
Related Papers
(50 records)
  • [22] A Survey on Transfer Learning for Multiagent Reinforcement Learning Systems
    Da Silva, Felipe Leno
    Reali Costa, Anna Helena
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2019, 64 : 645 - 703
  • [23] Multiagent Reinforcement Social Learning toward Coordination in Cooperative Multiagent Systems
    Hao, Jianye
    Leung, Ho-Fung
    Ming, Zhong
    ACM TRANSACTIONS ON AUTONOMOUS AND ADAPTIVE SYSTEMS, 2015, 9 (04)
  • [24] Gradient based method for symmetric and asymmetric multiagent reinforcement learning
    Könönen, V
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING, 2003, 2690 : 68 - 75
  • [25] Using asymmetric keys in a certified trust model for multiagent systems
    Botelho, Vanderson
    Enembreck, Fabricio
    Avila, Braulio
    de Azevedo, Hilton
    Scalabrin, Edson
    EXPERT SYSTEMS WITH APPLICATIONS, 2011, 38 (02) : 1233 - 1240
  • [26] Improving the Language Active Learning with Multiagent Systems
    Pinzon, Cristian
    Lopez, Vivian
    Bajo, Javier
    Corchado, Juan M.
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING, PROCEEDINGS, 2009, 5788 : 719 - 726
  • [27] Learning coordination strategies for cooperative multiagent systems
    Ho, F
    Kamel, M
    MACHINE LEARNING, 1998, 33 (2-3) : 155 - 177
  • [28] Dependable learning-enabled multiagent systems
    Huang, Xiaowei
    Peng, Bei
    Zhao, Xingyu
    AI COMMUNICATIONS, 2022, 35 (04) : 407 - 420
  • [29] The dynamics of reinforcement learning in cooperative multiagent systems
    Claus, C
    Boutilier, C
    FIFTEENTH NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-98) AND TENTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE (IAAI-98) - PROCEEDINGS, 1998, : 746 - 752
  • [30] Curriculum Learning for Tightly Coupled Multiagent Systems
    Rockefeller, Golden
    Mannion, Patrick
    Tumer, Kagan
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 2174 - 2176