Robust Multi-Agent Coordination via Evolutionary Generation of Auxiliary Adversarial Attackers

Cited by: 0
Authors
Yuan, Lei [1 ,2 ]
Zhang, Ziqian [1 ]
Xue, Ke [1 ]
Yin, Hao [1 ]
Chen, Feng [1 ]
Guan, Cong [1 ]
Li, Lihe [1 ]
Qian, Chao [1 ]
Yu, Yang [1 ,2 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[2] Polixir Technol, Nanjing 210000, Peoples R China
Funding
U.S. National Science Foundation
DOI
None available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Cooperative multi-agent reinforcement learning (CMARL) has shown promise for many real-world applications. Previous works mainly focus on improving coordination ability by solving MARL-specific challenges (e.g., non-stationarity, credit assignment, scalability), but ignore the policy perturbation issue that arises when testing in a different environment. This issue has not been considered in problem formulation or efficient algorithm design. To address it, we first model the problem as a limited policy adversary Dec-POMDP (LPA-Dec-POMDP), where some coordinators in a team may accidentally and unpredictably encounter a limited number of malicious action attacks while the remaining coordinators still strive for the intended goal. We then propose Robust Multi-Agent Coordination via Evolutionary Generation of Auxiliary Adversarial Attackers (ROMANCE), which exposes the trained policy to diversified and strong auxiliary adversarial attacks during training, thus achieving high robustness under various policy perturbations. Concretely, to avoid the ego-system overfitting to a specific attacker, we maintain a set of attackers optimized to guarantee both high attacking quality and behavioral diversity. The quality objective is to minimize the ego-system's coordination effect, and a novel diversity regularizer based on sparse action is applied to diversify the attackers' behaviors. The ego-system is then paired with a population of attackers selected from the maintained attacker set and alternately trained against the constantly evolving attackers. Extensive experiments on multiple scenarios from SMAC show that ROMANCE provides comparable or better robustness and generalization ability than other baselines.
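The alternating scheme described in the abstract — score a pool of attackers by attack quality plus a diversity bonus, select a population, update the ego-system against it, then evolve the pool — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's method: the function names (`romance_sketch`, `evaluate_quality`, `diversity`), the scalar stand-ins for policies, and the mutation scheme are all hypothetical simplifications; the actual work uses neural policies, the sparse-action regularizer, and SMAC environments.

```python
import random

def evaluate_quality(ego, attacker):
    # Quality stand-in: how strongly the attacker perturbs the ego policy.
    # In ROMANCE this would be the degradation of ego-system coordination.
    return abs(ego - attacker)

def diversity(attacker, others):
    # Diversity stand-in for the sparse-action regularizer: reward
    # attackers whose behavior is far from the rest of the pool.
    if not others:
        return 0.0
    return min(abs(attacker - o) for o in others)

def romance_sketch(generations=10, pool_size=8, pop_size=3, seed=0):
    rng = random.Random(seed)
    ego = 0.0  # scalar stand-in for the ego-system policy
    pool = [rng.uniform(-1.0, 1.0) for _ in range(pool_size)]
    for _ in range(generations):
        # Score each attacker by attacking quality plus behavioral diversity.
        scored = sorted(
            pool,
            key=lambda a: evaluate_quality(ego, a)
            + diversity(a, [o for o in pool if o is not a]),
            reverse=True,
        )
        population = scored[:pop_size]
        # Ego update (stub): train against the selected attacker population.
        ego += 0.1 * sum(population) / pop_size
        # Evolve the pool: keep the selected attackers and refill the set
        # with mutated copies, so the attackers keep adapting to the ego.
        pool = population + [
            rng.choice(population) + rng.gauss(0.0, 0.2)
            for _ in range(pool_size - pop_size)
        ]
    return ego, pool

ego, pool = romance_sketch()
print(len(pool))
```

The key design point mirrored here is that selection pressure combines two objectives: pure quality alone would collapse the pool onto one strong attacker (which the ego-system could overfit to), so the diversity term keeps the surviving attackers behaviorally spread out.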
Pages: 11753-11762
Number of pages: 10