Adversarial attacks on cooperative multi-agent deep reinforcement learning: a dynamic group-based adversarial example transferability method

Cited by: 2
Authors
Zan, Lixia [1 ]
Zhu, Xiangbin [1 ]
Hu, Zhao-Long [1 ]
Affiliations
[1] Zhejiang Normal Univ, Coll Math & Comp Sci, Jinhua, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Multi-agent reinforcement learning; Adversarial attack; Dynamic grouping; Transfer attack; Attack efficiency; ROBUSTNESS;
DOI
10.1007/s40747-023-01145-w
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Existing research shows that cooperative multi-agent deep reinforcement learning (c-MADRL) is vulnerable to adversarial attacks, even as c-MADRL is increasingly applied to safety-critical domains. However, the robustness of c-MADRL against adversarial attacks has not been fully studied. Unlike the single-agent scenario, in c-MADRL an adversary can attack several or all agents at each time step, but attacking more agents requires more computation to generate adversarial examples and makes the attack easier to detect. How the attacker chooses one or several agents, rather than all agents, to attack is therefore a significant issue in c-MADRL. To address this issue, this paper proposes a novel adversarial attack approach that dynamically groups the agents according to relevant features and selects a group to attack based on the group's contribution to the overall reward, effectively reducing the cost and number of attacks, improving attack efficiency, and decreasing the chance of the attacker being detected. Moreover, we exploit the transferability of adversarial examples to greatly reduce the computational cost of generating them. Our method is tested in multi-agent particle environments (MPE) and in StarCraft II. Experimental results demonstrate that the proposed method effectively degrades the performance of multi-agent deep reinforcement learning algorithms with fewer attacks and lower computational cost.
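The pipeline the abstract describes — group agents by feature similarity, target the group contributing most to the team reward, then reuse (transfer) one perturbation across that group — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the grouping criterion, the per-agent contribution estimates, and the surrogate gradient used for the FGSM-style perturbation are all hypothetical placeholders.

```python
import numpy as np

def group_agents(features, n_groups=2):
    """Assign each agent to the nearest of n_groups centroids
    (a single-pass stand-in for the paper's dynamic grouping)."""
    centroids = features[:n_groups].copy()  # first agents seed the centroids
    groups = [[] for _ in range(n_groups)]
    for i, f in enumerate(features):
        g = int(np.argmin(np.linalg.norm(centroids - f, axis=1)))
        groups[g].append(i)
    return groups

def select_target_group(groups, contributions):
    """Pick the group whose members contribute most to the overall reward."""
    totals = [sum(contributions[i] for i in g) for g in groups]
    return groups[int(np.argmax(totals))]

def fgsm_perturb(obs, grad, eps=0.1):
    """FGSM-style perturbation; the same perturbation is reused
    (transferred) across every agent in the selected group."""
    return obs + eps * np.sign(grad)

# Toy run: 4 agents with 3-dim observation features.
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
contributions = np.array([0.1, 0.5, 0.3, 0.9])   # assumed given
groups = group_agents(features)
target = select_target_group(groups, contributions)
grad = rng.normal(size=3)                        # surrogate-policy gradient (assumed given)
adv_obs = {i: fgsm_perturb(features[i], grad) for i in target}
```

Because only one group is perturbed per step, and one perturbation is shared within it, both the number of attacked agents and the number of gradient computations stay small, which is the cost/detectability trade-off the abstract emphasizes.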
Pages: 7439-7450 (12 pages)