Security Analysis of Poisoning Attacks Against Multi-agent Reinforcement Learning

Cited by: 0
Authors
Xie, Zhiqiang [1 ]
Xiang, Yingxiao [1 ]
Li, Yike [1 ]
Zhao, Shuang [1 ]
Tong, Endong [1 ]
Niu, Wenjia [1 ]
Liu, Jiqiang [1 ]
Wang, Jian [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Trans, Beijing 100044, Peoples R China
Source
ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT I | 2022, Vol. 13155
Funding
National Key Research and Development Program of China;
Keywords
Reinforcement learning; Multi-agent system; Soft actor-critic; Poisoning attack; Security analysis;
DOI
10.1007/978-3-030-95384-3_41
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Code
081202 ; 0835 ;
Abstract
As the machine learning paradigm closest to general artificial intelligence, multi-agent reinforcement learning (MARL) has shown great potential. However, security studies on MARL remain scarce, even as security problems have begun to appear, in particular the serious misleading of models caused by poisoning attacks. Existing research on poisoning attacks against reinforcement learning focuses mainly on the single-agent setting; few such studies address multi-agent RL. Hence, we propose an analysis framework for poisoning attacks in MARL systems, taking the multi-agent soft actor-critic algorithm, which currently achieves the best performance, as the attack target. Within the framework, we conduct extensive poisoning attacks on the agents' state and reward signals and analyze them from three aspects: the modes of poisoning, the impact of poisoning timing, and the mitigation ability of the MARL system. Experimental results in our framework indicate that 1) compared to the baseline, random poisoning of the state signal reduces the average reward by as much as 65.73%; 2) the timing of poisoning has completely opposite effects on reward-based and state-based attacks; and 3) the agents can fully recover from the poisoning when the attack interval is 10,000 episodes.
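The attack surface the abstract describes, random poisoning of an agent's state or reward signal, applied at a chosen episode interval, can be sketched as an environment wrapper. This is a minimal illustrative sketch, not the authors' implementation; the class name, parameters, and additive-noise poisoning mode are all assumptions for exposition.

```python
import numpy as np


class PoisonedEnvWrapper:
    """Hypothetical wrapper that randomly poisons the state or reward
    signal seen by an RL agent, sketching the attack modes and the
    attack-interval (timing) dimension described in the abstract."""

    def __init__(self, env, target="state", noise_scale=1.0,
                 attack_interval=1, seed=0):
        self.env = env
        self.target = target                    # "state" or "reward"
        self.noise_scale = noise_scale          # poisoning magnitude
        self.attack_interval = attack_interval  # poison every k-th episode
        self.episode = 0
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.episode += 1
        return self.env.reset()

    def _attack_active(self):
        # Only episodes falling on the attack interval are poisoned;
        # a large interval gives the agents time to recover.
        return self.episode % self.attack_interval == 0

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        if self._attack_active():
            if self.target == "state":
                # Random poisoning of the state signal: additive noise.
                state = state + self.rng.normal(
                    0.0, self.noise_scale, size=np.shape(state))
            elif self.target == "reward":
                # Random poisoning of the reward signal.
                reward = reward + self.rng.normal(0.0, self.noise_scale)
        return state, reward, done, info
```

Training the multi-agent soft actor-critic learner against such a wrapper (one wrapper per poisoned agent) would reproduce the paper's three analysis axes: the poisoning mode is selected by `target`, the timing by which episodes satisfy the interval check, and the mitigation ability by sweeping `attack_interval`.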
Pages: 660-675
Page count: 16