Perception Poisoning Attacks in Federated Learning

Cited by: 10
Authors
Chow, Ka-Ho [1]
Liu, Ling [1]
Affiliations
[1] Georgia Inst Technol, Sch Comp Sci, Atlanta, GA 30332 USA
Funding
National Science Foundation (USA);
Keywords
federated learning; object detection; data poisoning; deep neural networks; defenses;
DOI
10.1109/TPSISA52974.2021.00017
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for object detection over a distributed population of clients. It allows edge clients to keep their data local and share only parameter updates with a federated server. However, the distributed nature of FL also opens the door to new threats. In this paper, we present targeted perception poisoning attacks against federated object detection learning, in which a subset of malicious clients seeks to poison the federated training of a global object detection model by sharing perception-poisoned local model parameters. We first introduce three targeted perception poisoning attacks, which have severe adverse effects only on the objects under attack. We then analyze attack feasibility, the impact of malicious client availability, and attack timing. To safeguard FL systems against such contagious threats, we introduce spatial signature analysis as a defense that separates benign local model parameters from poisoned local model contributions, identifies malicious clients, and eliminates their impact on the federated training. Extensive experiments on object detection benchmark datasets validate that defense-empowered federated object detection learning improves robustness against all three types of perception poisoning attacks. The source code is available at https://github.com/git-disl/Perception-Poisoning.
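To make the threat model concrete, the following is a minimal sketch of the setting the abstract describes: benign clients send parameter updates that the server averages, while a malicious client submits a poisoned update, and the server filters outliers before aggregating. The filtering rule here (median pairwise cosine similarity against a threshold) is an illustrative assumption for demonstration, not the spatial signature analysis proposed in the paper.

```python
# Hedged sketch: federated averaging with a simple outlier filter on client
# updates. The median-cosine-similarity rule is illustrative only; it stands
# in for the paper's spatial signature analysis, which is more sophisticated.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened parameter-update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def filter_and_aggregate(updates, threshold=0.0):
    """Flag any client whose median cosine similarity to the other clients'
    updates falls below `threshold`, then average the remaining updates.
    Returns (aggregated_update, flagged_client_indices)."""
    n = len(updates)
    flagged = []
    for i in range(n):
        sims = [cosine(updates[i], updates[j]) for j in range(n) if j != i]
        if np.median(sims) < threshold:
            flagged.append(i)
    kept = [u for i, u in enumerate(updates) if i not in flagged]
    return np.mean(kept, axis=0), flagged

# Toy round: nine benign clients share a common gradient direction; one
# malicious client pushes the model the opposite way (a crude poison).
rng = np.random.default_rng(0)
base = rng.normal(size=100)                    # shared benign update direction
benign = [base + 0.1 * rng.normal(size=100) for _ in range(9)]
poisoned = -base + 0.1 * rng.normal(size=100)  # update opposing the benign ones
agg, flagged = filter_and_aggregate(benign + [poisoned])
```

In this toy round the malicious client (index 9) is nearly anti-correlated with every benign update, so its median similarity is strongly negative and it is excluded, leaving the aggregate aligned with the benign direction. A real perception poisoning attack is subtler, affecting only targeted object classes, which is precisely why naive geometric filters can fall short.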
Pages: 146 - 155
Page count: 10
Related Papers
50 records in total
  • [1] Poisoning Attacks in Federated Learning: A Survey
    Xia, Geming
    Chen, Jian
    Yu, Chaodong
    Ma, Jun
    IEEE ACCESS, 2023, 11 : 10708 - 10722
  • [2] Mitigating Poisoning Attacks in Federated Learning
    Ganjoo, Romit
    Ganjoo, Mehak
    Patil, Madhura
    INNOVATIVE DATA COMMUNICATION TECHNOLOGIES AND APPLICATION, ICIDCA 2021, 2022, 96 : 687 - 699
  • [3] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [4] Fair Detection of Poisoning Attacks in Federated Learning
    Singh, Ashneet Khandpur
    Blanco-Justicia, Alberto
    Domingo-Ferrer, Josep
    Sanchez, David
    Rebollo-Monedero, David
    2020 IEEE 32ND INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2020, : 224 - 229
  • [5] Adversarial Poisoning Attacks on Federated Learning in Metaverse
    Aristodemou, Marios
    Liu, Xiaolan
    Lambotharan, Sangarapillai
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 6312 - 6317
  • [6] A Federated Weighted Learning Algorithm Against Poisoning Attacks
    Ning, Yafei
    Zhang, Zirui
    Li, Hu
    Xia, Yuhan
    Li, Ming
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 18 (1)
  • [7] Defending Against Poisoning Attacks in Federated Learning with Blockchain
    Dong, N.
    Wang, Z.
    Sun, J.
    Kampffmeyer, M.
    Knottenbelt, W.
    Xing, E.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (7) : 1 - 13
  • [8] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [9] Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
    Alkhunaizi, Naif
    Kamzolov, Dmitry
    Takac, Martin
    Nandakumar, Karthik
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VIII, 2022, 13438 : 673 - 683
  • [10] Defending Against Targeted Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 198 - 207