Perception Poisoning Attacks in Federated Learning

Cited by: 10
Authors
Chow, Ka-Ho [1]
Liu, Ling [1]
Affiliations
[1] Georgia Inst Technol, Sch Comp Sci, Atlanta, GA 30332 USA
Source
2021 THIRD IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2021) | 2021
Funding
US National Science Foundation (NSF)
Keywords
federated learning; object detection; data poisoning; deep neural networks; DEFENSES
DOI
10.1109/TPSISA52974.2021.00017
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for object detection over a distributed population of clients. It allows edge clients to keep their data local and only share parameter updates with a federated server. However, the distributed nature of FL also opens doors to new threats. In this paper, we present targeted perception poisoning attacks against federated object detection learning in which a subset of malicious clients seeks to poison the federated training of a global object detection model by sharing perception-poisoned local model parameters. We first introduce three targeted perception poisoning attacks, which have severe adverse effects only on the objects under attack. We then analyze the attack feasibility, the impact of malicious client availability, and attack timing. To safeguard FL systems against such contagious threats, we introduce spatial signature analysis as a defense to separate benign local model parameters from poisoned local model contributions, identify malicious clients, and eliminate their impact on the federated training. Extensive experiments on object detection benchmark datasets validate that the defense-empowered federated object detection learning can improve the robustness against all three types of perception poisoning attacks. The source code is available at https://github.com/git-disl/Perception-Poisoning.
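The defense described in the abstract separates benign local model parameters from poisoned contributions before they are aggregated into the global model. As a rough illustration only (not the paper's spatial signature analysis), the sketch below treats each client's flattened parameter update as a signature vector and drops outliers by a robust median-distance score before FedAvg-style aggregation; all names and the threshold are hypothetical, and it assumes malicious clients are a minority, as in the paper's threat model.

```python
import numpy as np

def federated_round(global_weights, client_updates, z_thresh=3.5):
    """One aggregation round with a simple signature-based outlier filter.

    Each client submits a parameter update (delta from the global model).
    Flatten each update into a signature vector, score clients by distance
    from the median signature using a robust MAD-based z-score, drop
    outliers, and average the surviving updates (FedAvg over benign clients).
    Assumes malicious clients are a minority of the population.
    """
    sigs = np.stack([u.ravel() for u in client_updates])   # (n_clients, d)
    median_sig = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - median_sig, axis=1)      # per-client distance
    # Robust z-score via the median absolute deviation (MAD)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = 0.6745 * (dists - np.median(dists)) / mad
    benign = scores < z_thresh                             # accepted-client mask
    aggregated = sigs[benign].mean(axis=0).reshape(global_weights.shape)
    return global_weights + aggregated, benign

# Toy example: 8 honest clients with small updates, 2 poisoned clients
# submitting large malicious updates.
rng = np.random.default_rng(0)
w = np.zeros((4, 4))
honest = [rng.normal(0.1, 0.01, (4, 4)) for _ in range(8)]
poisoned = [np.full((4, 4), 5.0) for _ in range(2)]
new_w, mask = federated_round(w, honest + poisoned)
```

The choice of a median-based score (rather than a mean) keeps the reference point stable even when a few clients submit arbitrarily large poisoned updates; the real spatial signature analysis in the paper operates on richer per-layer structure than this flat vector view.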
Pages: 146-155 (10 pages)
Related Papers (items 21-30 of 50)
  • [21] Secure and verifiable federated learning against poisoning attacks in IoMT
    Niu, Shufen
    Zhou, Xusheng
    Wang, Ning
    Kong, Weiying
    Chen, Lihua
    COMPUTERS & ELECTRICAL ENGINEERING, 2025, 122
  • [22] Targeted Clean-Label Poisoning Attacks on Federated Learning
    Patel, Ayushi
    Singh, Priyanka
    RECENT TRENDS IN IMAGE PROCESSING AND PATTERN RECOGNITION, RTIP2R 2022, 2023, 1704 : 231 - 243
  • [23] Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS (DASC/PICOM/CBDCOM/CYBERSCITECH), 2022, : 271 - 278
  • [24] Poisoning Attacks in Federated Learning: An Evaluation on Traffic Sign Classification
    Nuding, Florian
    Mayer, Rudolf
    PROCEEDINGS OF THE TENTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY, CODASPY 2020, 2020, : 168 - 170
  • [25] Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning
    Tian, Yuchen
    Zhang, Weizhe
    Simpson, Andrew
    Liu, Yang
    Jiang, Zoe Lin
    COMPUTER JOURNAL, 2023, 66 (03): : 711 - 726
  • [26] Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
    Fang, Minghong
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, 2020, : 1623 - 1640
  • [27] Toward Efficient and Certified Recovery From Poisoning Attacks in Federated Learning
    Jiang, Yu
    Shen, Jiyuan
    Liu, Ziyao
    Tan, Chee Wei
    Lam, Kwok-Yan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 2632 - 2647
  • [28] A survey on privacy-preserving federated learning against poisoning attacks
    Xia, Feng
    Cheng, Wenhao
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (10): : 13565 - 13582
  • [29] MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 3395 - 3403
  • [30] Collusion-Based Poisoning Attacks Against Blockchained Federated Learning
    Zhang, Xiaohui
    Shen, Tao
    Bai, Fenhua
    Zhang, Chi
    IEEE NETWORK, 2023, 37 (06): : 50 - 57