FEDCLEAN: A DEFENSE MECHANISM AGAINST PARAMETER POISONING ATTACKS IN FEDERATED LEARNING

Cited by: 2
Authors
Kumar, Abhishek [1 ]
Khimani, Vivek [3 ]
Chatzopoulos, Dimitris [4 ]
Hui, Pan [2 ,5 ]
Affiliations
[1] Univ Oulu, Oulu, Finland
[2] Univ Helsinki, Helsinki, Finland
[3] Drexel Univ, Philadelphia, PA 19104 USA
[4] Univ Coll Dublin, Dublin, Ireland
[5] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
Funding
Academy of Finland
Keywords
Federated learning; model poisoning; active learning; reputation;
DOI
10.1109/ICASSP43922.2022.9747497
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
In federated learning (FL) systems, a centralized entity (the server) does not access the training data itself; instead, it receives model parameter updates that each participant computes independently, based solely on its own samples. Unfortunately, FL is susceptible to model poisoning attacks, in which malicious or malfunctioning entities share polluted updates that can compromise the model's accuracy. In this study, we propose FedClean, an FL mechanism that is robust to model poisoning attacks. The accuracy of models trained with the assistance of FedClean is close to that achieved when no malicious entities participate.
Pages: 4333-4337 (5 pages)