DART: A solution for decentralized federated learning model robustness analysis

Cited: 0
Authors
Feng, Chao [1 ]
Celdran, Alberto Huertas [1 ]
von der Assen, Jan [1 ]
Beltran, Enrique Tomas Martinez [2 ]
Bovet, Gerome [3 ]
Stiller, Burkhard [1 ]
Affiliations
[1] Univ Zurich UZH, Dept Informat IfI, Commun Syst Grp CSG, CH-8050 Zurich, Switzerland
[2] Univ Murcia, Dept Informat & Commun Engn, Murcia 30100, Spain
[3] Armasuisse Sci & Technol, Cyber Def Campus, CH-3602 Thun, Switzerland
Keywords
Decentralized federated learning; Poisoning attack; Cybersecurity; Model robustness; TAXONOMY; ATTACKS; PRIVACY;
DOI
10.1016/j.array.2024.100360
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Federated Learning (FL) has emerged as a promising approach to address privacy concerns inherent in Machine Learning (ML) practices. However, conventional FL methods, particularly those following the Centralized FL (CFL) paradigm, rely on a central server for global aggregation, which exhibits limitations such as bottlenecks and a single point of failure. To address these issues, the Decentralized FL (DFL) paradigm has been proposed, which removes the client-server boundary and enables all participants to engage in model training and aggregation tasks. Nevertheless, like CFL, DFL remains vulnerable to adversarial attacks, notably poisoning attacks that undermine model performance. While existing research on model robustness has predominantly focused on CFL, there is a noteworthy gap in understanding the model robustness of the DFL paradigm. In this paper, a thorough review of poisoning attacks targeting model robustness in DFL systems, as well as their corresponding countermeasures, is presented. Additionally, a solution called DART is proposed to evaluate the robustness of DFL models; it is implemented and integrated into a DFL platform. Through extensive experiments, this paper compares the behavior of CFL and DFL under diverse poisoning attacks, pinpointing key factors affecting attack spread and effectiveness within DFL. It also evaluates the performance of different defense mechanisms and investigates whether defense mechanisms designed for CFL are compatible with DFL. The empirical results provide insights into research challenges and suggest ways to improve the robustness of DFL models for future research.
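The abstract describes two ideas that can be illustrated concretely: serverless aggregation among peers, and the spread of a poisoning attack through the topology. The following minimal sketch simulates gossip-style neighbour averaging over a ring of five nodes, with one attacker injecting a poisoned model; the topology, node count, model dimensions, and poisoned values are illustrative assumptions, not taken from the paper or from DART itself:

```python
import numpy as np

def gossip_round(weights, adjacency):
    """One round of decentralized aggregation: each node averages its own
    model with its neighbours' models (no central server involved)."""
    updated = []
    for i, w in enumerate(weights):
        neighbours = [weights[j] for j in adjacency[i]]
        updated.append(np.mean([w] + neighbours, axis=0))
    return updated

# Ring topology of 5 nodes; node 0 plays a (hypothetical) poisoning attacker.
adjacency = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
weights = [np.ones(3) for _ in range(5)]   # benign models, parameters at 1.0
weights[0] = np.full(3, 10.0)              # attacker injects a poisoned model

for _ in range(3):                         # a few aggregation rounds
    weights = gossip_round(weights, adjacency)

# The poisoned update diffuses through the topology: every node's model
# drifts away from the benign value of 1.0, even nodes never directly
# connected to the attacker.
drift = [float(w[0]) - 1.0 for w in weights]
print([round(d, 3) for d in drift])
```

Because plain averaging conserves the sum of the models, a single outlier pulls the whole network toward the poisoned value, which is one reason robust aggregation rules are studied as countermeasures in this setting.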
Pages: 20