A Detailed Survey on Federated Learning Attacks and Defenses

Cited by: 16
Authors
Sikandar, Hira Shahzadi [1]
Waheed, Huda [1]
Tahir, Sibgha [1]
Malik, Saif U. R. [2]
Rafique, Waqas [3]
Affiliations
[1] COMSATS Univ Islamabad, Dept Comp Sci, Islamabad 45550, Pakistan
[2] Cybernet AS, EE-13412 Tallinn, Estonia
[3] UCL, Dept Comp Sci, London WC1E 6BT, England
Keywords
federated learning (FL); machine learning (ML); FL attacks; defensive mechanisms
DOI
10.3390/electronics12020260
CLC Classification Number
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
The traditional centralized approach to training AI models has been put to the test by the emergence of distributed data stores and by public concern over privacy. Federated learning (FL) was introduced to overcome these issues. FL employs a privacy-by-design architecture to train deep neural networks on decentralized data: numerous devices collaboratively build a machine learning model under the coordination of a central server without revealing users' personal information. Although FL, as a machine learning (ML) strategy, can be effective at safeguarding the confidentiality of local data, it is also vulnerable to attacks. Growing interest in the FL domain inspired this paper, which informs readers of the numerous threats to and flaws in the federated learning strategy and introduces multiple defense mechanisms that can be employed to fend off those threats.
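The federated workflow the abstract describes (a central server coordinating many clients that train on local data and share only model updates, never the data itself) can be sketched as a toy FedAvg loop. The linear least-squares model, learning rate, and round counts below are illustrative assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: plain gradient descent on a linear
    least-squares model (a toy stand-in for a deep network)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # d/dw of mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=20):
    """Server loop: broadcast the global model, let each client train
    locally, then aggregate updates by a data-size-weighted average
    (FedAvg). Raw client data never leaves the client."""
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

# Two clients holding disjoint, noiseless shards generated from w_true.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = federated_averaging(np.zeros(2), clients)
print(np.round(w, 2))  # converges toward w_true = [2, -1]
```

The size-weighted average is what makes this FedAvg rather than a plain mean: clients with more data pull the global model harder, which matters once client data distributions differ. It is also the step the surveyed attacks target, since a poisoned or inverted client update flows straight into the aggregate.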
Pages: 18
Related Papers (50 total)
  • [1] Nguyen, Thuy Dung; Nguyen, Tuan; Nguyen, Phi Le; Pham, Hieu H.; Doan, Khoa D.; Wong, Kok-Seng. Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions. Engineering Applications of Artificial Intelligence, 2024, 127.
  • [2] Lyu, Lingjuan; Yu, Han; Ma, Xingjun; Chen, Chen; Sun, Lichao; Zhao, Jun; Yang, Qiang; Yu, Philip S. Privacy and Robustness in Federated Learning: Attacks and Defenses. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(7): 8726-8746.
  • [3] Geng, Jiahui; Mou, Yongli; Li, Qing; Li, Feifei; Beyan, Oya; Decker, Stefan; Rong, Chunming. Improved Gradient Inversion Attacks and Defenses in Federated Learning. IEEE Transactions on Big Data, 2024, 10(6): 839-850.
  • [4] Chen, Qiuxian; Tao, Yizheng. An Investigation of Recent Backdoor Attacks and Defenses in Federated Learning. 2023 Eighth International Conference on Fog and Mobile Edge Computing (FMEC), 2023: 262-269.
  • [5] Huang, Yangsibo; Gupta, Samyak; Song, Zhao; Li, Kai; Arora, Sanjeev. Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [6] Xia, Geming; Chen, Jian; Yu, Chaodong; Ma, Jun. Poisoning Attacks in Federated Learning: A Survey. IEEE Access, 2023, 11: 10708-10722.
  • [7] Sharma, Anee; Marchang, Ningrinla. A review on client-server attacks and defenses in federated learning. Computers & Security, 2024, 140.
  • [8] Carvalho, Inês; Huff, Kenton; Gruenwald, Le; Bernardino, Jorge. Federated Learning: A Comparative Study of Defenses Against Poisoning Attacks. Applied Sciences (Switzerland), 2024, 14(22).
  • [9] Sun, Y.; Yan, Y.; Cui, J.; Xiong, G.; Liu, J. Review of Deep Gradient Inversion Attacks and Defenses in Federated Learning. Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46(2): 428-442.
  • [10] Liu, Pengrui; Xu, Xiangrui; Wang, Wei. Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives. Cybersecurity, 2022, 5(1).