LFGuard: A Defense against Label Flipping Attack in Federated Learning for Vehicular Network

Cited: 1
Authors:
Sameera, K. M. [1]
Vinod, P. [1,2]
Rehiman, K. A. Rafidha [1]
Conti, Mauro [2]
Affiliations:
[1] Cochin Univ Sci & Technol, Dept Comp Applicat, Cochin, India
[2] Univ Padua, Dept Math, Padua, Italy
Keywords:
Federated Learning; Poisoning Attack; Label Flipping; Defense; Support Vector Machine; DEEP; INTERNET; BLOCKCHAIN; SECURITY; PRIVACY
DOI:
10.1016/j.comnet.2024.110768
Chinese Library Classification (CLC):
TP3 [Computing technology, computer technology]
Discipline code:
0812
Abstract:
The explosive growth of the interconnected vehicle network creates vast amounts of data within individual vehicles, offering exciting opportunities to develop advanced applications. FL (Federated Learning) is a game-changer for vehicular networks, enabling powerful distributed data processing across vehicles to build intelligent applications while promoting collaborative training and safeguarding data privacy. However, recent research has exposed a critical vulnerability in FL: poisoning attacks, where malicious actors can manipulate data, labels, or models to subvert the system. Despite its advantages, deploying FL in dynamic vehicular environments with a multitude of distributed vehicles presents unique challenges. One such challenge is the potential for a significant number of malicious actors to tamper with data. We propose a hierarchical FL framework for vehicular networks to address these challenges, promising lower latency and broader coverage. We also present a defense mechanism, LFGuard, which employs a detection system to pinpoint malicious vehicles and then excludes their local models from the aggregation stage, significantly reducing their influence on the final outcome. We evaluate LFGuard against state-of-the-art techniques on three popular benchmark datasets in a heterogeneous environment. The results show that LFGuard outperforms prior studies in thwarting targeted label-flipping attacks, with more than a 5% improvement in global model accuracy, 12% in source class recall, and a 6% reduction in the attack success rate, while maintaining high model utility.
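The exclusion step the abstract describes, dropping the local models of detected malicious vehicles before aggregation, can be sketched as a FedAvg-style weighted average taken over the benign clients only. This is a minimal illustration, not the paper's implementation; the function and variable names (`aggregate_excluding`, `flagged`) are hypothetical:

```python
def aggregate_excluding(updates, sizes, flagged):
    """Weighted average of client model updates, skipping flagged clients.

    updates: dict client_id -> list[float] (flattened model weights)
    sizes:   dict client_id -> int (local dataset size, FedAvg-style weighting)
    flagged: set of client_ids the detector marked as malicious (hypothetical)
    """
    # Keep only clients the detection system did not flag.
    benign = [cid for cid in updates if cid not in flagged]
    if not benign:
        raise ValueError("no benign clients left to aggregate")

    # Weight each benign client by its share of the benign data.
    total = sum(sizes[cid] for cid in benign)
    dim = len(next(iter(updates.values())))
    agg = [0.0] * dim
    for cid in benign:
        w = sizes[cid] / total
        for i, v in enumerate(updates[cid]):
            agg[i] += w * v
    return agg
```

Because the flagged updates never enter the weighted sum, a label-flipping client's poisoned model has no direct influence on the global model for that round, which is the effect the abstract attributes to LFGuard.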
Pages: 18