MATFL: Defending Against Synergetic Attacks in Federated Learning

Citations: 0
|
Authors
Yang, Wen [1 ,2 ]
Peng, Luyao [1 ,2 ]
Tang, Xiangyun [1 ,2 ]
Weng, Yu [1 ,2 ]
Affiliations
[1] Minzu Univ China, Sch Informat Engn, Beijing, Peoples R China
[2] Minzu Univ China, Key Lab Ethn Language Intelligent Anal & Secur Go, Beijing, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
federated learning; synergetic attacks; defence; adversarial samples; backdoor;
DOI
10.1109/iThings-GreenCom-CPSCom-SmartData-Cybermatics60724.2023.00072
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Federated Learning (FL) is a promising distributed learning architecture. However, it faces significant threats from malicious attacks, including adversarial samples and backdoor attacks. Although prior work has proposed defences against each of these two attack types, attacks that combine them, known as synergetic attacks, already exist. A synergetic attack typically uses adversarial samples to craft triggers and then implants a trojan into the global model via a backdoor attack; such attacks are not covered by previous single-purpose defence strategies and have received little attention. To the best of our knowledge, we are the first to focus on this type of synergetic attack in FL. To address this issue, we propose MATFL, which introduces majority aggregation into the adversarial learning framework. We conduct extensive experiments to analyze the effectiveness and aggregation efficiency of MATFL, considering five defence methods across four attack scenarios. The results demonstrate that MATFL can effectively defend against synergetic attacks while striking a balance between defence effectiveness, global model accuracy, and aggregation efficiency.
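The abstract names the two ingredients of MATFL, majority aggregation and adversarial learning, without giving the algorithm itself. Below is a minimal, hypothetical sketch of how such ingredients can be combined in a single FL round; the coordinate-wise median used as the "majority" rule, the FGSM-style local perturbation, and all function names are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: robust "majority" aggregation plus local
# adversarial training in a toy federated linear-regression setting.
# The median-as-majority rule and the FGSM-style step are assumptions,
# not the MATFL algorithm from the paper.
import numpy as np

def adversarial_augment(x, y, grad_fn, eps=0.1):
    """FGSM-style perturbation of local inputs (assumed adversarial step)."""
    return x + eps * np.sign(grad_fn(x, y)), y

def local_update(global_w, data, lr=0.05, steps=10):
    """Toy local training on (x, y), mixing in adversarially perturbed copies."""
    w = global_w.copy()
    x, y = data
    for _ in range(steps):
        # Gradient of the squared loss w.r.t. the inputs, used to craft x_adv.
        grad_fn = lambda xb, yb: (xb @ w - yb)[:, None] * w[None, :]
        x_adv, y_adv = adversarial_augment(x, y, grad_fn)
        xb = np.vstack([x, x_adv])
        yb = np.concatenate([y, y_adv])
        grad = xb.T @ (xb @ w - yb) / len(yb)  # gradient w.r.t. the weights
        w -= lr * grad
    return w

def majority_aggregate(client_weights):
    """Coordinate-wise median as a simple 'majority' rule: a minority of
    poisoned updates cannot drag any coordinate arbitrarily far."""
    return np.median(np.stack(client_weights), axis=0)

def federated_round(global_w, client_data):
    updates = [local_update(global_w, d) for d in client_data]
    return majority_aggregate(updates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(5):
        x = rng.normal(size=(32, 2))
        clients.append((x, x @ true_w + 0.01 * rng.normal(size=32)))
    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, clients)
    print("aggregated weights:", w)
```

The intuition behind the median step is that a minority of poisoned client updates cannot move any coordinate of the aggregate arbitrarily far, which is the usual motivation for majority-style robust aggregation; the actual MATFL aggregation rule should be taken from the paper itself.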
Pages: 313-319
Page count: 7