Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection

Cited: 13
Authors
Lai, Yuan-Cheng [1 ]
Lin, Jheng-Yan [1 ]
Lin, Ying-Dar [2 ]
Hwang, Ren-Hung [3 ]
Lin, Po-Chin [4 ]
Wu, Hsiao-Kuang [5 ]
Chen, Chung-Kuan [6 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Dept Informat Management, Taipei, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu, Taiwan
[3] Natl Yang Ming Chiao Tung Univ, Coll Artificial Intelligence, Tainan, Taiwan
[4] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi, Taiwan
[5] Natl Cent Univ, Dept Comp Sci & Informat Engn, Taoyuan, Taiwan
[6] Cycraft Technol, Taipei, Taiwan
Keywords
Federated Learning; Intrusion Detection; Poisoning Attack; Backdoor Attack; Local Outlier Factor;
DOI
10.1016/j.cose.2023.103205
Chinese Library Classification (CLC)
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
The Machine Learning-based Intrusion Detection System (ML-IDS) has become popular because it does not require manual rule updates and recognizes attack variants better. However, due to the data privacy issue in ML-IDS, the Federated Learning-based IDS (FL-IDS) was proposed. In each round of federated learning, each participant first trains its local model and sends the model's weights to the global server, which then aggregates the received weights and distributes the aggregated global model back to the participants. An attacker can use poisoning attacks, including label-flipping attacks and backdoor attacks, to directly generate a malicious local model and thereby indirectly pollute the global model. Currently, only a few studies defend against poisoning attacks, and they address only label-flipping attacks in the image domain. Therefore, we propose a two-phase defense mechanism, called Defending Poisoning Attacks in Federated Learning (DPA-FL), applied to intrusion detection. The first phase uses relative differences to quickly compare weights between participants, because the local models of attackers and benign participants differ substantially. The second phase tests the aggregated model with a dataset and tries to identify the attackers when its accuracy is low. Experiment results show that DPA-FL can reach 96.5% accuracy in defending against poisoning attacks. Compared with other defense mechanisms, DPA-FL can improve the F1-score by 20~64% under backdoor attacks. Also, DPA-FL can exclude the attackers within twelve rounds when the attackers are few.
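The first phase described in the abstract can be sketched as follows. This is a minimal illustration under assumed interfaces (each participant's local model flattened into a plain weight vector, and a hypothetical `flag_suspects` helper with an illustrative threshold), not the authors' implementation, which uses the Local Outlier Factor; the idea shown is only the underlying intuition that attackers' weights sit far from the benign cluster:

```python
# Hypothetical sketch of DPA-FL's first-phase screening: flag participants
# whose local-model weights differ most from everyone else's. The threshold
# rule (a multiple of the median pairwise distance) is an assumption for
# illustration, not taken from the paper.

def relative_difference(w_a, w_b):
    # Mean absolute difference between two flattened weight vectors.
    return sum(abs(a - b) for a, b in zip(w_a, w_b)) / len(w_a)

def flag_suspects(local_weights, threshold=2.0):
    """Return indices of participants whose average distance to the other
    participants exceeds `threshold` times the median of those averages."""
    n = len(local_weights)
    avg_dist = []
    for i in range(n):
        dists = [relative_difference(local_weights[i], local_weights[j])
                 for j in range(n) if j != i]
        avg_dist.append(sum(dists) / len(dists))
    median = sorted(avg_dist)[n // 2]
    return [i for i in range(n) if avg_dist[i] > threshold * median]

# Four benign participants with similar weights, one poisoned model:
weights = [
    [0.10, 0.20, 0.10, 0.20],
    [0.12, 0.18, 0.11, 0.19],
    [0.09, 0.21, 0.10, 0.20],
    [0.11, 0.20, 0.09, 0.21],
    [5.00, 5.00, 5.00, 5.00],  # attacker's malicious local model
]
print(flag_suspects(weights))  # the outlying participant is flagged
```

In the actual DPA-FL design, participants flagged here would be excluded from aggregation, and the second phase would then validate the aggregated model's accuracy on a test dataset before accepting the round.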
Pages: 14
Related Papers
(50 records)
  • [1] Parameterizing poisoning attacks in federated learning-based intrusion detection
    Merzouk, Mohamed Amine
    Cuppens, Frederic
    Boulahia-Cuppens, Nora
    Yaich, Reda
    18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023,
  • [2] Personalized federated learning-based intrusion detection system: Poisoning attack and defense
    Thein, Thin Tharaphe
    Shiraishi, Yoshiaki
    Morii, Masakatu
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 153 : 182 - 192
  • [3] SecFedNIDS: Robust defense for poisoning attack against federated learning-based network intrusion detection system
    Zhang, Zhao
    Zhang, Yong
    Guo, Da
    Yao, Lei
    Li, Zhao
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 134 : 154 - 169
  • [4] Dependable federated learning for IoT intrusion detection against poisoning attacks
    Yang, Run
    He, Hui
    Wang, Yulong
    Qu, Yue
    Zhang, Weizhe
    COMPUTERS & SECURITY, 2023, 132
  • [5] Federated Learning-Based Intrusion Detection in the Context of IIoT Networks: Poisoning Attack and Defense
    Nguyen Chi Vy
    Nguyen Huu Quyen
    Phan The Duy
    Van-Hau Pham
    NETWORK AND SYSTEM SECURITY, NSS 2021, 2021, 13041 : 131 - 147
  • [6] FedDef: Defense Against Gradient Leakage in Federated Learning-Based Network Intrusion Detection Systems
    Chen, Jiahui
    Zhao, Yi
    Li, Qi
    Feng, Xuewei
    Xu, Ke
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 4561 - 4576
  • [7] Dynamic defense against byzantine poisoning attacks in federated learning
    Rodriguez-Barroso, Nuria
    Martinez-Camara, Eugenio
    Victoria Luzon, M.
    Herrera, Francisco
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 133 : 1 - 9
  • [8] Adversarial Attacks Against Deep Learning-Based Network Intrusion Detection Systems and Defense Mechanisms
    Zhang, Chaoyun
    Costa-Perez, Xavier
    Patras, Paul
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2022, 30 (03) : 1294 - 1311
  • [9] FEDCLEAN: A DEFENSE MECHANISM AGAINST PARAMETER POISONING ATTACKS IN FEDERATED LEARNING
    Kumar, Abhishek
    Khimani, Vivek
    Chatzopoulos, Dimitris
    Hui, Pan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4333 - 4337
  • [10] ENSEMBLE ADVERSARIAL TRAINING BASED DEFENSE AGAINST ADVERSARIAL ATTACKS FOR MACHINE LEARNING-BASED INTRUSION DETECTION SYSTEM
    Haroon, M. S.
    Ali, H. M.
    NEURAL NETWORK WORLD, 2023, 33 (05) : 317 - 336