Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks

Cited: 0
Authors
Alahmed, Shahad [1 ]
Alasad, Qutaiba [2 ]
Yuan, Jiann-Shiun [3 ]
Alawad, Mohammed [4 ]
Affiliations
[1] Tikrit Univ, Dept Comp Sci, POB 42, Al Qadisiyah, Iraq
[2] Tikrit Univ, Dept Petr Proc Engn, POB 42, Al Qadisiyah, Iraq
[3] Univ Cent Florida, Dept Elect & Comp Engn, Orlando, FL 32816 USA
[4] Wayne State Univ, Dept Elect & Comp Engn, Detroit, MI 48202 USA
Keywords
deep learning; network intrusion detection system (NIDS); DeepFool; poisoning attacks; Pearson correlation method; CICIDS2019; network
DOI
10.3390/a17040155
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly given the extensive integration of Machine Learning (ML) systems into daily routines. These systems are increasingly targeted by malicious attacks that seek to distort their functionality through poisoning. Such attacks warp the intended operation of these services, deviating them from their true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones and compromising the integrity of smart-technology-based systems such as Network Intrusion Detection Systems (NIDSs). It is therefore necessary to continue studying the resilience of deep learning network systems under poisoning attacks, specifically those interfering with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)-based NIDSs against untethered white-box attacks. More specifically, it introduces a poisoning attack technique designed specifically for deep learning, which injects varying amounts of altered instances into the training dataset at diverse rates and then investigates the attack's influence on model performance. We observe that increasing injection rates (from 1% to 50%) with a randomly amplified distribution only slightly affects the overall accuracy of the system (0.93 at the end of the experiments). However, the other measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that the data-manipulation poisoning attacks do impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDSs under poisoning attacks and emphasize the importance of securing such systems against these sophisticated threats, for which defense techniques should be considered. Our analysis, supported by experimental results, shows that the generated poisoned data significantly impact model performance and are hard to detect.
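To make the attack setup in the abstract concrete, the following is a minimal Python sketch of training-set poisoning at varying injection rates. It is not the authors' actual code: the synthetic feature matrix, the poison_training_set helper, the label-flipping step, and the noise scale are illustrative assumptions standing in for the paper's CICIDS2019 pipeline.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a NIDS training set (e.g., CICIDS2019-style flows):
# 1000 samples x 20 features, binary labels (0 = benign, 1 = attack).
X_train = rng.normal(size=(1000, 20))
y_train = rng.integers(0, 2, size=1000)

def poison_training_set(X, y, rate, noise_scale=0.5):
    """Append perturbed, label-flipped copies of rate*len(X) instances."""
    n_poison = int(len(X) * rate)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # Perturb the copied instances with random noise (a stand-in for the
    # "randomly amplified distribution" described in the abstract).
    X_p = X[idx] + rng.normal(0.0, noise_scale, size=(n_poison, X.shape[1]))
    y_p = 1 - y[idx]  # flip the binary labels of the injected copies
    perm = rng.permutation(len(X) + n_poison)
    return np.vstack([X, X_p])[perm], np.concatenate([y, y_p])[perm]

# Injection rates spanning the 1%-50% range reported in the abstract.
for rate in (0.01, 0.10, 0.25, 0.50):
    X_poisoned, y_poisoned = poison_training_set(X_train, y_train, rate)
    print(f"rate={rate:.2f} -> poisoned training set shape {X_poisoned.shape}")
    # A DL-based NIDS would be retrained on (X_poisoned, y_poisoned) and then
    # evaluated on clean test data (accuracy, PPV, FPR, MSE).

Under this setup, the abstract's finding corresponds to accuracy on clean test data degrading only slightly as rate grows, while PPV, FPR, and MSE reveal the damage.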
Pages: 19