Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks

Cited: 0
Authors
Alahmed, Shahad [1 ]
Alasad, Qutaiba [2 ]
Yuan, Jiann-Shiun [3 ]
Alawad, Mohammed [4 ]
Affiliations
[1] Tikrit Univ, Dept Comp Sci, POB 42, Al Qadisiyah, Iraq
[2] Tikrit Univ, Dept Petr Proc Engn, POB 42, Al Qadisiyah, Iraq
[3] Univ Cent Florida, Dept Elect & Comp Engn, Orlando, FL 32816 USA
[4] Wayne State Univ, Dept Elect & Comp Engn, Detroit, MI 48202 USA
Keywords
deep learning; network intrusion detection system (NIDS); DeepFool; poisoning attacks; Pearson correlation method; CICIDS2019; NETWORK
DOI
10.3390/a17040155
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly as Machine Learning (ML) systems become extensively integrated into our daily routines. These systems are increasingly targeted by malicious attacks that distort their functionality through poisoning. Such attacks warp the intended operation of these services, deviating them from their true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones and compromising the integrity of smart technology-based systems such as Network Intrusion Detection Systems (NIDSs). It is therefore necessary to keep studying the resilience of deep learning network systems under poisoning attacks, specifically those that interfere with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)-based NIDSs against untethered white-box attacks. More specifically, it introduces a poisoning attack technique designed especially for deep learning: altered instances are injected into training datasets at diverse rates, and the attack's influence on model performance is then investigated. We observe that increasing the injection rate (from 1% to 50%) with a randomly amplified distribution only slightly affected overall accuracy, which stood at 0.93 at the end of the experiments. However, the other measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that data manipulation poisoning attacks do impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDSs under poisoning attacks and emphasize the importance of securing such systems against these sophisticated threats, for which defense techniques should be considered. Our analysis, supported by experimental results, shows that the generated poisoned data significantly impact the model's performance and are hard to detect.
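The abstract describes the attack only at a high level. Below is a minimal illustrative sketch, assuming a NumPy-based workflow with binary flow labels (0 = benign, 1 = attack) in the style of CICIDS2019 data; the function name make_poisoned_set, the synthetic data, and the noise model (a randomly amplified multiplicative perturbation with flipped labels) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 1000 flows with 20 features and binary labels
# (0 = benign, 1 = attack). The paper itself works with CICIDS2019 data.
X_train = rng.normal(size=(1000, 20))
y_train = rng.integers(0, 2, size=1000)

def make_poisoned_set(X, y, injection_rate):
    """Append altered copies of training rows at `injection_rate`.

    Each poisoned row is a clean row perturbed with amplified random
    noise and given a flipped label -- one plausible reading of the
    abstract's "altered instances" with a "random amplified distribution".
    """
    n_poison = int(injection_rate * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # Multiplicative noise amplifies each selected feature vector.
    X_poison = X[idx] * (1.0 + rng.normal(0.0, 0.5, size=(n_poison, X.shape[1])))
    y_poison = 1 - y[idx]  # flip the binary label of every poisoned row
    X_mix = np.vstack([X, X_poison])
    y_mix = np.concatenate([y, y_poison])
    perm = rng.permutation(len(y_mix))  # shuffle so poison has no positional signature
    return X_mix[perm], y_mix[perm]

# Sweep injection rates from 1% up to the 50% reported in the abstract;
# at each rate the NIDS model would be retrained and re-evaluated.
for rate in (0.01, 0.10, 0.25, 0.50):
    X_p, y_p = make_poisoned_set(X_train, y_train, rate)
    print(f"rate={rate:.0%}: training set grew from {len(X_train)} to {len(X_p)} rows")
```

At each injection rate the detector would be retrained on the poisoned mixture and scored on clean test data, where PPV = TP/(TP+FP) and FPR = FP/(FP+TN), matching the measures reported above.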
Pages: 19
Related Papers
50 records in total
  • [1] Evaluating Deep Learning-based NIDS in Adversarial Settings
    Mohammadian, Hesamodin
    Lashkari, Arash Habibi
    Ghorbani, Ali A.
PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON INFORMATION SYSTEMS SECURITY AND PRIVACY (ICISSP), 2022: 435-444
  • [2] Robustness of Deep Learning-Based Specific Emitter Identification under Adversarial Attacks
    Sun, Liting
    Ke, Da
    Wang, Xiang
    Huang, Zhitao
    Huang, Kaizhu
REMOTE SENSING, 2022, 14(19)
  • [3] Exploring Data and Model Poisoning Attacks to Deep Learning-Based NLP Systems
    Marulli, Fiammetta
    Verde, Laura
    Campanile, Lelio
KNOWLEDGE-BASED AND INTELLIGENT INFORMATION & ENGINEERING SYSTEMS (KES 2021), 2021, 192: 3570-3579
  • [4] Evaluating Label Flipping Attack in Deep Learning-Based NIDS
    Mohammadian, Hesamodin
    Lashkari, Arash Habibi
    Ghorbani, Ali A.
PROCEEDINGS OF THE 20TH INTERNATIONAL CONFERENCE ON SECURITY AND CRYPTOGRAPHY, SECRYPT 2023, 2023: 597-603
  • [5] Vulnerabilities Assessment of Deep Learning-Based Fake News Checker Under Poisoning Attacks
    Campanile, Lelio
    Cantiello, Pasquale
    Iacono, Mauro
    Marulli, Fiammetta
    Mastroianni, Michele
COMPUTATIONAL DATA AND SOCIAL NETWORKS, CSONET 2021, 2021, 13116: 385-386
  • [6] On the Robustness of Deep Learning-Based Speech Enhancement
    Chhetri, Amit S.
    Hilmes, Philip
    Athi, Mrudula
    Shankar, Nikhil
2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022: 1587-1594
  • [7] Certified Robustness of Static Deep Learning-based Malware Detectors against Patch and Append Attacks
    Gibert, Daniel
    Zizzo, Giulio
    Le, Quan
PROCEEDINGS OF THE 16TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, AISEC 2023, 2023: 173-184
  • [8] A novel method for improving the robustness of deep learning-based malware detectors against adversarial attacks
    Shaukat, Kamran
    Luo, Suhuai
    Varadharajan, Vijay
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 116
  • [9] Parameterizing poisoning attacks in federated learning-based intrusion detection
    Merzouk, Mohamed Amine
    Cuppens, Frederic
    Boulahia-Cuppens, Nora
    Yaich, Reda
18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023
  • [10] Deep Learning-based Attacks on Masked AES Implementation
Bae, Daehyeon
    Hwang, Jongbae
    Ha, Jaecheol
JOURNAL OF INTERNET TECHNOLOGY, 2022, 23(04): 897-902