Defending Network IDS against Adversarial Examples with Continual Learning

Cited by: 0
Authors
Kozal, Jedrzej [1 ]
Zwolinska, Justyna [1 ]
Klonowski, Marek [2 ]
Wozniak, Michal [1 ]
Affiliations
[1] Wroclaw Univ Sci & Technol, Dept Syst & Comp Networks, Wroclaw, Poland
[2] Wroclaw Univ Sci & Technol, Dept Artificial Intelligence, Wroclaw, Poland
Keywords
adversarial examples; computer security; continual learning; machine learning
DOI
10.1109/ICDMW60847.2023.00017
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Improving computer system security is one of the most critical issues in modern computer science. Machine learning algorithms increasingly support the construction of such solutions, mainly to detect network attacks. These algorithms detect dangerous activity without requiring the manual formulation of expert rules to decide which activity constitutes an attack. Because new attacks keep emerging, an IDS (Intrusion Detection System) needs to be updated periodically. However, an update carried out without due care may also be exploited by an attacker who deliberately manipulates the analyzed data to mislead the predictive model. The proposed approach uses adversarial examples to generate new network traffic patterns that are misclassified by the neural network inside the IDS. This approach can model the evolution of cyber threats to some extent and should allow for continuous improvement of the IDS. In this paper, we propose an original framework for simulating attacker-defender dynamics based on adversarial examples and show that it is possible to continuously improve IDS systems by applying continual learning strategies. The proposed approach has been tested in experimental studies using well-known continual learning algorithms, and the results confirm the usability of the proposed method. The results presented in this paper identify potential gaps in ML-based NIDS systems. At the same time, we show how these threats can be limited, which should contribute to mitigating some of the possible threats and to improving the overall reliability of the intrusion detection process.
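The abstract describes a loop in which an attacker crafts adversarial traffic patterns that evade the IDS classifier and the defender retrains on them with a continual learning strategy. The Python sketch below is a rough, hypothetical illustration of that loop, not the authors' implementation: it pairs one-step FGSM perturbations with a simple experience-replay update on synthetic feature vectors, and all names (IDSNet, fgsm, the replay buffer) are assumptions made for this example.

# Illustrative sketch (NOT the paper's code): attacker-defender rounds in
# which FGSM adversarial examples are generated against a small IDS
# classifier and the defender retrains with experience replay, one common
# continual learning strategy. Data is synthetic; names are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

class IDSNet(nn.Module):
    """Toy IDS classifier over flow-level feature vectors (hypothetical)."""
    def __init__(self, n_features=20, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM: move inputs along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

model = IDSNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
replay = []  # experience-replay buffer of past adversarial batches

for round_idx in range(5):                # attacker-defender rounds
    x = torch.randn(256, 20)              # synthetic traffic features
    y = torch.randint(0, 2, (256,))       # benign = 0, attack = 1
    x_adv = fgsm(model, x, y)             # attacker crafts evasive variants
    # Defender: retrain on the new adversarial batch plus replayed batches,
    # so earlier attack patterns are not forgotten.
    for _ in range(20):
        for bx, by in [(x_adv, y)] + replay:
            opt.zero_grad()
            loss_fn(model(bx), by).backward()
            opt.step()
    replay.append((x_adv, y))             # keep the batch for future rounds

with torch.no_grad():
    acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"accuracy on the latest adversarial batch: {acc:.2f}")

Replay stands in here for the continual learning strategies evaluated in the paper; any rehearsal- or regularization-based method could be slotted into the defender step.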
Pages: 60 - 69
Number of pages: 10
Related papers
50 records in total
  • [1] Feature decoupling and interaction network for defending against adversarial examples
    Wang, Weidong
    Li, Zhi
    Liu, Shuaiwei
    Zhang, Li
    Yang, Jin
    Wang, Yi
    [J]. IMAGE AND VISION COMPUTING, 2024, 144
  • [2] Dynamic and Diverse Transformations for Defending Against Adversarial Examples
    Chen, Yongkang
    Zhang, Ming
    Li, Jin
    Kuang, Xiaohui
    Zhang, Xuhong
    Zhang, Han
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 976 - 983
  • [3] DCAL: A New Method for Defending Against Adversarial Examples
    Lin, Xiaoyu
    Cao, Chunjie
    Wang, Longjuan
    Liu, Zhiyuan
    Li, Mengqian
    Ma, Haiying
    [J]. ARTIFICIAL INTELLIGENCE AND SECURITY, ICAIS 2022, PT II, 2022, 13339 : 38 - 50
  • [4] Defending Against Model Inversion Attack by Adversarial Examples
    Wen, Jing
    Yiu, Siu-Ming
    Hui, Lucas C. K.
    [J]. PROCEEDINGS OF THE 2021 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE (IEEE CSR), 2021, : 551 - 556
  • [5] Defending against Deep-Learning-Based Flow Correlation Attacks with Adversarial Examples
    Zhang, Ziwei
    Ye, Dengpan
    [J]. SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [6] Defending Against Adversarial Iris Examples Using Wavelet Decomposition
    Soleymani, Sobhan
    Dabouei, Ali
    Dawson, Jeremy
    Nasrabadi, Nasser M.
    [J]. 2019 IEEE 10TH INTERNATIONAL CONFERENCE ON BIOMETRICS THEORY, APPLICATIONS AND SYSTEMS (BTAS), 2019
  • [7] Defending against adversarial examples using perceptual image hashing
    Wu, Ke
    Wang, Zichi
    Zhang, Xinpeng
    Tang, Zhenjun
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (02)
  • [8] DeT: Defending Against Adversarial Examples via Decreasing Transferability
    Li, Changjiang
    Weng, Haiqin
    Ji, Shouling
    Dong, Jianfeng
    He, Qinming
    [J]. CYBERSPACE SAFETY AND SECURITY, PT I, 2020, 11982 : 307 - 322
  • [9] Neuron Selecting: Defending Against Adversarial Examples in Deep Neural Networks
    Zhang, Ming
    Li, Hu
    Kuang, Xiaohui
    Pang, Ling
    Wu, Zhendong
    [J]. INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2019), 2020, 11999 : 613 - 629
  • [10] HF-Defend: Defending Against Adversarial Examples Based on Halftoning
    Liu, Gaozhi
    Li, Sheng
    Qian, Zhenxing
    Zhang, Xinpeng
    [J]. 2022 IEEE 24TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2022