Securing Network Traffic Classification Models against Adversarial Examples Using Derived Variables

Cited by: 1
Authors
Adeke, James Msughter [1 ,2 ]
Liu, Guangjie [1 ,2 ]
Zhao, Junjie [1 ,2 ]
Wu, Nannan [3 ]
Bashir, Hafsat Muhammad [1 ]
Davoli, Franco
Affiliations
[1] Nanjing Univ Informat Sci Technol, Sch Elect & Informat Engn, Nanjing 210044, Peoples R China
[2] Minist Educ, Key Lab Intelligent Support Technol Complex Enviro, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
Keywords
machine learning; adversarial attack; network traffic classification; derived variables; robustness; intrusion detection
DOI
10.3390/fi15120405
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which adversaries modify malicious inputs to produce a desired output. Adversarial training is an effective defense against such attacks, but it relies on access to a substantial number of AEs, a prerequisite that entails significant computational resources and the inherent limitation of poor performance on clean data. To address these problems, this study proposes a novel approach to improve the robustness of ML-based network traffic classification models by integrating derived variables (DVars) into training. Unlike adversarial training, our approach enhances training using DVars, which introduce randomness into the input data. DVars are generated from the baseline dataset and significantly improve the resilience of the model to AEs. To evaluate the effectiveness of DVars, experiments were conducted using the CSE-CIC-IDS2018 dataset and three state-of-the-art ML-based models: decision tree (DT), random forest (RF), and k-nearest neighbors (KNN). The results show that DVars can improve the accuracy of KNN under attack from 0.45 to 0.84 for low-intensity attacks and from 0.32 to 0.66 for high-intensity attacks. Furthermore, both DT and RF achieve a significant increase in accuracy when subjected to attacks of different intensities. Moreover, DVars are computationally efficient, scalable, and do not require access to AEs.
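The abstract describes DVars only at a high level: they are generated from the baseline dataset and inject randomness into the inputs before training a standard classifier. A minimal sketch of that idea follows, assuming (hypothetically, since the paper's exact construction is not given here) that DVars are randomized transformations of the baseline features appended as extra columns; `derive_variables` and the noise scale are illustrative placeholders, not the authors' method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def derive_variables(X, rng, scale=0.1):
    """Hypothetical DVar generation: append randomized copies of the
    baseline features, scaled by each feature's standard deviation.
    The randomness plays the role the abstract attributes to DVars."""
    noise = rng.normal(0.0, scale, size=X.shape)
    dvars = X + noise * X.std(axis=0)
    return np.hstack([X, dvars])  # baseline features + derived variables

# Stand-in for a traffic dataset such as CSE-CIC-IDS2018
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train one of the evaluated model families (RF) on the augmented inputs
clf = RandomForestClassifier(random_state=0)
clf.fit(derive_variables(X_tr, rng), y_tr)

acc = accuracy_score(y_te, clf.predict(derive_variables(X_te, rng)))
print(f"clean accuracy with DVars: {acc:.2f}")
```

Note that, unlike adversarial training, this pipeline never needs an attack algorithm or pre-generated AEs: the augmentation is computed from the clean data alone, which is the computational advantage the abstract claims.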
Pages: 21