Securing Network Traffic Classification Models against Adversarial Examples Using Derived Variables

Cited: 1
Authors
Adeke, James Msughter [1 ,2 ]
Liu, Guangjie [1 ,2 ]
Zhao, Junjie [1 ,2 ]
Wu, Nannan [3 ]
Bashir, Hafsat Muhammad [1 ]
Davoli, Franco
Affiliations
[1] Nanjing Univ Informat Sci Technol, Sch Elect & Informat Engn, Nanjing 210044, Peoples R China
[2] Minist Educ, Key Lab Intelligent Support Technol Complex Enviro, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
Keywords
machine learning; adversarial attack; network traffic classification; derived variables; robustness; INTRUSION DETECTION;
DOI
10.3390/fi15120405
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which malicious inputs are modified by adversaries to produce the desired output. Adversarial training is an effective defense against such attacks but relies on access to a substantial number of AEs, a prerequisite that entails significant computational resources and carries the inherent limitation of poor performance on clean data. To address these problems, this study proposes a novel approach to improve the robustness of ML-based network traffic classification models by integrating derived variables (DVars) into training. Unlike adversarial training, our approach focuses on enhancing training using DVars, which introduce randomness into the input data. DVars are generated from the baseline dataset and significantly improve the resilience of the model to AEs. To evaluate the effectiveness of DVars, experiments were conducted using the CSE-CIC-IDS2018 dataset and three state-of-the-art ML-based models: decision tree (DT), random forest (RF), and k-nearest neighbors (KNN). The results show that DVars can improve the accuracy of KNN under attack from 0.45 to 0.84 for low-intensity attacks and from 0.32 to 0.66 for high-intensity attacks. Furthermore, both DT and RF achieve a significant increase in accuracy when subjected to attacks of different intensities. Moreover, DVars are computationally efficient, scalable, and do not require access to AEs.
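The paper does not publish its implementation; the sketch below illustrates one plausible reading of the abstract, in which "derived variables" are randomized features computed from the baseline dataset and appended to the inputs before training a standard classifier. All names here (`add_dvars`, the noise scale, the toy feature matrix) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of DVar-augmented training, assuming DVars are
# noise-perturbed copies of the baseline features (an assumption, not
# the paper's published construction).
import numpy as np

rng = np.random.default_rng(0)

def add_dvars(X, scale=0.1, rng=rng):
    """Append derived variables: randomly perturbed copies of each feature.

    The per-feature noise is scaled by that feature's standard deviation,
    so the injected randomness is proportional to the data's spread.
    """
    noise = rng.normal(0.0, scale * X.std(axis=0, keepdims=True) + 1e-12,
                       size=X.shape)
    return np.hstack([X, X + noise])

# Toy stand-in for flow features (rows = flows, columns = features);
# a real experiment would use CSE-CIC-IDS2018 flow statistics instead.
X = rng.normal(size=(100, 5))
X_aug = add_dvars(X)
print(X_aug.shape)  # (100, 10): original features plus one DVar per feature
```

A DT, RF, or KNN model would then be fit on `X_aug` rather than `X`; the idea is that the randomized DVars make it harder for an adversary to craft perturbations that transfer to the trained model.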
Pages: 21
Related Papers
50 records in total
  • [31] Adversarial Network Traffic: Towards Evaluating the Robustness of Deep-Learning-Based Network Traffic Classification
    Sadeghzadeh, Amir Mahdi
    Shiravi, Saeed
    Jalili, Rasool
    [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2021, 18 (02): 1962-1976
  • [32] Proactive Network Traffic Prediction using Generative Adversarial Network
    Byun, Gyurin
    Vo, Van-Vi
    Raza, Syed M.
    Le, Duc-Tai
    Yang, Huigyu
    Choo, Hyunseung
    [J]. 38TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING, ICOIN 2024, 2024: 156-159
  • [33] MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples
    Jia, Jinyuan
    Qu, Wenjie
    Gong, Neil Zhenqiang
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [34] Defending Against Adversarial Iris Examples Using Wavelet Decomposition
    Soleymani, Sobhan
    Dabouei, Ali
    Dawson, Jeremy
    Nasrabadi, Nasser M.
    [J]. 2019 IEEE 10TH INTERNATIONAL CONFERENCE ON BIOMETRICS THEORY, APPLICATIONS AND SYSTEMS (BTAS), 2019,
  • [35] Using Local Convolutional Units to Defend Against Adversarial Examples
    Kocian, Matej
    Pilat, Martin
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [36] Defending against adversarial examples using perceptual image hashing
    Wu, Ke
    Wang, Zichi
    Zhang, Xinpeng
    Tang, Zhenjun
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (02)
  • [37] Evaluating Resilience of Encrypted Traffic Classification against Adversarial Evasion Attacks
    Maarouf, Ramy
    Sattar, Danish
    Matrawy, Ashraf
    [J]. 26TH IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS (IEEE ISCC 2021), 2021,
  • [38] Effects of and Defenses Against Adversarial Attacks on a Traffic Light Classification CNN
    Wan, Morris
    Han, Meng
    Li, Lin
    Li, Zhigang
    He, Selena
    [J]. ACMSE 2020: PROCEEDINGS OF THE 2020 ACM SOUTHEAST CONFERENCE, 2020: 94-99
  • [39] Defending Against Deep Learning-Based Traffic Fingerprinting Attacks with Adversarial Examples
    Hayden, Blake
    Walsh, Timothy
    Barton, Armon
    [J]. ACM Transactions on Privacy and Security, 2024, 28 (01)
  • [40] Towards improving the robustness of sequential labeling models against typographical adversarial examples using triplet loss
    Udomcharoenchaikit, Can
    Boonkwan, Prachya
    Vateekul, Peerapon
    [J]. NATURAL LANGUAGE ENGINEERING, 2023, 29 (02) : 287 - 315