EAPT: An encrypted traffic classification model via adversarial pre-trained transformers

Cited by: 0
Authors
Zhan, Mingming [1 ]
Yang, Jin [1 ,2 ]
Jia, Dongqing [1 ]
Fu, Geyuan [1 ]
Affiliations
[1] School of Cyber Science and Engineering, Sichuan University, Chengdu 610207, Sichuan, China
[2] Key Laboratory of Data Protection and Intelligent Management of the Ministry of Education, Chengdu 610207, Sichuan, China
Keywords
Distribution transformers;
DOI
10.1016/j.comnet.2024.110973
Abstract
Encrypted traffic classification plays a critical role in network traffic management and optimization, as it helps identify and differentiate between various types of traffic, thereby improving the quality and efficiency of network services. However, as traffic encryption and network applications continue to evolve, the volume and diversity of encrypted traffic have grown rapidly, making it difficult for traditional feature-extraction-based methods to identify encrypted traffic effectively. This paper introduces EAPT, an encrypted traffic classification model built on adversarially pre-trained transformers. The model uses SentencePiece to tokenize encrypted traffic data, addressing the coarse granularity of conventional tokenization and ensuring that the resulting tokens more accurately reflect the characteristics of the traffic. During pre-training, EAPT employs a disentangled attention mechanism together with a pre-training task similar to generative adversarial networks, called Replaced BURST Detection. This approach not only strengthens the model's ability to capture contextual information but also accelerates pre-training and reduces the number of model parameters, improving the model's generalization capability. Experimental results show that EAPT efficiently learns traffic features from small-scale unlabeled datasets and achieves excellent performance across multiple datasets with a comparatively small parameter count. © 2024 Elsevier B.V.
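The two technical ingredients the abstract names, subword tokenization of raw traffic and a GAN-style replaced-detection pre-training task, can be illustrated with short sketches. First, a minimal tokenization sketch: payload bytes are rendered as hex text and a SentencePiece BPE model is trained on them, so token boundaries adapt to recurring byte patterns rather than a fixed, coarse granularity. The hex-pair rendering, file names, and vocabulary size here are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch, assuming payloads are rendered as hex "words" before
# SentencePiece training; the paper's actual preprocessing may differ.
import os
import sentencepiece as spm

def payload_to_text(payload: bytes) -> str:
    # Render each byte as a two-character hex word; BPE then merges
    # frequent byte sequences into larger subword tokens.
    return " ".join(f"{b:02x}" for b in payload)

# Placeholder corpus of random payloads standing in for captured traffic.
with open("traffic_corpus.txt", "w") as f:
    for _ in range(200):
        f.write(payload_to_text(os.urandom(64)) + "\n")

# Train a small BPE tokenizer on the rendered corpus.
spm.SentencePieceTrainer.train(
    input="traffic_corpus.txt", model_prefix="traffic_bpe",
    vocab_size=384, model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="traffic_bpe.model")
print(sp.encode(payload_to_text(os.urandom(64)), out_type=str))
```

Second, a pre-training task "similar to generative adversarial networks" parallels ELECTRA-style replaced token detection: a generator proposes replacements for corrupted positions and a discriminator labels each position as original or replaced. The sketch below applies that framing at the token level; the per-position binary objective, model sizes, and names are assumptions, since EAPT defines its task over BURSTs rather than individual tokens.

```python
# A minimal sketch of an ELECTRA-style replaced-detection objective,
# assuming per-token binary labels; EAPT's actual task operates on BURSTs.
import torch
import torch.nn as nn

vocab, dim, seq = 384, 64, 32
gen = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))  # generator
disc = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, 1))     # discriminator

tokens = torch.randint(0, vocab, (8, seq))       # a batch of token ids
mask = torch.rand(8, seq) < 0.15                 # positions to corrupt

with torch.no_grad():                            # sample generator replacements
    sampled = torch.distributions.Categorical(logits=gen(tokens)).sample()
corrupted = torch.where(mask, sampled, tokens)
is_replaced = (corrupted != tokens).float()      # per-position labels

logits = disc(corrupted).squeeze(-1)             # discriminator scores
loss = nn.functional.binary_cross_entropy_with_logits(logits, is_replaced)
loss.backward()
```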
Related Papers
50 records in total
  • [1] Are Pre-trained Convolutions Better than Pre-trained Transformers?
    Tay, Yi
    Dehghani, Mostafa
    Gupta, Jai
    Aribandi, Vamsi
    Bahri, Dara
    Qin, Zhen
    Metzler, Donald
    [J]. 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021: 4349 - 4359
  • [2] Calibration of Pre-trained Transformers
    Desai, Shrey
    Durrett, Greg
    [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020: 295 - 302
  • [3] Pre-trained Adversarial Perturbations
    Ban, Yuanhao
    Dong, Yinpeng
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [4] Software Defect Prediction via Generative Adversarial Networks and Pre-Trained Model
    Song, Wei
    Gan, Lu
    Bao, Tie
    [J]. INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (03): 1196 - 1209
  • [5] Towards a Transformer-Based Pre-trained Model for IoT Traffic Classification
    Bazaluk, Bruna
    Hamdan, Mosab
    Ghaleb, Mustafa
    Gismalla, Mohammed S. M.
    da Silva, Flavio S. Correa
    Batista, Daniel Macedo
    [J]. PROCEEDINGS OF 2024 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, NOMS 2024, 2024
  • [6] Pre-trained transformers: an empirical comparison
    Casola, Silvia
    Lauriola, Ivano
    Lavelli, Alberto
    [J]. MACHINE LEARNING WITH APPLICATIONS, 2022, 9
  • [7] Patent classification with pre-trained Bert model
    Kahraman, Selen Yücesoy
    Durmuşoğlu, Alptekin
    Dereli, Türkay
    [J]. JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY, 2024, 39 (04): 2485 - 2496
  • [8] Adversarial Data Augmentation for Task-Specific Knowledge Distillation of Pre-trained Transformers
    Zhang, Minjia
    Naresh, Niranjan Uma
    He, Yuxiong
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 11685 - 11693
  • [9] Unsupervised Out-of-Domain Detection via Pre-trained Transformers
    Xu, Keyang
    Ren, Tongzheng
    Zhang, Shikun
    Feng, Yihao
    Xiong, Caiming
    [J]. 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021: 1052 - 1061
  • [10] Enhancing medical image classification via federated learning and pre-trained model
    Srinivasu, Parvathaneni Naga
    Lakshmi, G. Jaya
    Narahari, Sujatha Canavoy
    Shafi, Jana
    Choi, Jaeyoung
    Ijaz, Muhammad Fazal
    [J]. EGYPTIAN INFORMATICS JOURNAL, 2024, 27