EAPT: An encrypted traffic classification model via adversarial pre-trained transformers

Cited by: 0
Authors
Zhan, Mingming [1 ]
Yang, Jin [1 ,2 ]
Jia, Dongqing [1 ]
Fu, Geyuan [1 ]
Affiliations
[1] School of Cyber Science and Engineering, Sichuan University, Chengdu 610207, Sichuan, China
[2] Key Laboratory of Data Protection and Intelligent Management of the Ministry of Education, Chengdu 610207, Sichuan, China
Keywords
Distribution transformers
DOI
10.1016/j.comnet.2024.110973
Abstract
Encrypted traffic classification plays a critical role in network traffic management and optimization, as it helps identify and differentiate between various types of traffic, thereby enhancing the quality and efficiency of network services. However, with the continuous evolution of traffic encryption and network applications, a large and diverse volume of encrypted traffic has emerged, making it difficult for traditional feature-extraction-based methods to identify encrypted traffic effectively. This paper introduces EAPT, an encrypted traffic classification model built on adversarial pre-trained transformers. The model uses SentencePiece to tokenize encrypted traffic data, addressing the problem of coarse tokenization granularity and ensuring that the tokenization results more accurately reflect the characteristics of the encrypted traffic. During pre-training, EAPT employs a disentangled attention mechanism and incorporates a pre-training task similar to generative adversarial networks called Replaced BURST Detection. This approach not only enhances the model's ability to understand contextual information but also accelerates pre-training. It also reduces the number of model parameters, improving the model's generalization capability. Experimental results show that EAPT can efficiently learn traffic features from small-scale unlabeled datasets and achieves excellent performance across multiple datasets with a relatively small number of model parameters. © 2024 Elsevier B.V.
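The tokenization step described in the abstract can be sketched with the SentencePiece library. The hex rendering of payload bytes, the corpus file name, the vocabulary size, and the BPE model type below are illustrative assumptions, not details published in this record:

    # Train a subword model on encrypted payloads rendered as hex strings, so
    # recurring multi-byte patterns become single tokens instead of coarse fixed chunks.
    import sentencepiece as spm

    spm.SentencePieceTrainer.train(
        input="traffic_hex_corpus.txt",  # hypothetical corpus: one flow/BURST per line
        model_prefix="traffic_sp",
        vocab_size=4096,                 # assumed vocabulary size
        model_type="bpe",
    )

    sp = spm.SentencePieceProcessor(model_file="traffic_sp.model")
    print(sp.encode("1603010200010001fc0303", out_type=str))
    # -> learned multi-byte subwords rather than single hex characters (output varies by corpus)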
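The disentangled attention the abstract refers to separates token content from token position. In the standard DeBERTa formulation, which we assume EAPT adopts, the attention score between positions i and j is a sum of content-to-content, content-to-position, and position-to-content terms:

$$A_{ij} = Q_i^{c}\,(K_j^{c})^{\top} + Q_i^{c}\,\bigl(K^{r}_{\delta(i,j)}\bigr)^{\top} + K_j^{c}\,\bigl(Q^{r}_{\delta(j,i)}\bigr)^{\top}, \qquad H_{\text{out}} = \operatorname{softmax}\!\Bigl(\tfrac{A}{\sqrt{3d}}\Bigr)V$$

Here $Q^{c} = HW_{q}$ and $K^{c} = HW_{k}$ project token content, $Q^{r}$ and $K^{r}$ project shared relative-position embeddings, and $\delta(i, j)$ is the clipped relative distance between positions; the $\sqrt{3d}$ scaling accounts for the three score terms instead of the usual one.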
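Replaced BURST Detection is described only as "similar to generative adversarial networks"; its closest published analogue is ELECTRA's replaced-token detection, sketched below at the token level. The module sizes, MASK_ID, and all names are illustrative assumptions, not the paper's code:

    # Hedged sketch of an ELECTRA-style replaced-detection objective.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    MASK_ID, VOCAB = 3, 4096  # assumed special-token id and vocabulary size
    generator = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Linear(64, VOCAB))  # toy MLM head
    discriminator = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Linear(64, 1))  # toy per-token head

    def replaced_detection_loss(token_ids, mask):
        """token_ids: (B, T) tokenized BURSTs; mask: (B, T) bool, True where masked."""
        # 1) The generator proposes plausible tokens at the masked positions.
        with torch.no_grad():
            logits = generator(token_ids.masked_fill(mask, MASK_ID))
            sampled = torch.distributions.Categorical(logits=logits).sample()
        corrupted = torch.where(mask, sampled, token_ids)
        # 2) The discriminator labels every position: 0 = original, 1 = replaced.
        labels = (corrupted != token_ids).float()
        scores = discriminator(corrupted).squeeze(-1)  # (B, T) logits
        return F.binary_cross_entropy_with_logits(scores, labels)

    ids = torch.randint(4, VOCAB, (2, 16))  # toy batch of token ids
    msk = torch.rand(2, 16) < 0.15          # ~15% masking rate
    print(replaced_detection_loss(ids, msk))

Note that, unlike a true GAN, no discriminator gradient flows back into the generator here; ELECTRA trains the generator with its own masked-language-modeling loss, which is why such objectives are usually called GAN-like rather than strictly adversarial.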
Related Papers (50 total)
  • [22] Spanish Pre-Trained CaTrBETO Model for Sentiment Classification in Twitter
    Pijal, Washington
    Armijos, Arianna
    Llumiquinga, Jose
    Lalvay, Sebastian
    Allauca, Steven
    Cuenca, Erick
    [J]. 2022 THIRD INTERNATIONAL CONFERENCE ON INFORMATION SYSTEMS AND SOFTWARE TECHNOLOGIES, ICI2ST, 2022, : 93 - 98
  • [23] Comparable Study of Pre-trained Model on Alzheimer Disease Classification
    Odusami, Modupe
    Maskeliunas, Rytis
    Damasevicius, Robertas
    Misra, Sanjay
    [J]. COMPUTATIONAL SCIENCE AND ITS APPLICATIONS, ICCSA 2021, PT V, 2021, 12953 : 63 - 74
  • [24] Diabetic Retinopathy Classification with pre-trained Image Enhancement Model
    Mudaser, Wahidullah
    Padungweang, Praisan
    Mongkolnam, Pornchai
    Lavangnananda, Patcharaporn
    [J]. 2021 IEEE 12TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2021, : 629 - 632
  • [25] Zero-shot Mathematical Problem Solving via Generative Pre-trained Transformers
    Galatolo, Federico A.
    Cimino, Mario G. C. A.
    Vaglini, Gigliola
    [J]. ICEIS: PROCEEDINGS OF THE 24TH INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS - VOL 1, 2022, : 479 - 483
  • [26] Grounding Dialogue Systems via Knowledge Graph Aware Decoding with Pre-trained Transformers
    Chaudhuri, Debanjan
    Rony, Md Rashad Al Hasan
    Lehmann, Jens
    [J]. SEMANTIC WEB, ESWC 2021, 2021, 12731 : 323 - 339
  • [27] VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models
    Yin, Ziyi
    Ye, Muchao
    Zhang, Tianrong
    Wang, Jiaqi
    Liu, Han
    Chen, Jinghui
    Wang, Ting
    Ma, Fenglong
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 7, 2024, : 6755 - 6763
  • [28] FlowBERT: An Encrypted Traffic Classification Model Based on Transformers Using Flow Sequence
    Pan, Quanbo
    Yu, Yang
    Yan, Hanbing
    Wang, Maoli
    Qi, Bingzhi
    [J]. 2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 133 - 140
  • [29] TWilBert: Pre-trained deep bidirectional transformers for Spanish Twitter
    Gonzalez, Jose Angel
    Hurtado, Lluis-F.
    Pla, Ferran
    [J]. NEUROCOMPUTING, 2021, 426 : 58 - 69
  • [30] Introducing pre-trained transformers for high entropy alloy informatics
    Kamnis, Spyros
    [J]. MATERIALS LETTERS, 2024, 358