MM-ConvBERT-LMS: Detecting Malicious Web Pages via Multi-Modal Learning and Pre-Trained Model

Cited: 1
Authors
Tong, Xin [1]
Jin, Bo [1,2]
Wang, Jingya [1]
Yang, Ying [2]
Suo, Qiwei [1]
Wu, Yong [3]
Affiliations
[1] Peoples Publ Secur Univ China, Sch Informat & Cyber Secur, Beijing 100038, Peoples R China
[2] Minist Publ Secur, Res Inst 3, Shanghai 200031, Peoples R China
[3] Natl Police Univ Criminal Justice, Dept Informat Management, Baoding 071000, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023 / Vol. 13 / Issue 05
Keywords
malicious web pages; multi-modal learning; pre-trained model; URL; HTML
DOI
10.3390/app13053327
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
In recent years, the number of malicious web pages has increased dramatically, posing a great challenge to network security. Machine learning-based detection methods have emerged as a promising alternative to traditional detection techniques. However, these methods are commonly built on single-modal features or a simple stacking of classifiers trained on different features, so they cannot effectively fuse features from different modalities, which ultimately limits detection effectiveness. To address this limitation, we propose a malicious web page detection method based on multi-modal learning and pre-trained models. First, in the input stage, the raw URL and HTML tag sequences of web pages are used as input features. To help the subsequent model learn the relationship between the two modalities and avoid information confusion, modal-type encoding and positional encoding are introduced. Next, a single-stream neural network based on the ConvBERT pre-trained model is used as the backbone classifier, and it learns the representation of multi-modal features through fine-tuning. For the output part of the model, a linear layer based on large margin softmax is applied to decision-making; this activation function effectively enlarges the classification boundary and improves robustness. In addition, a coarse-grained modal matching loss is added to the optimization objective to help the model learn cross-modal association features. Experimental results on synthetic datasets show that the proposed method generally outperforms traditional single-modal detection methods and has advantages over baseline models in terms of accuracy and reliability.
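To make the input-stage description concrete, the following is a minimal PyTorch sketch of a single-stream multi-modal embedding: URL tokens and HTML tag tokens are concatenated, then modal-type and positional embeddings are added so the backbone can tell the two modalities apart. This is our illustration, not the authors' released code; the class name, dimensions, and parameter names are all assumptions.

```python
import torch
import torch.nn as nn

class MultiModalEmbedding(nn.Module):
    """Sketch of the single-stream input described in the abstract
    (hypothetical names and sizes, not the paper's implementation)."""
    def __init__(self, vocab_size, hidden=256, max_len=512, n_modalities=2):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden)
        self.modal = nn.Embedding(n_modalities, hidden)  # 0 = URL, 1 = HTML
        self.pos = nn.Embedding(max_len, hidden)

    def forward(self, url_ids, html_ids):
        # Concatenate both modalities into one token stream: (B, L).
        ids = torch.cat([url_ids, html_ids], dim=1)
        # Modal-type encoding marks which segment each token came from.
        modal_ids = torch.cat([
            torch.zeros_like(url_ids),   # URL segment
            torch.ones_like(html_ids),   # HTML tag segment
        ], dim=1)
        # Positional encoding over the combined sequence
        # (combined length must not exceed max_len).
        pos_ids = torch.arange(ids.size(1), device=ids.device).unsqueeze(0)
        return self.token(ids) + self.modal(modal_ids) + self.pos(pos_ids)
```

The resulting embeddings would then be fed to a ConvBERT-style encoder, which attends across both segments at once; this single-stream design is what lets the model fuse the two modalities rather than classifying each separately.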
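The output stage combines a large-margin-softmax classification layer with an auxiliary modal matching loss. The sketch below uses an additive-margin variant of large margin softmax for brevity; the paper's exact LMS formulation may differ, and the scale `s`, margin `m`, and weight `alpha` are hypothetical values chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeMarginHead(nn.Module):
    """Additive-margin variant of large margin softmax (an assumption;
    the paper's exact LMS layer may be formulated differently)."""
    def __init__(self, hidden, n_classes=2, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, hidden))
        self.s, self.m = s, m

    def forward(self, feats, labels=None):
        # Cosine logits between normalized features and class weights.
        logits = F.linear(F.normalize(feats), F.normalize(self.weight))
        if labels is None:
            return self.s * logits  # inference: plain scaled cosine logits
        # Subtract a margin from the target-class logit, which widens the
        # decision boundary between benign and malicious pages.
        margin = torch.zeros_like(logits).scatter_(1, labels.unsqueeze(1), self.m)
        return self.s * (logits - margin)

def joint_loss(cls_logits, labels, match_logits, match_labels, alpha=0.5):
    """Detection loss plus a coarse-grained URL-HTML matching loss
    (alpha is a hypothetical weighting, not taken from the paper)."""
    return (F.cross_entropy(cls_logits, labels)
            + alpha * F.cross_entropy(match_logits, match_labels))
```

Here `match_logits`/`match_labels` would come from a binary task predicting whether a URL and an HTML sequence belong to the same page, which is one plausible reading of the coarse-grained modal matching objective in the abstract.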
Pages: 23