A Spam Filtering Method Based on Multi-Modal Fusion

Cited by: 20
Authors
Yang, Hong [1 ,2 ]
Liu, Qihe [1 ,2 ]
Zhou, Shijie [1 ,2 ]
Luo, Yang [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engineering, Chengdu 610054, Sichuan, Peoples R China
[2] 4,Sect 2,Jianshe North Rd, Chengdu 610054, Sichuan, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2019, Vol. 9, Issue 06
Keywords
spam filtering system; multi-modal; MMA-MF; fusion model; LSTM; CNN; CLASSIFICATION;
DOI
10.3390/app9061152
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
In recent years, single-modal spam filtering systems have achieved high detection rates for image spam or text spam. To evade such systems, spammers inject junk information into multiple modalities of an email and combine them, reducing the recognition rate of single-modal spam filters and thereby escaping detection. In view of this situation, a new model called multi-modal architecture based on model fusion (MMA-MF) is proposed, which uses a multi-modal fusion method to ensure that it can effectively filter spam whether the spam is hidden in the text or in the image. The model fuses a Convolutional Neural Network (CNN) model and a Long Short-Term Memory (LSTM) model to filter spam. The LSTM model and the CNN model process the text and image parts of an email separately to obtain two classification probability values; these two values are then fed into a fusion model to identify whether the email is spam. For the hyperparameters of the MMA-MF model, we use a grid search optimization method to find the most suitable values, and we employ a k-fold cross-validation method to evaluate the model's performance. Our experimental results show that this model is superior to traditional spam filtering systems and achieves accuracies in the range of 92.64-98.48%.
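The late-fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's trained model: the function name, weights, and bias are hypothetical, and it assumes the fusion stage is a logistic model over the two per-modality spam probabilities (the paper's actual fusion model may differ).

```python
import math

def fuse_probabilities(p_text, p_image, w_text=1.0, w_image=1.0, bias=-1.0):
    """Combine the LSTM (text) and CNN (image) spam probabilities
    with a logistic fusion layer. Weights and bias are illustrative
    placeholders, not values from the paper."""
    z = w_text * p_text + w_image * p_image + bias
    return 1.0 / (1.0 + math.exp(-z))

# If both modality classifiers report a high spam probability,
# the fused score exceeds the 0.5 decision threshold.
print(fuse_probabilities(0.9, 0.8) > 0.5)
```

In this late-fusion design, each modality is scored independently, so spam hidden in only one modality can still push the fused score past the threshold if that modality's classifier is confident.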
Pages: 15
Related Papers
50 records
  • [31] Soft multi-modal data fusion
    Coppock, S
    Mazack, L
    PROCEEDINGS OF THE 12TH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1 AND 2, 2003, : 636 - 641
  • [32] Multi-modal fusion for video understanding
    Hoogs, A
    Mundy, J
    Cross, G
    30TH APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP, PROCEEDINGS: ANALYSIS AND UNDERSTANDING OF TIME VARYING IMAGERY, 2001, : 103 - 108
  • [33] Multi-modal data fusion: A description
    Coppock, S
    Mazlack, LJ
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 2004, 3214 : 1136 - 1142
  • [34] WiCapose: Multi-modal fusion based transparent authentication in mobile environments
    Chang, Zhuo
    Meng, Yan
    Liu, Wenyuan
    Zhu, Haojin
    Wang, Lin
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2022, 66
  • [35] Adherent Peanut Image Segmentation Based on Multi-Modal Fusion
    Wang, Yujing
    Ye, Fang
    Zeng, Jiusun
    Cai, Jinhui
    Huang, Wangsen
    SENSORS, 2024, 24 (14)
  • [36] ART-based fusion of multi-modal perception for robots
    Berghoefer, Elmar
    Schulze, Denis
    Rauch, Christian
    Tscherepanow, Marko
    Koehler, Tim
    Wachsmuth, Sven
    NEUROCOMPUTING, 2013, 107 : 11 - 22
  • [37] News video classification based on multi-modal information fusion
    Lie, WN
    Su, CK
    2005 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), VOLS 1-5, 2005, : 1021 - 1024
  • [38] Fabric image retrieval based on multi-modal feature fusion
    Ning Zhang
    Yixin Liu
    Zhongjian Li
    Jun Xiang
    Ruru Pan
    Signal, Image and Video Processing, 2024, 18 : 2207 - 2217
  • [39] Disease Classification Model Based on Multi-Modal Feature Fusion
    Wan, Zhengyu
    Shao, Xinhui
    IEEE ACCESS, 2023, 11 : 27536 - 27545
  • [40] Multi-Modal Military Event Extraction Based on Knowledge Fusion
    Xiang, Yuyuan
    Jia, Yangli
    Zhang, Xiangliang
    Zhang, Zhenling
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 77 (01): : 97 - 114