A Spam Filtering Method Based on Multi-Modal Fusion

Cited by: 20
Authors
Yang, Hong [1 ,2 ]
Liu, Qihe [1 ,2 ]
Zhou, Shijie [1 ,2 ]
Luo, Yang [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engineering, Chengdu 610054, Sichuan, Peoples R China
[2] 4,Sect 2,Jianshe North Rd, Chengdu 610054, Sichuan, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2019, Vol. 9, Issue 6
关键词
spam filtering system; multi-modal; MMA-MF; fusion model; LSTM; CNN; CLASSIFICATION;
DOI
10.3390/app9061152
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
In recent years, single-modal spam filtering systems have achieved high detection rates for image spam or text spam. To evade such systems, spammers inject junk information into several modalities of an email and combine them, which lowers the recognition rate of single-modal spam filters and allows the spam to escape detection. In view of this situation, a new model called multi-modal architecture based on model fusion (MMA-MF) is proposed, which uses a multi-modal fusion method to ensure that spam is filtered effectively whether the junk content is hidden in the text or in the image. The model fuses a Convolutional Neural Network (CNN) model and a Long Short-Term Memory (LSTM) model to filter spam: the LSTM model and the CNN model process the text and image parts of an email separately to obtain two classification probability values, and these two probability values are then fed into a fusion model to decide whether the email is spam or not. For the hyperparameters of the MMA-MF model, we use a grid search optimization method to find the most suitable values, and we employ k-fold cross-validation to evaluate the model's performance. Our experimental results show that this model is superior to traditional spam filtering systems and achieves accuracies in the range of 92.64-98.48%.
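To make the described architecture concrete, below is a minimal sketch of a two-branch classifier in the spirit of MMA-MF, written in Keras: an LSTM branch yields a spam probability for the text part, a small CNN branch yields one for the image part, and a fusion sub-network combines the two probabilities into a final decision. All layer sizes, input shapes, and vocabulary parameters are illustrative assumptions; the abstract does not specify the paper's exact configuration.

from tensorflow.keras import layers, Model

VOCAB_SIZE, SEQ_LEN = 20000, 200   # assumed text preprocessing parameters
IMG_H, IMG_W, IMG_C = 64, 64, 3    # assumed image size

# Text branch: embedding + LSTM -> spam probability for the text part.
text_in = layers.Input(shape=(SEQ_LEN,), name="text")
x = layers.Embedding(VOCAB_SIZE, 128)(text_in)
x = layers.LSTM(64)(x)
p_text = layers.Dense(1, activation="sigmoid", name="p_text")(x)

# Image branch: small CNN -> spam probability for the image part.
img_in = layers.Input(shape=(IMG_H, IMG_W, IMG_C), name="image")
y = layers.Conv2D(32, 3, activation="relu")(img_in)
y = layers.MaxPooling2D()(y)
y = layers.Conv2D(64, 3, activation="relu")(y)
y = layers.GlobalAveragePooling2D()(y)
p_image = layers.Dense(1, activation="sigmoid", name="p_image")(y)

# Fusion model: combine the two branch probabilities into a final decision.
fused = layers.Concatenate()([p_text, p_image])
z = layers.Dense(8, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="spam")(z)

model = Model(inputs=[text_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

The grid search over hyperparameters and the k-fold evaluation mentioned in the abstract could be layered on top of such a model with standard tooling (e.g., scikit-learn's ParameterGrid and KFold), but those details are not given in this record.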
Pages: 15
Related Papers
50 entries in total
  • [31] Multi-modal fusion for video understanding
    Hoogs, A
    Mundy, J
    Cross, G
    [J]. 30TH APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP, PROCEEDINGS: ANALYSIS AND UNDERSTANDING OF TIME VARYING IMAGERY, 2001, : 103 - 108
  • [32] WiCapose: Multi-modal fusion based transparent authentication in mobile environments
    Chang, Zhuo
    Meng, Yan
    Liu, Wenyuan
    Zhu, Haojin
    Wang, Lin
    [J]. JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2022, 66
  • [33] Adherent Peanut Image Segmentation Based on Multi-Modal Fusion
    Wang, Yujing
    Ye, Fang
    Zeng, Jiusun
    Cai, Jinhui
    Huang, Wangsen
    [J]. SENSORS, 2024, 24 (14)
  • [34] Attention-based multi-modal fusion sarcasm detection
    Liu, Jing
    Tian, Shengwei
    Yu, Long
    Long, Jun
    Zhou, Tiejun
    Wang, Bo
    [J]. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 44 (02) : 2097 - 2108
  • [35] Intensity gradient based registration and fusion of multi-modal images
    Haber, E.
    Modersitzki, J.
    [J]. METHODS OF INFORMATION IN MEDICINE, 2007, 46 (03) : 292 - 299
  • [36] Multi-Modal Military Event Extraction Based on Knowledge Fusion
    Xiang, Yuyuan
    Jia, Yangli
    Zhang, Xiangliang
    Zhang, Zhenling
    [J]. CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 77 (01): : 97 - 114
  • [37] Disease Classification Model Based on Multi-Modal Feature Fusion
    Wan, Zhengyu
    Shao, Xinhui
    [J]. IEEE ACCESS, 2023, 11 : 27536 - 27545
  • [38] Multi-modal Image Fusion Based on ROI and Laplacian Pyramid
    Gao, Xiong
    Zhang, Hong
    Chen, Hao
    Li, Jiafeng
    [J]. SIXTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2014), 2015, 9443
  • [39] Generative-Based Fusion Mechanism for Multi-Modal Tracking
    Tang, Zhangyong
    Xu, Tianyang
    Wu, Xiaojun
    Zhu, Xue-Feng
    Kittler, Josef
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024: 5189 - 5197
  • [40] ART-based fusion of multi-modal perception for robots
    Berghoefer, Elmar
    Schulze, Denis
    Rauch, Christian
    Tscherepanow, Marko
    Koehler, Tim
    Wachsmuth, Sven
    [J]. NEUROCOMPUTING, 2013, 107 : 11 - 22