A Spam Filtering Method Based on Multi-Modal Fusion

Cited by: 20
|
Authors
Yang, Hong [1 ,2 ]
Liu, Qihe [1 ,2 ]
Zhou, Shijie [1 ,2 ]
Luo, Yang [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engineering, Chengdu 610054, Sichuan, Peoples R China
[2] 4,Sect 2,Jianshe North Rd, Chengdu 610054, Sichuan, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2019, Vol. 9, Issue 06
Keywords
spam filtering system; multi-modal; MMA-MF; fusion model; LSTM; CNN; CLASSIFICATION;
DOI
10.3390/app9061152
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703
Abstract
In recent years, single-modal spam filtering systems have achieved high detection rates for image spam or text spam. To evade such systems, spammers inject junk information into multiple modalities of an email and combine them, which lowers the recognition rate of single-modal filters and lets the message escape detection. In view of this situation, a new model called multi-modal architecture based on model fusion (MMA-MF) is proposed, which uses a multi-modal fusion method to ensure that spam is filtered effectively whether it is hidden in the text or in the image. The model fuses a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to filter spam: the LSTM processes the text part and the CNN processes the image part of an email separately to obtain two classification probability values, which are then fed into a fusion model to decide whether the email is spam or not. For the hyperparameters of the MMA-MF model, we use a grid search optimization method to find the most suitable settings, and employ a k-fold cross-validation method to evaluate the model's performance. Our experimental results show that this model is superior to traditional spam filtering systems and achieves accuracies in the range of 92.64-98.48%.
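As a reading aid for the architecture the abstract describes (LSTM for the text part, CNN for the image part, and a small fusion model over the two probabilities), the following is a minimal sketch of how such a two-branch pipeline could be wired up in Keras. The vocabulary size, sequence length, image resolution, layer widths, and the dense fusion head are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of an MMA-MF-style two-branch fusion pipeline.
# All sizes below are assumptions for illustration only.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000       # assumed vocabulary size for the text branch
MAX_LEN = 200            # assumed maximum e-mail length in tokens
IMG_SHAPE = (64, 64, 3)  # assumed image resolution for the image branch

# Text branch: LSTM classifier producing a spam probability for the text part.
text_in = layers.Input(shape=(MAX_LEN,), name="text_tokens")
x = layers.Embedding(VOCAB_SIZE, 128)(text_in)
x = layers.LSTM(64)(x)
text_prob = layers.Dense(1, activation="sigmoid", name="text_prob")(x)

# Image branch: CNN classifier producing a spam probability for the image part.
img_in = layers.Input(shape=IMG_SHAPE, name="image")
y = layers.Conv2D(32, 3, activation="relu")(img_in)
y = layers.MaxPooling2D()(y)
y = layers.Conv2D(64, 3, activation="relu")(y)
y = layers.GlobalAveragePooling2D()(y)
img_prob = layers.Dense(1, activation="sigmoid", name="image_prob")(y)

# Fusion model: combine the two probabilities into the final spam decision.
fused = layers.Concatenate()([text_prob, img_prob])
z = layers.Dense(8, activation="relu")(fused)
spam_prob = layers.Dense(1, activation="sigmoid", name="spam_prob")(z)

mma_mf = models.Model(inputs=[text_in, img_in], outputs=spam_prob)
mma_mf.compile(optimizer="adam", loss="binary_crossentropy",
               metrics=["accuracy"])
```

Hyperparameters of the three components (e.g., LSTM width, convolution filters, learning rate) could then be tuned with a grid search and scored with k-fold cross-validation, for instance using scikit-learn's ParameterGrid and KFold utilities, mirroring the evaluation procedure the abstract mentions.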
Pages: 15
Related Papers (50 records in total)
  • [1] Gradient structural similarity based gradient filtering for multi-modal image fusion
    Fu, Zhizhong
    Zhao, Yufei
    Xu, Yuwei
    Xu, Lijuan
    Xu, Jin
    [J]. INFORMATION FUSION, 2020, 53 : 251 - 268
  • [2] Visual Sorting Method Based on Multi-Modal Information Fusion
    Han, Song
    Liu, Xiaoping
    Wang, Gang
    [J]. APPLIED SCIENCES-BASEL, 2022, 12 (06):
  • [3] Evaluation Method of Teaching Styles Based on Multi-modal Fusion
    Tang, Wen
    Wang, Chongwen
    Zhang, Yi
    [J]. 2021 THE 7TH INTERNATIONAL CONFERENCE ON COMMUNICATION AND INFORMATION PROCESSING, ICCIP 2021, 2021, : 9 - 15
  • [4] Multi-modal Fusion
    Liu, Huaping
    Hussain, Amir
    Wang, Shuliang
    [J]. INFORMATION SCIENCES, 2018, 432 : 462 - 462
  • [5] AF: An Association-Based Fusion Method for Multi-Modal Classification
    Liang, Xinyan
    Qian, Yuhua
    Guo, Qian
    Cheng, Honghong
    Liang, Jiye
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (12) : 9236 - 9254
  • [6] Multi-modal fusion method for human action recognition based on IALC
    Zhang, Yinhuan
    Xiao, Qinkun
    Liu, Xing
    Wei, Yongquan
    Chu, Chaoqin
    Xue, Jingyun
    [J]. IET IMAGE PROCESSING, 2023, 17 (02) : 388 - 400
  • [7] A Novel Chinese Character Recognition Method Based on Multi-Modal Fusion
    Liu, Jin
    Lyu, Shiqi
    Yu, Chao
    Yang, Yihe
    Luan, Cuiju
    [J]. FUZZY SYSTEMS AND DATA MINING V (FSDM 2019), 2019, 320 : 487 - 492
  • [8] Multi-modal brain image fusion based on multi-level edge-preserving filtering
    Tan, Wei
    Thiton, William
    Xiang, Pei
    Zhou, Huixin
    [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2021, 64
  • [9] Multi-modal Fusion Brain Tumor Detection Method Based on Deep Learning
    Yao Hong-ge
    Shen Xin-xia
    Li Yu
    Yu Jun
    Lei Song-ze
    [J]. ACTA PHOTONICA SINICA, 2019, 48 (07)
  • [10] Test method of laser paint removal based on multi-modal feature fusion
    Huang Hai-peng
    Hao Ben-tian
    Ye De-jun
    Gao Hao
    Li Liang
    [J]. JOURNAL OF CENTRAL SOUTH UNIVERSITY, 2022, 29 (10) : 3385 - 3398