ConOffense: Multi-modal multitask Contrastive learning for offensive content identification

Cited by: 0
Authors
Shome, Debaditya [1 ]
Kar, T. [1 ]
Affiliations
[1] KIIT Univ, Sch Elect Engn, Bhubaneswar, Odisha, India
Keywords
Multimodal learning; Contrastive learning; Representation learning; Social media; Offensive content identification;
DOI
10.1109/BigData52589.2021.9671427
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Hateful or offensive content has become increasingly common on social media platforms in recent years, and the problem is now widespread. There is a pressing need for effective automatic solutions for detecting such content, especially given the enormous scale of social media data. Although significant progress has been made in the automated identification of offensive content, most work has focused on textual information alone. With the rise of visual content shared on these platforms, it is now common for hateful material to appear in images rather than in the accompanying text. As a result, present-day unimodal text-based methods cannot cope with multimodal hateful content. In this paper, we propose a novel multimodal neural network powered by contrastive learning that identifies offensive posts on social media using both visual and textual information. We design the text and visual encoders with lightweight architectures to make the solution efficient for real-world use. Evaluation on the MMHS150K dataset shows state-of-the-art performance of 82.6% test accuracy, an improvement of approximately 14.1 percentage points over the previous best-performing benchmark model on the dataset.
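The abstract describes aligning visual and textual embeddings with a contrastive objective. The paper itself does not spell out its loss in this record, but a standard symmetric InfoNCE loss over paired image/text embeddings is the common formulation for such multimodal contrastive setups. The sketch below is illustrative only: the function names, batch shapes, and temperature value are assumptions, not the authors' implementation.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Project embeddings onto the unit sphere so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss for a batch of paired image/text embeddings.

    img_emb, txt_emb: arrays of shape (B, D); row i of each is one matched pair.
    Matching pairs sit on the diagonal of the similarity matrix and are
    pulled together, while all off-diagonal pairs are pushed apart.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature        # (B, B) cosine similarities
    targets = np.arange(len(logits))          # correct match for row i is column i

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[targets, targets].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Perfectly aligned pairs yield a low loss, while mismatched pairs yield a higher one, which is the gradient signal a multimodal encoder like the one described above would train against.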
Pages: 4524-4529
Page count: 6