ConOffense: Multi-modal multitask Contrastive learning for offensive content identification

Cited by: 0
Authors
Shome, Debaditya [1 ]
Kar, T. [1 ]
Affiliations
[1] KIIT Univ, Sch Elect Engn, Bhubaneswar, Odisha, India
Keywords
Multimodal learning; Contrastive learning; Representation learning; Social media; Offensive content identification;
DOI
10.1109/BigData52589.2021.9671427
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hateful or offensive content has become increasingly common on social media platforms in recent years, and the problem is now widespread. Given the enormous scale of social media data, there is a pressing need for effective automatic methods for detecting such content. Although significant progress has been made in automated offensive content identification, most work has focused solely on textual information. With the rise of visual content shared on these platforms, hateful material now frequently appears in images rather than in the accompanying text, so present-day unimodal text-based methods cannot cope with multimodal hateful content. In this paper, we propose a novel multimodal neural network powered by contrastive learning that identifies offensive posts on social media using both visual and textual information. We design the text and visual encoders with lightweight architectures to make the solution efficient for real-world use. Evaluation on the MMHS150K dataset shows state-of-the-art performance of 82.6% test accuracy, an improvement of approximately +14.1% accuracy over the previous best-performing benchmark model on the dataset.
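The abstract describes pairing a text encoder and a visual encoder trained with a contrastive objective over image-text pairs. As an illustration only, here is a minimal NumPy sketch of a symmetric InfoNCE-style contrastive loss of the kind commonly used in such multimodal setups; the function name, the temperature value, and the use of unit-normalized embeddings are assumptions for this sketch, not details taken from the paper:

```python
import numpy as np

def normalize(x):
    """L2-normalize each row so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """InfoNCE-style loss: matched image/text pairs (the diagonal of the
    similarity matrix) are pulled together, mismatched pairs pushed apart.
    Temperature 0.07 is a common default, assumed here for illustration."""
    img = normalize(img_emb)
    txt = normalize(txt_emb)
    logits = img @ txt.T / temperature      # (N, N) pairwise similarities
    idx = np.arange(len(logits))            # positives sit on the diagonal

    def cross_entropy(l):
        # numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[idx, idx].mean()

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With aligned pairs (each image embedding matched to its own text embedding) the loss is near zero; shuffling the pairing raises it, which is the signal that trains the two encoders toward a shared embedding space.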
Pages: 4524-4529 (6 pages)
Related Papers
50 records
  • [1] Multi-modal Contrastive Learning for Healthcare Data Analytics
    Li, Rui
    Gao, Jing
    2022 IEEE 10TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2022), 2022, : 120 - 127
  • [2] Turbo your multi-modal classification with contrastive learning
    Zhang, Zhiyu
    Liu, Da
    Liu, Shengqiang
    Wang, Anna
    Gao, Jie
    Li, Yali
    INTERSPEECH 2023, 2023, : 1848 - 1852
  • [3] Contrastive Multi-Modal Knowledge Graph Representation Learning
    Fang, Quan
    Zhang, Xiaowei
    Hu, Jun
    Wu, Xian
    Xu, Changsheng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (09) : 8983 - 8996
  • [4] Deep contrastive representation learning for multi-modal clustering
    Lu, Yang
    Li, Qin
    Zhang, Xiangdong
    Gao, Quanxue
    NEUROCOMPUTING, 2024, 581
  • [5] Improving Code Search with Multi-Modal Momentum Contrastive Learning
    Shi, Zejian
    Xiong, Yun
    Zhang, Yao
    Jiang, Zhijie
    Zhao, Jinjing
    Wang, Lei
    Li, Shanshan
    2023 IEEE/ACM 31ST INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION, ICPC, 2023, : 280 - 291
  • [6] Improving Medical Multi-modal Contrastive Learning with Expert Annotations
    Kumar, Yogesh
    Marttinen, Pekka
    COMPUTER VISION - ECCV 2024, PT XX, 2025, 15078 : 468 - 486
  • [7] CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations
    Zolfaghari, Mohammadreza
    Zhu, Yi
    Gehler, Peter
    Brox, Thomas
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1430 - 1439
  • [8] Graph Embedding Contrastive Multi-Modal Representation Learning for Clustering
    Xia, Wei
    Wang, Tianxiu
    Gao, Quanxue
    Yang, Ming
    Gao, Xinbo
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 1170 - 1183
  • [9] CrossMoCo: Multi-modal Momentum Contrastive Learning for Point Cloud
    Paul, Sneha
    Patterson, Zachary
    Bouguila, Nizar
    2023 20TH CONFERENCE ON ROBOTS AND VISION, CRV, 2023, : 273 - 280
  • [10] Multi-modal hypergraph contrastive learning for medical image segmentation
    Jing, Weipeng
    Wang, Junze
    Di, Donglin
    Li, Dandan
    Song, Yang
    Fan, Lei
    PATTERN RECOGNITION, 2025, 165