Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning

Cited by: 0
Authors
Chen, Han [1 ,2 ]
Wang, Hairong [1 ]
Liu, Zhipeng [1 ]
Li, Yuhua [1 ]
Hu, Yifan [3 ]
Zhang, Yujing [1 ]
Shu, Kai [4 ]
Li, Ruixuan [1 ]
Yu, Philip S. [5 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Inst Artificial Intelligence, Wuhan 430074, Peoples R China
[3] Univ Sydney, Sch Comp Sci, Sydney 2006, Australia
[4] Emory Univ, Dept Comp Sci, Atlanta, GA 30322 USA
[5] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; Multi-modal; Fake news detection; Limited labeled data; Mismatched pairs scenario;
DOI
10.1016/j.knosys.2024.112800
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Social media has transformed the landscape of news dissemination, characterized by rapid, extensive, and diverse content, coupled with the challenge of verifying authenticity. The proliferation of multimodal news on these platforms has presented novel obstacles in detecting fake news. Existing approaches typically focus on a single modality, such as text or images, or combine text and image content, or either of these with propagation network data. However, more robust fake news detection lies in considering all three modalities simultaneously. In addition, the heavy reliance of current detection methods on labeled data proves time-consuming and costly. To address these challenges, we propose a novel approach, Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning (MFCL). This method integrates intrinsic features from text, images, and propagation networks, capturing essential intermodal relationships for accurate fake news detection. Contrastive learning is employed to learn intrinsic features while mitigating the issue of limited labeled data. Furthermore, we introduce image-text matching (ITM) data augmentation to ensure consistent image-text representations and employ adaptive propagation (AP) network data augmentation for high-order feature learning. We utilize contextual transformers to bolster the effectiveness of fake news detection, unveiling crucial intermodal connections in the process. Experimental results on real-world datasets demonstrate that MFCL outperforms existing methods, maintaining high accuracy and robustness even with limited labeled data and mismatched pairs. Our code is available at https://github.com/HanChen-HUST/KBSMFCL.
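The abstract names MFCL's main components without implementation detail. As a rough illustration only, the PyTorch sketch below shows generic versions of three of them: a symmetric InfoNCE-style cross-modal contrastive loss, ITM-style augmentation that re-pairs texts and images for a binary matching head, and random edge dropping as a stand-in for propagation network augmentation. Every name and hyperparameter here (cross_modal_contrastive_loss, itm_augment, drop_edges, the temperature, the in-batch negative scheme) is an assumption for illustration; the paper's adaptive ITM and AP variants are not specified in the abstract.

import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(text_emb: torch.Tensor,
                                 image_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    # L2-normalize embeddings so dot products are cosine similarities.
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = t @ v.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(t.size(0), device=t.device)
    # Symmetric InfoNCE: each text should retrieve its own image and vice
    # versa; all other in-batch items act as negatives.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))

def itm_augment(text_emb: torch.Tensor, image_emb: torch.Tensor):
    # Keep the original alignment as positives (label 1) and re-pair each
    # text with a randomly permuted image as negatives (label 0). A random
    # permutation can leave a few pairs aligned; a derangement would avoid that.
    b = text_emb.size(0)
    perm = torch.randperm(b)
    matched = torch.cat([text_emb, image_emb], dim=-1)
    mismatched = torch.cat([text_emb, image_emb[perm]], dim=-1)
    feats = torch.cat([matched, mismatched], dim=0)
    labels = torch.cat([torch.ones(b), torch.zeros(b)]).long()
    return feats, labels

def drop_edges(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    # Randomly drop a fraction p of edges from a propagation graph stored as
    # a (2, num_edges) index tensor; a generic graph-contrastive augmentation,
    # not the paper's adaptive propagation (AP) scheme.
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

if __name__ == "__main__":
    text = torch.randn(8, 256)   # stand-in for text-encoder outputs
    image = torch.randn(8, 256)  # stand-in for image-encoder outputs
    print(cross_modal_contrastive_loss(text, image).item())
    feats, labels = itm_augment(text, image)
    print(feats.shape, labels.shape)  # torch.Size([16, 512]) torch.Size([16])

In practice the ITM labels would supervise a small classifier head over the concatenated pair features, and the edge-dropped graph would feed a graph encoder whose outputs enter the same contrastive objective.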
Pages: 14
Related Papers
50 records in total
  • [41] Entity-Oriented Multi-Modal Alignment and Fusion Network for Fake News Detection. Li, Peiguang; Sun, Xian; Yu, Hongfeng; Tian, Yu; Yao, Fanglong; Xu, Guangluan. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 24: 3455-3468
  • [42] Fake News Detection Based on BERT Multi-domain and Multi-modal Fusion Network. Yu, Kai; Jiao, Shiming; Ma, Zhilong. COMPUTER VISION AND IMAGE UNDERSTANDING, 2025, 252
  • [43] Positive Unlabeled Fake News Detection via Multi-Modal Masked Transformer Network. Wang, Jinguang; Qian, Shengsheng; Hu, Jun; Hong, Richang. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 234-244
  • [44] Cross-modal attention for multi-modal image registration. Song, Xinrui; Chao, Hanqing; Xu, Xuanang; Guo, Hengtao; Xu, Sheng; Turkbey, Baris; Wood, Bradford J.; Sanford, Thomas; Wang, Ge; Yan, Pingkun. MEDICAL IMAGE ANALYSIS, 2022, 82
  • [45] Multi-modal and cross-modal for lecture videos retrieval. Nguyen, Nhu Van; Coustaty, Mickaël; Ogier, Jean-Marc. 2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2014: 2667-2672
  • [46] BCMF: A bidirectional cross-modal fusion model for fake news detection. Yu, Chuanming; Ma, Yinxue; An, Lu; Li, Gang. INFORMATION PROCESSING & MANAGEMENT, 2022, 59 (05)
  • [47] MAFE: Multi-modal Alignment via Mutual Information Maximum Perspective in Multi-modal Fake News Detection. Qin, Haimei; Jing, Yaqi; Duan, Yunqiang; Jiang, Lei. PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024: 1515-1521
  • [48] Unsupervised Multi-modal Hashing for Cross-Modal Retrieval. Yu, Jun; Wu, Xiao-Jun; Zhang, Donglin. COGNITIVE COMPUTATION, 2022, 14: 1159-1171
  • [49] Unsupervised Multi-modal Hashing for Cross-Modal Retrieval. Yu, Jun; Wu, Xiao-Jun; Zhang, Donglin. COGNITIVE COMPUTATION, 2022, 14 (03): 1159-1171
  • [50] Cross-Modal Retrieval Augmentation for Multi-Modal Classification. Gur, Shir; Neverova, Natalia; Stauffer, Chris; Lim, Ser-Nam; Kiela, Douwe; Reiter, Austin. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021: 111-123