Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning

Cited by: 0
Authors
Chen, Han [1 ,2 ]
Wang, Hairong [1 ]
Liu, Zhipeng [1 ]
Li, Yuhua [1 ]
Hu, Yifan [3 ]
Zhang, Yujing [1 ]
Shu, Kai [4 ]
Li, Ruixuan [1 ]
Yu, Philip S. [5 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Inst Artificial Intelligence, Wuhan 430074, Peoples R China
[3] Univ Sydney, Sch Comp Sci, Sydney 2006, Australia
[4] Emory Univ, Dept Comp Sci, Atlanta, GA 30322 USA
[5] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
National Natural Science Foundation of China
Keywords
Contrastive learning; Multi-modal; Fake news detection; Limited labeled data; Mismatched pairs scenario;
DOI
10.1016/j.knosys.2024.112800
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Social media has transformed the landscape of news dissemination, characterized by rapid, extensive, and diverse content, coupled with the challenge of verifying authenticity. The proliferation of multimodal news on these platforms presents new obstacles for detecting fake news. Existing approaches typically focus on a single modality, such as text or images, or combine text and image content, or pair one of them with propagation network data. However, more robust fake news detection becomes possible when all three modalities are considered simultaneously. In addition, the heavy reliance on labeled data in current detection methods is time-consuming and costly. To address these challenges, we propose a novel approach, Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning (MFCL). This method integrates intrinsic features from text, images, and propagation networks, capturing essential intermodal relationships for accurate fake news detection. Contrastive learning is employed to learn intrinsic features while mitigating the issue of limited labeled data. Furthermore, we introduce image-text matching (ITM) data augmentation to ensure consistent image-text representations and employ adaptive propagation (AP) network data augmentation for high-order feature learning. We utilize contextual transformers to bolster the effectiveness of fake news detection, unveiling crucial intermodal connections in the process. Experimental results on real-world datasets demonstrate that MFCL outperforms existing methods, maintaining high accuracy and robustness even with limited labeled data and mismatched image-text pairs. Our code is available at https://github.com/HanChen-HUST/KBSMFCL.
Pages: 14
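To make the cross-modal contrastive learning idea from the abstract concrete, the following is a minimal sketch of a symmetric contrastive (InfoNCE-style) loss between text and image embeddings of matched pairs. It is an illustrative assumption of how such an objective is commonly formulated, not the authors' actual implementation; function names, dimensions, and the temperature value are placeholders (the real code is in the linked repository).

# Minimal sketch of a symmetric cross-modal contrastive (InfoNCE-style) loss.
# All names, dimensions, and hyperparameters here are illustrative assumptions,
# not the MFCL authors' implementation.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(text_emb: torch.Tensor,
                                 image_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """text_emb, image_emb: (batch, dim) embeddings of matched text-image pairs."""
    # Normalize so dot products become cosine similarities.
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    # Similarity matrix: entry (i, j) compares text i with image j.
    logits = text_emb @ image_emb.t() / temperature
    # Matched pairs lie on the diagonal and serve as positives;
    # all other pairs in the batch act as negatives.
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    # Symmetric loss over both retrieval directions.
    loss_t2i = F.cross_entropy(logits, targets)
    loss_i2t = F.cross_entropy(logits.t(), targets)
    return (loss_t2i + loss_i2t) / 2

if __name__ == "__main__":
    # Random features stand in for text/image encoder outputs.
    text = torch.randn(8, 256)
    image = torch.randn(8, 256)
    print(cross_modal_contrastive_loss(text, image))

The same pattern extends to a third modality by adding analogous pairwise terms (e.g., text versus propagation-network embeddings), which is the kind of multi-view setup the abstract describes at a high level.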