CrowdGraph: A Crowdsourcing Multi-modal Knowledge Graph Approach to Explainable Fauxtography Detection

Cited by: 1
Authors
Kou Z. [1]
Zhang Y. [1]
Zhang D. [2]
Wang D. [1]
Affiliations
[1] University of Illinois Urbana-Champaign, 614 E. Daniel Street, Champaign, IL 61820
[2] University of Notre Dame, 384 Fitzpatrick Hall, Notre Dame, IN 46556
Funding
National Science Foundation (USA)
Keywords
crowdsourcing; multi-modal information; social media
DOI
10.1145/3555178
Abstract
Human-centric fauxtography is a category of multi-modal posts that spread misleading information on online information distribution and sharing platforms such as online social media. The reason a human-centric post is fauxtography is closely related to its multi-modal content, which consists of diverse human and non-human subjects with complex and implicit relationships. In this paper, we focus on an explainable fauxtography detection problem where the goal is to accurately identify and explain why a human-centric social media post is fauxtography (or not). Our problem is motivated by the limitations of current fauxtography detection solutions that focus primarily on the detection task but ignore the important aspect of explaining their results (e.g., why a certain component of the post delivers the misinformation). Two important challenges exist in solving our problem: 1) it is difficult to capture the implicit relations and attributes of different subjects in a fauxtography post, given that much of such knowledge is tacitly shared among different crowd workers; 2) it is not a trivial task to create a multi-modal knowledge graph from crowd workers to identify and explain human-centric fauxtography posts with multi-modal contents. To address the above challenges, we develop CrowdGraph, a crowdsourcing-based multi-modal knowledge graph approach to the explainable fauxtography detection problem. We evaluate the performance of CrowdGraph by creating a real-world dataset that consists of human-centric fauxtography posts from Twitter and Reddit. The results show that CrowdGraph not only detects fauxtography posts more accurately than state-of-the-art baselines but also provides well-justified explanations for its detection results with convincing evidence. © 2022 ACM.
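The abstract describes CrowdGraph only at a high level. As a rough, hypothetical sketch of how a crowdsourced multi-modal knowledge graph can support explainable detection, the Python snippet below pairs image-derived and text-derived claims about the same subject and flags mismatches as candidate evidence for a fauxtography verdict; all class, field, and entity names are illustrative assumptions and are not taken from the paper's implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not the CrowdGraph implementation): a post is
# represented as a small knowledge graph of crowd-contributed triples,
# each tagged with the modality (image or text) that supports it.

@dataclass(frozen=True)
class Triple:
    subject: str    # e.g., a person or object appearing in the post
    relation: str   # e.g., "located_in", "appears_with"
    obj: str        # e.g., a place, event, or other entity
    modality: str   # "image" or "text"
    worker_id: str  # crowd worker who contributed/verified the triple

@dataclass
class PostKnowledgeGraph:
    post_id: str
    triples: list = field(default_factory=list)

    def add(self, triple: Triple) -> None:
        self.triples.append(triple)

    def conflicting_claims(self):
        """Return (image_triple, text_triple) pairs whose subject and
        relation agree but whose objects differ -- candidate evidence
        that the image and the caption tell inconsistent stories."""
        image_claims = [t for t in self.triples if t.modality == "image"]
        text_claims = [t for t in self.triples if t.modality == "text"]
        conflicts = []
        for it in image_claims:
            for tt in text_claims:
                if (it.subject, it.relation) == (tt.subject, tt.relation) and it.obj != tt.obj:
                    conflicts.append((it, tt))
        return conflicts

# Toy usage: the caption places a person at a different event than the photo.
kg = PostKnowledgeGraph(post_id="tweet_123")
kg.add(Triple("person_A", "located_in", "2012_hurricane_scene", "image", "w1"))
kg.add(Triple("person_A", "located_in", "2020_protest", "text", "w2"))
for image_claim, text_claim in kg.conflicting_claims():
    print(f"Possible fauxtography evidence: image says {image_claim.obj!r}, "
          f"caption says {text_claim.obj!r} for {image_claim.subject}")
```

Pairing claims by (subject, relation) is only one plausible way to surface cross-modal inconsistencies; the actual paper builds the graph from crowd input and produces richer explanations than this toy mismatch check.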
Related Papers (50 items)
  • [1] Graph Attention Model Embedded with Multi-Modal Knowledge for Depression Detection
    Zheng, Wenbo
    Yan, Lan
    Gou, Chao
    Wang, Fei-Yue
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020,
  • [2] Richpedia: A Comprehensive Multi-modal Knowledge Graph
    Wang, Meng
    Qi, Guilin
    Wang, Haofen
    Zheng, Qiushuo
    SEMANTIC TECHNOLOGY, JIST 2019: PROCEEDINGS, 2020, 12032 : 130 - 145
  • [3] What Is a Multi-Modal Knowledge Graph: A Survey
    Peng, Jinghui
    Hu, Xinyu
    Huang, Wenbo
    Yang, Jian
    BIG DATA RESEARCH, 2023, 32
  • [4] MultiJAF: Multi-modal joint entity alignment framework for multi-modal knowledge graph
    Cheng, Bo
    Zhu, Jia
    Guo, Meimei
    NEUROCOMPUTING, 2022, 500 : 581 - 591
  • [5] MMKRL: A robust embedding approach for multi-modal knowledge graph representation learning
    Lu, Xinyu
    Wang, Lifang
    Jiang, Zejun
    He, Shichang
    Liu, Shizhong
    APPLIED INTELLIGENCE, 2022, 52 (07) : 7480 - 7497
  • [6] A Comprehensive Approach to Early Detection of Workplace Stress with Multi-Modal Analysis and Explainable AI
    Upadhya, Jiblal
    Poudel, Khem
    Ranganathan, Jaishree
    PROCEEDINGS OF THE 2024 COMPUTERS AND PEOPLE RESEARCH CONFERENCE, SIGMIS-CPR 2024, 2024,
  • [7] Contrastive Multi-Modal Knowledge Graph Representation Learning
    Fang, Quan
    Zhang, Xiaowei
    Hu, Jun
    Wu, Xian
    Xu, Changsheng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (09) : 8983 - 8996
  • [8] MMEA: Entity Alignment for Multi-modal Knowledge Graph
    Chen, Liyi
    Li, Zhi
    Wang, Yijun
    Xu, Tong
    Wang, Zhefeng
    Chen, Enhong
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT (KSEM 2020), PT I, 2020, 12274 : 134 - 147
  • [9] NativE: Multi-modal Knowledge Graph Completion in the Wild
    Zhang, Yichi
    Chen, Zhuo
    Guo, Lingbing
    Xu, Yajing
    Hu, Binbin
    Liu, Ziqi
    Zhang, Wen
    Chen, Huajun
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 91 - 101