ECENet: Explainable and Context-Enhanced Network for Multi-modal Fact Verification

Cited by: 1
Authors
Zhang, Fanrui [1 ]
Liu, Jiawei [1 ]
Zhang, Qiang [1 ]
Sun, Esther [2 ]
Xie, Jingyi [1 ]
Zha, Zheng-Jun [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] Univ Toronto, Toronto, ON, Canada
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Multi-modal fact verification; Attention mechanism; Deep reinforcement learning; Interpretability;
DOI
10.1145/3581783.3612183
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recently, falsified claims incorporating both text and images have been disseminated more effectively than those containing text alone, raising significant concerns for multi-modal fact verification. Existing research makes contributions to multi-modal feature extraction and interaction, but fails to fully utilize and enhance the valuable and intricate semantic relationships between distinct features. Moreover, most detectors merely provide a single outcome judgment and lack an inference process or explanation. Taking these factors into account, we propose a novel Explainable and Context-Enhanced Network (ECENet) for multi-modal fact verification, making the first attempt to integrate multi-clue feature extraction, multi-level feature reasoning, and justification (explanation) generation within a unified framework. Specifically, we propose an Improved Coarse- and Fine-grained Attention Network, equipped with two types of level-grained attention mechanisms, to facilitate a comprehensive understanding of contextual information. Furthermore, we propose a novel justification generation module via deep reinforcement learning that does not require additional labels. In this module, a sentence extractor agent measures the importance between the query claim and all document sentences at each time step, selecting a suitable amount of high-scoring sentences to be rewritten as the explanation of the model. Extensive experiments demonstrate the effectiveness of the proposed method.
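The justification module described above scores each document sentence against the query claim and keeps the highest-scoring ones as the basis for an explanation. A minimal, hypothetical sketch of that claim-sentence scoring-and-selection idea follows; the function names, bag-of-words features, and cosine scoring are illustrative stand-ins, not the paper's learned features or its reinforcement-learning-trained extractor agent.

```python
# Illustrative sketch of claim-guided evidence sentence selection.
# ECENet trains a sentence-extractor agent with deep reinforcement
# learning; here a simple cosine similarity over bag-of-words vectors
# stands in for the learned importance score.
from collections import Counter
import math

def bow_vector(text):
    """Toy term-frequency vector (stand-in for learned sentence features)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_evidence(claim, sentences, k=2):
    """Rank document sentences by similarity to the claim; keep the top k
    as candidate evidence to be rewritten into an explanation."""
    q = bow_vector(claim)
    ranked = sorted(sentences, key=lambda s: cosine(q, bow_vector(s)), reverse=True)
    return ranked[:k]
```

In the paper this selection is made sequentially by an agent rewarded for explanation quality, so the number of kept sentences adapts to the claim rather than being a fixed `k`.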
Pages: 1231-1240 (10 pages)