VCD: Visual Causality Discovery for Cross-Modal Question Reasoning

Times Cited: 0
Authors
Liu, Yang [1 ]
Tan, Ying [1 ]
Luo, Jingzhou [1 ]
Chen, Weixing [1 ]
Affiliations
[1] Sun Yat Sen Univ, Guangzhou, Peoples R China
Keywords
Visual Question Answering; Visual-Linguistic; Causal Inference
DOI
10.1007/978-981-99-8540-1_25
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Existing visual question reasoning methods usually fail to explicitly discover the inherent causal mechanism and ignore jointly modeling cross-modal event temporality and causality. In this paper, we propose a visual question reasoning framework named Cross-Modal Question Reasoning (CMQR) to discover temporal causal structure and mitigate visual spurious correlations through causal intervention. To explicitly discover visual causal structure, we propose the Visual Causality Discovery (VCD) architecture, which temporally locates question-critical scenes and disentangles visual spurious correlations with an attention-based front-door causal intervention module named the Local-Global Causal Attention Module (LGCAM). To align the fine-grained interactions between linguistic semantics and spatial-temporal representations, we build an Interactive Visual-Linguistic Transformer (IVLT) that models the multi-modal co-occurrence interactions between visual and linguistic content. Extensive experiments on four datasets demonstrate the superiority of CMQR in discovering visual causal structures and achieving robust question reasoning. The supplementary file is available at https://github.com/YangLiu9208/VCD/blob/main/0793_supp.pdf.
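The abstract's core mechanism, an attention-based approximation of front-door causal intervention that fuses question-critical local features with a global feature dictionary, can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's actual LGCAM implementation: the function name, feature shapes, and the plain scaled dot-product formulation are all hypothetical stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_global_causal_attention(f_local, f_global):
    """Hypothetical LGCAM-style step: question-critical local features
    query a dictionary of global features; the front-door adjustment is
    approximated by this attention-weighted aggregation over mediators."""
    d_k = f_local.shape[-1]
    scores = f_local @ f_global.T / np.sqrt(d_k)   # (T_local, T_global) similarities
    weights = softmax(scores, axis=-1)             # attend over global "mediator" features
    return weights @ f_global                      # causality-aware local features (T_local, d)

# Usage: 4 question-critical local frames, 6 global frames, 8-dim features.
rng = np.random.default_rng(0)
out = local_global_causal_attention(rng.standard_normal((4, 8)),
                                    rng.standard_normal((6, 8)))
print(out.shape)  # (4, 8)
```

The design choice sketched here is that expensive explicit marginalization over confounders is replaced by a soft attention average, which is the standard way attention modules are used to approximate front-door adjustment.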
Pages: 309-322
Page Count: 14