Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering

Cited by: 12
Authors
Cao, Jianjian [1 ]
Qin, Xiameng [2 ]
Zhao, Sanyuan [1 ]
Shen, Jianbing [3 ]
Affiliations
[1] Beijing Inst Technol, Dept Comp Sci, Beijing 100081, Peoples R China
[2] Baidu Inc, Beijing 100193, Peoples R China
[3] Univ Macau, Dept Comp & Informat Sci, State Key Lab Internet Things Smart City, Macau, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Visualization; Cognition; Task analysis; Semantics; Syntactics; Deep learning; Prediction algorithms; Graph matching attention (GMA); relational reasoning; visual question answering (VQA); NETWORKS;
DOI
10.1109/TNNLS.2021.3135655
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Answering semantically complicated questions about an image is challenging in the visual question answering (VQA) task. Although images can be well represented by deep learning, questions are often embedded in a simplistic way that fails to capture their meaning. Moreover, because visual and textual features come from different modalities, a gap exists between them, making it difficult to align and exploit cross-modality information. In this article, we focus on these two problems and propose a graph matching attention (GMA) network. First, it builds a graph not only for the image but also for the question, using both syntactic and embedding information. Next, we explore the intramodality relationships with a dual-stage graph encoder and then present a bilateral cross-modality GMA to infer the relationships between the image and the question. The updated cross-modality features are then sent to the answer prediction module to predict the final answer. Experiments demonstrate that our network achieves state-of-the-art performance on the GQA and VQA 2.0 datasets. Ablation studies verify the effectiveness of each module in our GMA network.
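The abstract describes the bilateral cross-modality GMA step only at a high level. The PyTorch module below is a minimal sketch of one plausible reading of that step: each modality attends to the other through a shared affinity matrix, and each node is updated by fusing its own feature with the attended context from the other modality. All names and design choices here (a single shared affinity matrix, fusion by concatenation, the 512-d feature size, the module and layer names) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralGraphMatchingAttention(nn.Module):
    """Sketch of a bilateral cross-modality matching attention.

    Assumed design, not the paper's exact method: one affinity matrix
    between visual and question graph nodes, softmax-normalized in each
    direction, followed by concatenation-based feature fusion.
    """

    def __init__(self, dim: int = 512):
        super().__init__()
        self.proj_v = nn.Linear(dim, dim)       # project visual node features
        self.proj_q = nn.Linear(dim, dim)       # project question node features
        self.fuse_v = nn.Linear(2 * dim, dim)   # fuse visual node + question context
        self.fuse_q = nn.Linear(2 * dim, dim)   # fuse question node + visual context

    def forward(self, v: torch.Tensor, q: torch.Tensor):
        # v: (n_v, dim) visual graph node features
        # q: (n_q, dim) question graph node features
        sim = self.proj_v(v) @ self.proj_q(q).t()   # (n_v, n_q) cross-modality affinity
        attn_v2q = F.softmax(sim, dim=1)            # each visual node attends to question nodes
        attn_q2v = F.softmax(sim.t(), dim=1)        # each question node attends to visual nodes
        v_ctx = attn_v2q @ q                        # question context per visual node
        q_ctx = attn_q2v @ v                        # visual context per question node
        v_new = F.relu(self.fuse_v(torch.cat([v, v_ctx], dim=-1)))
        q_new = F.relu(self.fuse_q(torch.cat([q, q_ctx], dim=-1)))
        return v_new, q_new

# Usage: e.g., 36 detected regions and 14 question tokens, 512-d features.
gma = BilateralGraphMatchingAttention(dim=512)
v_feats, q_feats = torch.randn(36, 512), torch.randn(14, 512)
v_out, q_out = gma(v_feats, q_feats)
print(v_out.shape, q_out.shape)  # torch.Size([36, 512]) torch.Size([14, 512])
```

In the paper's pipeline, such updated features would feed the answer prediction module; here the intramodality dual-stage graph encoding that precedes matching is omitted for brevity.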
Pages: 12