Multi-modal spatial relational attention networks for visual question answering

Cited: 5
Authors
Yao, Haibo [1 ]
Wang, Lipeng [1 ]
Cai, Chengtao [1 ]
Sun, Yuxin [1 ]
Zhang, Zhi [1 ]
Luo, Yongkang [2 ]
Affiliations
[1] Harbin Engn Univ, Coll Intelligent Syst Sci & Engn, Harbin 150001, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Heilongjiang Province;
Keywords
Visual question answering; Spatial relation; Attention mechanism; Pre-training strategy;
DOI
10.1016/j.imavis.2023.104840
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visual Question Answering (VQA) is a task that requires a VQA model to fully understand the visual information in an image and the linguistic information in a question, and then combine both to provide an answer. Recently, many VQA approaches have focused on modeling intra- and inter-modal interactions between vision and language with deep modular co-attention networks, which achieve good performance. Despite their benefits, these methods have two limitations. First, the question representation is obtained through GloVe word embeddings and a Recurrent Neural Network, which may not be sufficient to capture the intricate semantics of the question. Second, they mostly use visual appearance features extracted by Faster R-CNN to interact with language features and ignore the important spatial relations between objects in the image, so the image information is used incompletely. To overcome these limitations, we propose a novel Multi-modal Spatial Relation Attention Network (MSRAN) for VQA, which introduces spatial relationships between objects to exploit the image information more fully and thus improve VQA performance. To this end, we design two types of spatial relational attention modules that comprehensively explore the attention schemes: (i) a Self-Attention based on Explicit Spatial Relation (SA-ESR) module that explicitly models geometric relationships between objects; and (ii) a Self-Attention based on Implicit Spatial Relation (SA-ISR) module that captures hidden dynamic relationships between objects by using spatial relationships. Moreover, the pre-trained model BERT, which replaces the GloVe word embeddings and Recurrent Neural Network, is applied in MSRAN to obtain better question representations. Extensive experiments on two large benchmark datasets, VQA 2.0 and GQA, demonstrate that our proposed model achieves state-of-the-art performance.
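For illustration only, the following is a minimal PyTorch sketch (not the authors' code) of self-attention whose logits are biased by explicit pairwise box geometry, in the spirit of the SA-ESR module described above. The names SpatialRelationSelfAttention and box_geometry, and the log-ratio geometric encoding, are assumptions; MSRAN's exact formulation may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F


def box_geometry(boxes: torch.Tensor) -> torch.Tensor:
    """Pairwise geometric features for N boxes given as (x1, y1, x2, y2).

    Returns an (N, N, 4) tensor of [log(|dx|/w_i), log(|dy|/h_i), log(w_j/w_i), log(h_j/h_i)],
    a commonly used relative-geometry encoding (assumed here, not taken from the paper).
    """
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    w = (boxes[:, 2] - boxes[:, 0]).clamp(min=1e-3)
    h = (boxes[:, 3] - boxes[:, 1]).clamp(min=1e-3)
    dx = torch.log((cx[None, :] - cx[:, None]).abs().clamp(min=1e-3) / w[:, None])
    dy = torch.log((cy[None, :] - cy[:, None]).abs().clamp(min=1e-3) / h[:, None])
    dw = torch.log(w[None, :] / w[:, None])
    dh = torch.log(h[None, :] / h[:, None])
    return torch.stack([dx, dy, dw, dh], dim=-1)


class SpatialRelationSelfAttention(nn.Module):
    """Single-head self-attention over object features, biased by box geometry."""

    def __init__(self, dim: int, geo_dim: int = 4):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Maps each pairwise geometric feature vector to a scalar attention bias.
        self.geo_bias = nn.Sequential(nn.Linear(geo_dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.scale = dim ** -0.5

    def forward(self, feats: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # feats: (N, dim) appearance features; boxes: (N, 4) region coordinates.
        q, k, v = self.q(feats), self.k(feats), self.v(feats)
        logits = (q @ k.t()) * self.scale                                  # content term
        logits = logits + self.geo_bias(box_geometry(boxes)).squeeze(-1)   # geometry term
        return F.softmax(logits, dim=-1) @ v


if __name__ == "__main__":
    feats = torch.randn(6, 256)               # 6 toy region features
    xy = torch.rand(6, 2) * 200               # toy top-left corners
    wh = torch.rand(6, 2) * 24 + 1            # toy widths/heights
    boxes = torch.cat([xy, xy + wh], dim=-1)  # (x1, y1, x2, y2)
    out = SpatialRelationSelfAttention(256)(feats, boxes)
    print(out.shape)                          # torch.Size([6, 256])

An implicit variant in the SA-ISR spirit would let the network learn the pairwise bias from the features themselves rather than from hand-crafted geometry; the module above only illustrates the explicit case.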
Pages: 13
Related papers
50 records in total
  • [31] Multi-modal Knowledge-aware Hierarchical Attention Network for Explainable Medical Question Answering
    Zhang, Yingying
    Qian, Shengsheng
    Fang, Quan
    Xu, Changsheng
    [J]. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1089 - 1097
  • [32] Multi-source Multi-level Attention Networks for Visual Question Answering
    Yu, Dongfei
    Fu, Jianlong
    Tian, Xinmei
    Mei, Tao
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2019, 15 (02)
  • [33] Object-Difference Attention: A Simple Relational Attention for Visual Question Answering
    Wu, Chenfei
    Liu, Jinlai
    Wang, Xiaojie
    Dong, Xuan
    [J]. PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 519 - 527
  • [34] Cross-modal Relational Reasoning Network for Visual Question Answering
    Chen, Hongyu
    Liu, Ruifang
    Peng, Bo
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 3939 - 3948
  • [35] Holistic Multi-Modal Memory Network for Movie Question Answering
    Wang, Anran
    Anh Tuan Luu
    Foo, Chuan-Sheng
    Zhu, Hongyuan
    Tay, Yi
    Chandrasekhar, Vijay
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 489 - 499
  • [36] Interpretable medical image Visual Question Answering via multi-modal relationship graph learning
    Hu, Xinyue
    Gu, Lin
    Kobayashi, Kazuma
    Liu, Liangchen
    Zhang, Mengliang
    Harada, Tatsuya
    Summers, Ronald M.
    Zhu, Yingying
    [J]. MEDICAL IMAGE ANALYSIS, 2024, 97
  • [37] NuScenes-QA: A Multi-Modal Visual Question Answering Benchmark for Autonomous Driving Scenario
    Qian, Tianwen
    Chen, Jingjing
    Zhuo, Linhai
    Jiao, Yang
    Jiang, Yu-Gang
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4542 - 4550
  • [38] Medical Visual Question-Answering Model Based on Knowledge Enhancement and Multi-Modal Fusion
    Zhang, Dianyuan
    Yu, Chuanming
    An, Lu
    [J]. Proceedings of the Association for Information Science and Technology, 2024, 61 (01) : 703 - 708
  • [39] Multimodal feature fusion by relational reasoning and attention for visual question answering
    Zhang, Weifeng
    Yu, Jing
    Hu, Hua
    Hu, Haiyang
    Qin, Zengchang
    [J]. INFORMATION FUSION, 2020, 55 : 116 - 126
  • [40] Open-Ended Video Question Answering via Multi-Modal Conditional Adversarial Networks
    Zhao, Zhou
    Xiao, Shuwen
    Song, Zehan
    Lu, Chujie
    Xiao, Jun
    Zhuang, Yueting
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 3859 - 3870