Multimodal Bi-direction Guided Attention Networks for Visual Question Answering

Cited by: 0
|
Authors
Cai, Linqin [1 ]
Xu, Nuoying [1 ]
Tian, Hang [1 ]
Chen, Kejia [2 ]
Fan, Haodu [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Res Ctr Artificial Intelligence & Smart Educ, Chongqing 400065, Peoples R China
[2] Chengdu Huawei Technol Co Ltd, Chengdu 500643, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visual question answering; Attention mechanism; Position attention; Deep learning; FUSION; KNOWLEDGE;
DOI
10.1007/s11063-023-11403-0
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visual question answering (VQA) has become a research hotspot at the intersection of computer vision and natural language processing. A core challenge in VQA is how to fuse multi-modal features extracted from images and questions. This paper proposes a Multimodal Bi-direction Guided Attention Network (MBGAN) for VQA that combines visual relationships with attention to achieve more refined feature fusion. Specifically, self-attention is used to extract image features and text features, while guided attention captures the correlation between each image region and the related question. To exploit the relative positions of different objects, position attention is further introduced to model relationship correlations and enhance the matching ability of multi-modal features. Given an image and a natural-language question, the proposed MBGAN learns visual relation inference and question attention networks in parallel to achieve fine-grained fusion of visual and textual features; the final answers are then obtained accurately through model stacking. MBGAN achieves 69.41% overall accuracy on the VQA-v1 dataset, 70.79% on VQA-v2, and 68.79% on COCO-QA, showing that it outperforms most state-of-the-art models.
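The guided-attention component described in the abstract can be understood as scaled dot-product attention in which features from one modality (e.g. question tokens) act as queries over keys and values from the other modality (e.g. image regions), so the question guides which regions are attended to. The sketch below is a minimal illustrative implementation of that general mechanism, not the authors' code; the function names and toy vectors are assumptions for demonstration only.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def guided_attention(queries, keys, values):
    """Scaled dot-product guided attention (illustrative sketch).

    `queries` come from one modality (e.g. question tokens) while
    `keys`/`values` come from the other (e.g. image regions), so each
    query token attends over all regions of the other modality.
    """
    d = len(keys[0])  # feature dimension used for scaling
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of value vectors -> attended feature.
        attended = [sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))]
        out.append(attended)
    return out
```

For example, a question feature aligned with the first of two image regions yields an output dominated by that region's value vector; stacking such layers bi-directionally (image-guided and question-guided) is the general pattern the paper builds on.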
Pages: 11921-11943
Page count: 23
Related Papers
50 records in total
  • [21] Deep Multimodal Reinforcement Network with Contextually Guided Recurrent Attention for Image Question Answering
    Jiang, Ai-Wen
    Liu, Bo
    Wang, Ming-Wen
    [J]. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2017, 32 (04) : 738 - 748
  • [23] Hierarchical Attention Networks for Fact-based Visual Question Answering
    Yao, Haibo
    Luo, Yongkang
    Zhang, Zhi
    Yang, Jianhang
    Cai, Chengtao
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (06) : 17281 - 17298
  • [24] Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering
    Xu, Huijuan
    Saenko, Kate
    [J]. COMPUTER VISION - ECCV 2016, PT VII, 2016, 9911 : 451 - 466
  • [25] An Effective Dense Co-Attention Networks for Visual Question Answering
    He, Shirong
    Han, Dezhi
    [J]. SENSORS, 2020, 20 (17) : 1 - 15
  • [27] Deep Modular Co-Attention Networks for Visual Question Answering
    Yu, Zhou
    Yu, Jun
    Cui, Yuhao
    Tao, Dacheng
    Tian, Qi
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6274 - 6283
  • [28] Positional Attention Guided Transformer-Like Architecture for Visual Question Answering
    Mao, Aihua
    Yang, Zhi
    Lin, Ken
    Xuan, Jun
    Liu, Yong-Jin
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 6997 - 7009
  • [29] Dual self-attention with co-attention networks for visual question answering
    Liu, Yun
    Zhang, Xiaoming
    Zhang, Qianyun
    Li, Chaozhuo
    Huang, Feiran
    Tang, Xianghong
    Li, Zhoujun
    [J]. PATTERN RECOGNITION, 2021, 117
  • [30] Question-Led object attention for visual question answering
    Gao, Lianli
    Cao, Liangfu
    Xu, Xing
    Shao, Jie
    Song, Jingkuan
    [J]. NEUROCOMPUTING, 2020, 391 : 227 - 233