Multi-modal adaptive gated mechanism for visual question answering

Cited by: 2
|
Authors
Xu, Yangshuyi [1 ]
Zhang, Lin [1 ]
Shen, Xiang [1 ]
Affiliations
[1] Shanghai Maritime Univ, Coll Informat Engn, Shanghai, Peoples R China
Source
PLOS ONE | 2023, Vol. 18, Issue 6
Keywords
ATTENTION; FUSION
DOI
10.1371/journal.pone.0287557
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Visual Question Answering (VQA) is a multimodal task in which natural language questions are asked and answered based on image content. For such tasks, obtaining accurate modality-specific features is crucial. Existing VQA models are designed mainly from the perspectives of attention mechanisms and multimodal fusion, and tend to overlook how inter-modality interaction learning, and the noise introduced during modal fusion, affect overall model performance. This paper proposes a novel and efficient multimodal adaptive gated mechanism model, MAGM. The model adds an adaptive gating mechanism to intra- and inter-modality learning and to the modal fusion process, allowing it to filter out irrelevant noise, obtain fine-grained modal features, and adaptively control the contribution of each modality to the predicted answer. In the intra- and inter-modality learning modules, self-attention gated and self-guided-attention gated units are designed to filter noise from text and image features. In the modal fusion module, an adaptive gated modal feature fusion structure is designed to obtain fine-grained modal features and improve answer accuracy. Quantitative and qualitative experiments on two VQA benchmark datasets, VQA 2.0 and GQA, show that the proposed method outperforms existing methods: MAGM achieves an overall accuracy of 71.30% on VQA 2.0 and 57.57% on GQA.
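The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the two gating ideas it names: an attention unit whose output is filtered by a learned sigmoid gate, and an adaptive fusion gate that weights each modality's contribution to the predicted answer. All class names, dimensions, and design choices here (residual gating, softmax modality weights) are assumptions for illustration, not the authors' MAGM code.

```python
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Self-attention followed by a sigmoid gate that suppresses noisy features
    (hypothetical stand-in for the paper's self-attention gated unit)."""
    def __init__(self, d: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.gate = nn.Linear(2 * d, d)  # gate conditioned on input + attended features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(x, x, x)  # intra-modality interaction
        g = torch.sigmoid(self.gate(torch.cat([x, attended], dim=-1)))
        return x + g * attended           # gate filters irrelevant information

class AdaptiveFusionGate(nn.Module):
    """Learns per-example weights controlling each modality's contribution
    (hypothetical stand-in for the adaptive gated fusion structure)."""
    def __init__(self, d: int):
        super().__init__()
        self.score = nn.Linear(2 * d, 2)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text, image: (batch, d) pooled modality features
        w = torch.softmax(self.score(torch.cat([text, image], dim=-1)), dim=-1)
        return w[:, :1] * text + w[:, 1:] * image  # adaptively weighted fusion

# Usage sketch with made-up shapes:
unit = GatedSelfAttention(d=512)
tokens = unit(torch.randn(2, 14, 512))  # (batch, sequence, dim)
fused = AdaptiveFusionGate(512)(tokens.mean(dim=1), torch.randn(2, 512))
```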
Pages: 24
Related Papers (50 in total)
  • [21] Qian, Zi; Wang, Xin; Duan, Xuguang; Qin, Pengda; Li, Yuhong; Zhu, Wenwu. Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 2941-2950.
  • [22] Jiang, Lei; Meng, Zuqiang. Knowledge-Based Visual Question Answering Using Multi-Modal Semantic Graph. Electronics, 2023, 12(6).
  • [23] Yao, Shentao; Li, Kun; Xing, Kun; Wu, Kewei; Xie, Zhao; Guo, Dan. Differentiated Attention with Multi-modal Reasoning for Video Question Answering. 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA), 2022: 525-530.
  • [24] Wang, Anran; Luu, Anh Tuan; Foo, Chuan-Sheng; Zhu, Hongyuan; Tay, Yi; Chandrasekhar, Vijay. Holistic Multi-Modal Memory Network for Movie Question Answering. IEEE Transactions on Image Processing, 2020, 29: 489-499.
  • [25] Huang, Hantao; Han, Tao; Han, Wei; Yap, Deep; Chiang, Cheng-Ming. Answer-checking in Context: A Multi-modal Fully Attention Network for Visual Question Answering. 2020 25th International Conference on Pattern Recognition (ICPR), 2021: 1173-1180.
  • [26] Yu, Zhou; Yu, Jun; Fan, Jianping; Tao, Dacheng. Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 1839-1848.
  • [27] Hu, Xinyue; Gu, Lin; Kobayashi, Kazuma; Liu, Liangchen; Zhang, Mengliang; Harada, Tatsuya; Summers, Ronald M.; Zhu, Yingying. Interpretable medical image Visual Question Answering via multi-modal relationship graph learning. Medical Image Analysis, 2024, 97.
  • [28] Qian, Tianwen; Chen, Jingjing; Zhuo, Linhai; Jiao, Yang; Jiang, Yu-Gang. NuScenes-QA: A Multi-Modal Visual Question Answering Benchmark for Autonomous Driving Scenario. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 5, 2024: 4542-4550.
  • [29] Zhang, Dianyuan; Yu, Chuanming; An, Lu. Medical Visual Question-Answering Model Based on Knowledge Enhancement and Multi-Modal Fusion. Proceedings of the Association for Information Science and Technology, 2024, 61(1): 703-708.
  • [30] Liu, Meng; Zhang, Fenglei; Luo, Xin; Liu, Fan; Wei, Yinwei; Nie, Liqiang. Advancing Video Question Answering with a Multi-modal and Multi-layer Question Enhancement Network. Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), 2023: 3985-3993.