Multi-modal adaptive gated mechanism for visual question answering

Citations: 2
Authors
Xu, Yangshuyi [1 ]
Zhang, Lin [1 ]
Shen, Xiang [1 ]
Affiliations
[1] Shanghai Maritime Univ, Coll Informat Engn, Shanghai, Peoples R China
Source
PLOS ONE | 2023 / Vol. 18 / No. 06
Keywords
ATTENTION; FUSION;
DOI
10.1371/journal.pone.0287557
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
Visual Question Answering (VQA) is a multimodal task that uses natural language to ask and answer questions about image content. For multimodal tasks, obtaining accurate modality feature information is crucial. Existing VQA research approaches the problem mainly from the perspectives of attention mechanisms and multimodal fusion, and tends to overlook how modality interaction learning and the noise introduced during modality fusion affect overall model performance. This paper proposes a novel and efficient multimodal adaptive gated mechanism model, MAGM. The model adds an adaptive gating mechanism to intra- and inter-modality learning and to the modality fusion process. It can effectively filter out irrelevant noise, obtain fine-grained modality features, and adaptively control the contribution of the two modalities to the predicted answer. In the intra- and inter-modality learning modules, self-attention gated and self-guided-attention gated units are designed to effectively filter noise from text and image features. In the modality fusion module, an adaptive gated modality feature fusion structure is designed to obtain fine-grained modality features and improve answer accuracy. Quantitative and qualitative experiments on two VQA benchmark datasets, VQA 2.0 and GQA, demonstrate that the proposed method outperforms existing methods. The MAGM model achieves an overall accuracy of 71.30% on VQA 2.0 and 57.57% on GQA.
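To make the gated attention idea concrete, here is a minimal PyTorch sketch of a self-attention unit whose output is filtered by a learned sigmoid gate before a residual update. The gating form (an element-wise sigmoid projection of the attended output) and all names such as `GatedSelfAttention` are illustrative assumptions, not the paper's exact formulation; a self-guided-attention gated unit would follow the same pattern with keys and values drawn from the other modality.

```python
# Sketch only: the gate design is an assumption, not the published MAGM unit.
import torch
import torch.nn as nn


class GatedSelfAttention(nn.Module):
    """Self-attention whose output is filtered by a learned sigmoid gate."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)  # per-feature gate logits
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) text token or image region features
        attended, _ = self.attn(x, x, x)
        # Sigmoid gate in [0, 1] suppresses noisy attended features
        # before they enter the residual update.
        g = torch.sigmoid(self.gate(attended))
        return self.norm(x + g * attended)


unit = GatedSelfAttention(dim=512)
feats = torch.randn(2, 36, 512)  # e.g. 36 image regions per sample
out = unit(feats)                # (2, 36, 512), noise-filtered features
```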
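The adaptive gated fusion can likewise be sketched as an input-dependent convex mix of the pooled text and image vectors. This is one plausible reading of the abstract; `AdaptiveGatedFusion` and the concatenation-based gate are assumptions, not the published architecture.

```python
# Sketch only: one plausible form of adaptive gated modality fusion.
import torch
import torch.nn as nn


class AdaptiveGatedFusion(nn.Module):
    """Fuse pooled text and image vectors with an input-dependent gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text, image: (batch, dim) pooled modality features
        g = torch.sigmoid(self.gate(torch.cat([text, image], dim=-1)))
        # g adaptively controls each modality's contribution to the
        # fused representation fed to the answer classifier.
        return g * text + (1.0 - g) * image
```

A gate value near 1 lets the text feature dominate the fused vector, while a value near 0 favors the image feature, matching the abstract's goal of adaptively controlling each modality's contribution to the predicted answer.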
Pages: 24
Related Papers
50 records in total
  • [41] Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering
    Lin, Weizhe
    Chen, Jinghong
    Mei, Jingbiao
    Coca, Alexandru
    Byrne, Bill
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [42] Text-Guided Object Detector for Multi-modal Video Question Answering
    Shen, Ruoyue
    Inoue, Nakamasa
    Shinoda, Koichi
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 1032 - 1042
  • [43] Open-Ended Multi-Modal Relational Reasoning for Video Question Answering
    Luo, Haozheng
    Qin, Ruiyang
    Xu, Chenwei
    Ye, Guo
    Luo, Zening
    2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, : 363 - 369
  • [44] Visual question answering with attention transfer and a cross-modal gating mechanism
    Li, Wei
    Sun, Jianhui
    Liu, Ge
    Zhao, Linglan
    Fang, Xiangzhong
PATTERN RECOGNITION LETTERS, 2020, 133: 334 - 340
  • [45] Temporally Multi-Modal Semantic Reasoning with Spatial Language Constraints for Video Question Answering
    Liu, Mingyang
    Wang, Ruomei
    Zhou, Fan
    Lin, Ge
SYMMETRY-BASEL, 2022, 14 (06)
  • [46] ESSAY-ANCHOR ATTENTIVE MULTI-MODAL BILINEAR POOLING FOR TEXTBOOK QUESTION ANSWERING
    Li, Juzheng
    Su, Hang
    Zhu, Jun
    Zhang, Bo
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [47] MRD-Net: Multi-Modal Residual Knowledge Distillation for Spoken Question Answering
    You, Chenyu
    Chen, Nuo
    Zou, Yuexian
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3985 - 3991
  • [48] Gaining Extra Supervision via Multi-task learning for Multi-Modal Video Question Answering
    Kim, Junyeong
    Ma, Minuk
    Kim, Kyungsu
    Kim, Sungjin
    Yoo, Chang D.
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [49] Co-Attending Free-Form Regions and Detections with Multi-Modal Multiplicative Feature Embedding for Visual Question Answering
    Lu, Pan
    Li, Hongsheng
    Zhang, Wei
    Wang, Jianyong
    Wang, Xiaogang
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 7218 - 7225
  • [50] Multi-Modal Correlated Network with Emotional Reasoning Knowledge for Social Intelligence Question-Answering
    Xie, Baijun
    Park, Chung Hyuk
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 3067 - 3073