FTN-VQA: MULTIMODAL REASONING BY LEVERAGING A FULLY TRANSFORMER-BASED NETWORK FOR VISUAL QUESTION ANSWERING

Cited by: 0
Authors
Wang, Runmin [1 ]
Xu, Weixiang [1 ]
Zhu, Yanbin [1 ]
Zhu, Zhenlin [1 ]
Chen, Hua [1 ]
Ding, Yajun [1 ]
Liu, Jinping [1 ]
Gao, Changxin [2 ]
Sang, Nong [2 ]
Affiliations
[1] Hunan Normal Univ, Inst Informat Sci & Engn, Changsha 410081, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
VQA; Transformer; Attention Mechanism; Multimodal Reasoning;
DOI
10.1142/S0218348X23401333
Chinese Library Classification (CLC)
O1 [Mathematics];
Discipline Codes
0701; 070101;
Abstract
Visual Question Answering (VQA) is a multimodal task that requires understanding the natural language question and attending to the relevant information in the image. Existing VQA solutions can be divided into grid-based methods and bottom-up methods. Grid-based methods extract semantic image features directly with a convolutional neural network (CNN), so they are computationally efficient, but their global convolutional features ignore key regions, which creates a performance bottleneck. Bottom-up methods detect potentially question-relevant objects with an object detection framework, e.g. Faster R-CNN, so they achieve better accuracy, but their computational efficiency suffers from the Region Proposal Network (RPN) and Non-Maximum Suppression (NMS) stages. For these reasons, we propose a fully transformer-based network (FTN) that balances computational efficiency and accuracy, can be trained end-to-end, and consists of three components: a question module, an image module, and a fusion module. We also visualize the question and image modules to explore how the transformer operates. The experimental results demonstrate that FTN focuses on key information and objects in both the question module and the image module, and our single model reaches 69.01% accuracy on the VQA2.0 dataset, surpassing the grid-based methods. Although FTN does not surpass a few state-of-the-art bottom-up methods, it has a clear advantage in computational efficiency. The code will be released at https://github.com/weixiang-xu/FTN-VQA.git.
Pages: 17
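
To make the three-module architecture named in the abstract concrete, here is a minimal PyTorch sketch of a fully transformer-based VQA model: a transformer question encoder, a ViT-style patch-based image encoder (grid features with no RPN/NMS detection stage), and a cross-attention fusion module with an answer classifier. All module names, layer sizes, and the 3129-way answer vocabulary are illustrative assumptions, not the authors' actual FTN design, which is available in the linked repository.

```python
# Minimal sketch of a three-module, fully transformer-based VQA model,
# assuming a design in the spirit of the abstract above. Positional
# encodings are omitted for brevity.
import torch
import torch.nn as nn


class QuestionModule(nn.Module):
    """Transformer encoder over word embeddings (assumed design)."""
    def __init__(self, vocab_size=20000, d_model=512, nhead=8, layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)

    def forward(self, tokens):                   # tokens: (B, L) int64
        return self.encoder(self.embed(tokens))  # (B, L, d_model)


class ImageModule(nn.Module):
    """ViT-style patch embedding + transformer encoder, i.e. grid
    features without an RPN/NMS detection stage (assumed design)."""
    def __init__(self, d_model=512, patch=16, nhead=8, layers=4):
        super().__init__()
        self.patchify = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)

    def forward(self, images):                   # images: (B, 3, H, W)
        x = self.patchify(images)                # (B, d_model, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)         # (B, N_patches, d_model)
        return self.encoder(x)


class FusionModule(nn.Module):
    """Question-to-image cross-attention followed by a classifier over a
    fixed answer vocabulary (3129 is a common VQA2.0 choice; assumed)."""
    def __init__(self, d_model=512, nhead=8, num_answers=3129):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.classifier = nn.Linear(d_model, num_answers)

    def forward(self, q_feats, img_feats):
        # Question tokens attend over image patches, then mean-pool.
        fused, _ = self.cross_attn(q_feats, img_feats, img_feats)
        return self.classifier(fused.mean(dim=1))  # (B, num_answers)


class FTN(nn.Module):
    def __init__(self):
        super().__init__()
        self.question = QuestionModule()
        self.image = ImageModule()
        self.fusion = FusionModule()

    def forward(self, tokens, images):
        return self.fusion(self.question(tokens), self.image(images))


model = FTN()
logits = model(torch.randint(0, 20000, (2, 14)), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3129])
```

The design point this sketch reflects is the abstract's trade-off: the image module consumes fixed grid patches rather than detected regions, so the forward pass is dominated by standard self-attention instead of RPN and NMS computation, while attention (rather than a global pooled CNN feature) is what lets the model focus on key regions.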