FTN-VQA: MULTIMODAL REASONING BY LEVERAGING A FULLY TRANSFORMER-BASED NETWORK FOR VISUAL QUESTION ANSWERING

Times Cited: 0
Authors
Wang, Runmin [1 ]
Xu, Weixiang [1 ]
Zhu, Yanbin [1 ]
Zhu, Zhenlin [1 ]
Chen, Hua [1 ]
Ding, Yajun [1 ]
Liu, Jinping [1 ]
Gao, Changxin [2 ]
Sang, Nong [2 ]
Affiliations
[1] Hunan Normal Univ, Inst Informat Sci & Engn, Changsha 410081, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
VQA; Transformer; Attention Mechanism; Multimodal Reasoning;
DOI
10.1142/S0218348X23401333
CLC Number
O1 [Mathematics];
Discipline Code
0701; 070101;
Abstract
Visual Question Answering (VQA) is a multimodal task that requires understanding the information in natural language questions and attending to the relevant information in images. Existing VQA solutions can be divided into grid-based methods and bottom-up methods. Grid-based methods extract semantic features directly from the image with a convolutional neural network (CNN), so they are computationally efficient, but their global convolutional features ignore key regions and create a performance bottleneck. Bottom-up methods detect potentially question-related objects with an object detection framework, e.g. Faster R-CNN, so they achieve better accuracy, but their efficiency suffers from the computation of the Region Proposal Network (RPN) and Non-Maximum Suppression (NMS). Motivated by these observations, we propose a fully transformer-based network (FTN) that balances computational efficiency and accuracy. It can be trained end-to-end and consists of three components: a question module, an image module, and a fusion module. We also visualize the question module and the image module to explore how the transformer operates. Experimental results demonstrate that the FTN focuses on key words in the question module and key objects in the image module, and our single model reaches 69.01% accuracy on the VQA2.0 dataset, outperforming grid-based methods. Although the FTN does not surpass some state-of-the-art bottom-up methods, it has a clear advantage in computational efficiency. The code will be released at https://github.com/weixiang-xu/FTN-VQA.git.
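To make the described three-module design concrete, below is a minimal PyTorch sketch of a fully transformer-based VQA network of the kind the abstract outlines: a transformer question encoder, a ViT-style patch-based image encoder (avoiding the RPN/NMS cost of bottom-up detectors), and a transformer fusion module feeding an answer classifier. The layer counts, dimensions, answer-vocabulary size, fusion scheme, and the class name FTNSketch are all assumptions for illustration; the record does not specify them, and this is not the authors' released implementation.

# Minimal sketch, assuming standard PyTorch building blocks; positional
# encodings are omitted for brevity. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class FTNSketch(nn.Module):
    def __init__(self, vocab_size=20000, num_answers=3129,
                 d_model=512, nhead=8, num_layers=4, patch=16):
        super().__init__()
        # Question module: token embeddings + transformer encoder.
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        q_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.q_enc = nn.TransformerEncoder(q_layer, num_layers)
        # Image module: non-overlapping patch embedding + transformer encoder,
        # extracting grid features without an object detector.
        self.patch_emb = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        v_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.v_enc = nn.TransformerEncoder(v_layer, num_layers)
        # Fusion module: joint self-attention over both token streams,
        # then a classifier over a fixed answer vocabulary.
        f_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.fusion = nn.TransformerEncoder(f_layer, num_layers)
        self.classifier = nn.Linear(d_model, num_answers)

    def forward(self, question_ids, image):
        # question_ids: (B, L) token ids; image: (B, 3, H, W)
        q = self.q_enc(self.tok_emb(question_ids))             # (B, L, d)
        v = self.patch_emb(image).flatten(2).transpose(1, 2)   # (B, N, d)
        v = self.v_enc(v)
        joint = self.fusion(torch.cat([q, v], dim=1))          # (B, L+N, d)
        return self.classifier(joint.mean(dim=1))              # answer logits

A quick smoke test: FTNSketch()(torch.randint(0, 20000, (2, 14)), torch.randn(2, 3, 224, 224)) returns a (2, 3129) logit tensor. Since every component is a transformer, the whole model can be trained end-to-end with a single cross-entropy loss over the answer vocabulary, which is the efficiency/accuracy trade-off the abstract argues for.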
Pages: 17