Advancing Vietnamese Visual Question Answering with Transformer and Convolutional Integration

Cited by: 0
Authors
Nguyen, Ngoc Son [1 ,3 ]
Nguyen, Van Son [1 ,3 ]
Le, Tung [2 ,3 ]
Affiliations
[1] Univ Sci, Fac Math & Comp Sci, Ho Chi Minh, Vietnam
[2] Univ Sci, Fac Informat Technol, Ho Chi Minh, Vietnam
[3] Vietnam Natl Univ, Ho Chi Minh, Vietnam
Keywords
Visual question answering; ViVQA; EfficientNet; BLIP-2; Convolutional;
DOI
10.1016/j.compeleceng.2024.109474
CLC number
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
Visual Question Answering (VQA) has recently emerged as a promising research domain, attracting considerable interest in artificial intelligence and computer vision. Despite the prevalence of approaches for English, there is a notable lack of systems developed for other languages, particularly Vietnamese. This study aims to bridge this gap through comprehensive experiments on the Vietnamese Visual Question Answering (ViVQA) dataset, demonstrating the effectiveness of our proposed model. In response to community interest, we have developed a model that enhances image representation capabilities, thereby improving overall performance in the ViVQA system. To this end, we propose AViVQA-TranConI (Advancing Vietnamese Visual Question Answering with Transformer and Convolutional Integration). AViVQA-TranConI integrates Bootstrapping Language-Image Pre-training with frozen unimodal models (BLIP-2) and the convolutional neural network EfficientNet to extract and process both local and global features from images. This integration leverages the strengths of transformer-based architectures for capturing comprehensive contextual information and of convolutional networks for capturing detailed local features. By freezing the parameters of these pre-trained models, we substantially reduce computational cost and training time while maintaining high performance. This approach markedly improves image representation and enhances the performance of existing VQA systems. We then leverage a multi-modal fusion module based on a general-purpose multi-modal foundation model (BEiT-3) to fuse visual and textual features. Our experimental findings demonstrate that AViVQA-TranConI surpasses competing baselines, achieving promising performance, most notably an accuracy of 71.04% on the ViVQA test set, which marks a significant advancement in this research area. The code is available at https://github.com/nngocson2002/ViVQA.
Pages: 18
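
The dual-branch image encoder described in the abstract, a frozen transformer-based vision encoder (BLIP-2's vision tower) for global context combined with a frozen EfficientNet for local features, can be outlined roughly as follows. This is a minimal, hypothetical sketch under stated assumptions, not the authors' released code: the class name DualBranchImageEncoder, the projection dimensions, and the pooling and concatenation strategy are illustrative, and the transformer branch is passed in as a generic module rather than an actual BLIP-2 checkpoint.

# Minimal sketch of a frozen dual-branch image encoder (assumed structure, not the paper's code).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights


class DualBranchImageEncoder(nn.Module):
    def __init__(self, vit_encoder: nn.Module, vit_dim: int, out_dim: int = 768):
        super().__init__()
        # Global branch: a frozen transformer encoder standing in for BLIP-2's vision tower.
        # Assumed to map (B, 3, H, W) images to (B, vit_dim) features.
        self.vit = vit_encoder
        # Local branch: frozen EfficientNet-B0 convolutional backbone from torchvision.
        self.cnn = efficientnet_b0(weights=EfficientNet_B0_Weights.DEFAULT).features
        for p in list(self.vit.parameters()) + list(self.cnn.parameters()):
            p.requires_grad = False  # freezing both backbones cuts training cost, as in the abstract
        # Only these lightweight projection layers would be trained.
        self.proj_vit = nn.Linear(vit_dim, out_dim)
        self.proj_cnn = nn.Linear(1280, out_dim)  # 1280 = EfficientNet-B0 final channel count

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        global_feat = self.proj_vit(self.vit(images))            # (B, out_dim) global context
        local_map = self.cnn(images)                             # (B, 1280, H', W') local feature map
        local_feat = self.proj_cnn(local_map.mean(dim=(2, 3)))   # global-average pool, then project
        return torch.cat([global_feat, local_feat], dim=-1)      # fused image representation


# Usage with a dummy stand-in for the frozen vision transformer (illustration only):
dummy_vit = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 512))
encoder = DualBranchImageEncoder(dummy_vit, vit_dim=512)
feats = encoder(torch.randn(2, 3, 224, 224))  # -> torch.Size([2, 1536])

In the full system described in the abstract, this fused image feature would then be combined with the question representation by a BEiT-3-based multi-modal fusion module and passed to an answer classifier; that stage is omitted from the sketch.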
Related Papers
50 results in total
  • [21] Co-attention graph convolutional network for visual question answering
    Chuan Liu
    Ying-Ying Tan
    Tian-Tian Xia
    Jiajing Zhang
    Ming Zhu
    Multimedia Systems, 2023, 29 : 2527 - 2543
  • [22] Adaptive sparse triple convolutional attention for enhanced visual question answering
    Wang, Ronggui
    Chen, Hong
    Yang, Juan
    Xue, Lixia
    VISUAL COMPUTER, 2025,
  • [23] Visual Question Answering
    Nada, Ahmed
    Chen, Min
    2024 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2024, : 6 - 10
  • [24] Combining Multi-vision Embedding in Contextual Attention for Vietnamese Visual Question Answering
    Anh Duc Nguyen
    Tung Le
    Huy Tien Nguyen
    IMAGE AND VIDEO TECHNOLOGY 2022, PSIVT 2022, 2023, 13763 : 172 - 185
  • [25] Question Modifiers in Visual Question Answering
    Britton, William
    Sarkhel, Somdeb
    Venugopal, Deepak
    LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 1472 - 1479
  • [26] An Experimental Study of Vietnamese Question Answering System
    Vu Mai Tran
    Vinh Duc Nguyen
    Oanh Thi Tran
    Uyen Thu Thi Pham
    Thuy-Quang Ha
    2009 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING, 2009, : 152 - 155
  • [27] Semantic Parsing for Vietnamese Question Answering System
    Vu Xuan Tung
    Nguyen Le Minh
    Duc Tam Hoang
    2015 Seventh International Conference on Knowledge and Systems Engineering (KSE), 2015, : 332 - 335
  • [28] Surgical-VQA: Visual Question Answering in Surgical Scenes Using Transformer
    Seenivasan, Lalithkumar
    Islam, Mobarakol
    Krishna, Adithya K.
    Ren, Hongliang
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VII, 2022, 13437 : 33 - 43
  • [29] Multi-Modal Fusion Transformer for Visual Question Answering in Remote Sensing
    Siebert, Tim
    Clasen, Kai Norman
    Ravanbakhsh, Mahdyar
    Demir, Begüm
    IMAGE AND SIGNAL PROCESSING FOR REMOTE SENSING XXVIII, 2022, 12267
  • [30] ST-VQA: shrinkage transformer with accurate alignment for visual question answering
    Xia, Haiying
    Lan, Richeng
    Li, Haisheng
    Song, Shuxiang
    APPLIED INTELLIGENCE, 2023, 53 (18) : 20967 - 20978