Multi-Modal Alignment of Visual Question Answering Based on Multi-Hop Attention Mechanism

Cited by: 4
Authors
Xia, Qihao [1 ]
Yu, Chao [1 ,2 ,3 ]
Hou, Yinong [1 ]
Peng, Pingping [1 ]
Zheng, Zhengqi [1 ,2 ]
Chen, Wen [1 ,2 ,3 ]
Affiliations
[1] East China Normal Univ, Engn Ctr SHMEC Space Informat & GNSS, Shanghai 200241, Peoples R China
[2] East China Normal Univ, Shanghai Key Lab Multidimens Informat Proc, Shanghai 200241, Peoples R China
[3] East China Normal Univ, Key Lab Geog Informat Sci, Minist Educ, Shanghai 200241, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-modal alignment; multi-hop attention; visual question answering; feature fusion; SIGMOID FUNCTION; MODEL;
DOI
10.3390/electronics11111778
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The alignment of information between the image and the question is of great significance in the visual question answering (VQA) task. Self-attention is commonly used to generate attention weights between the image and the question; these weights align the two modalities, allowing the model to select the image regions relevant to the question. However, under standard self-attention, the attention weight between two objects is determined solely by the representations of those two objects, ignoring the influence of the surrounding objects. This paper proposes a novel multi-hop attention alignment method that enriches the self-attention weights with surrounding information when aligning the two modalities. In addition, to exploit position information during alignment, we propose a position embedding mechanism that extracts the position of each object and embeds it so that each question word is aligned with the correct location in the image. On the VQA2.0 dataset, our model achieves a validation accuracy of 65.77%, outperforming several state-of-the-art methods and demonstrating the effectiveness of the proposed approach.
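To make the two mechanisms described in the abstract concrete, the PyTorch-style sketch below illustrates one plausible reading of them: attention between the question and the image regions is recomputed over several hops, so later hops see regions through context accumulated by neighbouring objects in earlier hops, and a learned linear embedding of bounding-box coordinates injects position information into the region features. The module name, hop count, and projection layout are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of multi-hop cross-modal attention with position embedding.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHopAttentionAlign(nn.Module):
    """Align image regions with question features over several attention hops."""

    def __init__(self, dim: int, hops: int = 3):
        super().__init__()
        self.hops = hops
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        # Map 4-d box coordinates (x1, y1, x2, y2) into the feature space.
        self.pos_embed = nn.Linear(4, dim)

    def forward(self, regions, boxes, question):
        # regions:  (B, R, dim)  visual features for R detected objects
        # boxes:    (B, R, 4)    normalised bounding-box coordinates
        # question: (B, T, dim)  word-level question features
        x = regions + self.pos_embed(boxes)  # inject spatial position
        for _ in range(self.hops):
            q = self.q_proj(question)        # (B, T, dim)
            k = self.k_proj(x)               # (B, R, dim)
            v = self.v_proj(x)
            # Cross-modal attention weights between words and regions.
            attn = F.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
            # Each hop refines the question representation with visual context,
            # so the next hop's weights are informed by surrounding objects.
            question = question + attn @ v   # residual update, (B, T, dim)
        return question


# Example usage with random tensors (shapes are arbitrary):
model = MultiHopAttentionAlign(dim=512, hops=3)
regions = torch.randn(2, 36, 512)    # 36 detected objects per image
boxes = torch.rand(2, 36, 4)         # normalised box coordinates
question = torch.randn(2, 14, 512)   # 14 question tokens
aligned = model(regions, boxes, question)  # (2, 14, 512)
```

The residual update across hops is what distinguishes this from single-shot self-attention: the weight between a word and a region in hop t depends on context mixed in during hops 1 to t-1, which is one way to realise the "surrounding information" the abstract refers to.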
Pages: 14
Related Papers
50 records in total
  • [31] Multi-Hop Reasoning for Question Answering with Knowledge Graph
    Zhang, Jiayuan
    Cai, Yifei
    Zhang, Qian
    Cao, Zehao
    Cheng, Zhenrong
    Li, Dongmei
    Meng, Xianghao
    [J]. 2021 IEEE/ACIS 20TH INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION SCIENCE (ICIS 2021-SUMMER), 2021, : 121 - 125
  • [32] Commonsense for Generative Multi-Hop Question Answering Tasks
    Bauer, Lisa
    Wang, Yicheng
    Bansal, Mohit
    [J]. 2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 4220 - 4230
  • [33] Multi-hop community question answering based on multi-aspect heterogeneous graph
    Wu, Yongliang
    Yin, Hu
    Zhou, Qianqian
    Liu, Dongbo
    Wei, Dan
    Dong, Jiahao
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (01)
  • [34] Constraint-based Multi-hop Question Answering with Knowledge Graph
    Mitra, Sayantan
    Ramnani, Roshni
    Sengupta, Shubhashis
    [J]. 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, NAACL-HLT 2022, 2022, : 280 - 288
  • [35] Multi-level, multi-modal interactions for visual question answering over text in images
    Chen, Jincai
    Zhang, Sheng
    Zeng, Jiangfeng
    Zou, Fuhao
    Li, Yuan-Fang
    Liu, Tao
    Lu, Ping
    [J]. World Wide Web, 2022, 25 (04) : 1607 - 1623
  • [38] Open-Ended Visual Question Answering by Multi-Modal Domain Adaptation
    Xu, Yiming
    Chen, Lin
    Cheng, Zhongwei
    Duan, Lixin
    Luo, Jiebo
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 367 - 376
  • [39] Knowledge-Enhanced Visual Question Answering with Multi-modal Joint Guidance
    Wang, Jianfeng
    Zhang, Anda
    Du, Huifang
    Wang, Haofen
    Zhang, Wenqiang
    [J]. PROCEEDINGS OF THE 11TH INTERNATIONAL JOINT CONFERENCE ON KNOWLEDGE GRAPHS, IJCKG 2022, 2022, : 115 - 120
  • [40] Multi-modal Contextual Graph Neural Network for Text Visual Question Answering
    Liang, Yaoyuan
    Wang, Xin
    Duan, Xuguang
    Zhu, Wenwu
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 3491 - 3498