Dual self-attention with co-attention networks for visual question answering

Cited by: 40
Authors
Liu, Yun [1 ,2 ]
Zhang, Xiaoming [3 ]
Zhang, Qianyun [3 ]
Li, Chaozhuo [4 ]
Huang, Feiran [5 ]
Tang, Xianghong [6 ]
Li, Zhoujun [2 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Beijing Key Lab Internet Culture & Digital Dissem, Beijing, Peoples R China
[2] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm, Beijing, Peoples R China
[3] Beihang Univ, Sch Cyber Sci & Technol, Beijing, Peoples R China
[4] Microsoft Res Asia, Beijing, Peoples R China
[5] Jinan Univ, Coll Informat Sci & Technol, Coll Cyber Secur, Guangzhou, Peoples R China
[6] Guizhou Univ, Key Lab Adv Mfg Technol, Minist Educ, Guiyang, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Self-attention; Visual-textual co-attention; Visual question answering;
DOI
10.1016/j.patcog.2021.107956
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visual Question Answering (VQA), an important task in joint vision-and-language understanding, has attracted wide interest. Previous VQA methods generally use Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to extract visual and textual features respectively, and then explore the correlation between these two features to infer the answer. However, CNN mainly focuses on extracting local spatial information, while RNN concentrates on sequential structure and long-range dependencies; it is difficult for either to integrate local features with their global dependencies and thereby learn more effective representations of the image and question. To address this problem, we propose a novel model for VQA, Dual Self-Attention with Co-Attention networks (DSACA), which models the internal dependencies of the spatial and sequential structures using a newly proposed self-attention mechanism. Specifically, DSACA contains three sub-modules. The visual self-attention module selectively aggregates the visual features at each region as a weighted sum of the features at all positions. The textual self-attention module automatically emphasizes interdependent word features by integrating associated features among the words of the sentence. The visual-textual co-attention module then explores the close correlation between the visual and textual features learned by the self-attention modules. The three modules are integrated into an end-to-end framework to infer the answer. Extensive experiments on three widely used VQA datasets confirm the favorable performance of DSACA compared with state-of-the-art methods. © 2021 Elsevier Ltd. All rights reserved.
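To make the abstract's description concrete, below is a minimal PyTorch sketch of the three sub-modules it names: self-attention computed as a weighted sum over all positions (applied to both image regions and question words), followed by a co-attention step that correlates the two modalities. The class names, dimensions, the max-pooled affinity, and the elementwise fusion are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    # Aggregates the feature at each position as a weighted sum of the
    # features at all positions (shared design for image regions and words).
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, positions, dim)
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = q @ k.transpose(1, 2) / x.size(-1) ** 0.5  # position-to-position affinities
        return F.softmax(scores, dim=-1) @ v  # every position attends to all positions

class CoAttention(nn.Module):
    # Correlates the self-attended visual and textual features and fuses
    # them into a single vector for answer prediction.
    def __init__(self, dim):
        super().__init__()
        self.affinity = nn.Linear(dim, dim)

    def forward(self, vis, txt):  # vis: (b, regions, d), txt: (b, words, d)
        c = torch.tanh(self.affinity(vis) @ txt.transpose(1, 2))  # (b, regions, words)
        vis_w = F.softmax(c.max(dim=2).values, dim=-1)  # attention weight per region
        txt_w = F.softmax(c.max(dim=1).values, dim=-1)  # attention weight per word
        vis_vec = (vis_w.unsqueeze(-1) * vis).sum(dim=1)  # (b, d)
        txt_vec = (txt_w.unsqueeze(-1) * txt).sum(dim=1)  # (b, d)
        return vis_vec * txt_vec  # elementwise fusion (an assumption, not the paper's exact choice)

# Hypothetical end-to-end use: 36 region features and 14 word features per
# example, classified over a fixed answer vocabulary of 3000 candidates.
regions, words = torch.randn(2, 36, 512), torch.randn(2, 14, 512)
sa_v, sa_t = SelfAttention(512), SelfAttention(512)
fused = CoAttention(512)(sa_v(regions), sa_t(words))
logits = nn.Linear(512, 3000)(fused)  # answer scores, shape (2, 3000)
```

In this sketch the same SelfAttention design serves both modalities, mirroring the abstract's point that spatial and sequential internal dependencies are each modeled by self-attention before co-attention correlates them; the paper's actual fusion and classifier details may differ.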
Pages: 13