Enhancing Recurrent Neural Networks with Positional Attention for Question Answering

Cited: 61
Authors:
Chen, Qin [1]
Hu, Qinmin [1]
Huang, Jimmy Xiangji [2]
He, Liang [1,3]
An, Weijie [1]
Affiliations:
[1] East China Normal Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
[2] York Univ, Informat Retrieval & Knowledge Management Res Lab, Toronto, ON, Canada
[3] Shanghai Engn Res Ctr Intelligent Serv Robot, Shanghai, Peoples R China
Funding:
Natural Sciences and Engineering Research Council of Canada
Keywords:
DOI: 10.1145/3077136.3080699
CLC Classification: TP [Automation & Computer Technology]
Discipline Code: 0812
Abstract:
Attention-based recurrent neural networks (RNN) have shown great success for question answering (QA) in recent years. Although significant improvements have been achieved over non-attentive models, positional information has not been well studied within the attention-based framework. Motivated by the effectiveness of using word positional context to enhance information retrieval, we assume that if a word in the question (i.e., question word) occurs in an answer sentence, the neighboring words should be given more attention, since they intuitively contain more valuable information for question answering than those far away. Based on this assumption, we propose a positional attention based RNN model, which incorporates the positional context of the question words into the answers' attentive representations. Experiments on two benchmark datasets demonstrate the clear advantages of our proposed model. Specifically, we achieve a maximum improvement of 8.83% over the classical attention based RNN model in terms of mean average precision. Furthermore, our model is comparable to, if not better than, the state-of-the-art approaches for question answering.
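The abstract describes the positional-attention idea only at a high level. As a rough illustration of how a position-aware attention weight over an answer sentence could be computed, the sketch below combines a content-based score with a positional boost around question-word occurrences. This is a minimal sketch under assumed choices: the function name, the Gaussian kernel, the `sigma` parameter, and dot-product content scoring are hypothetical and are not taken from the paper's actual formulation.

```python
import numpy as np

def positional_attention_weights(answer_tokens, question_tokens,
                                 hidden_states, question_vector, sigma=2.0):
    """Illustrative position-aware attention over an answer sentence.

    answer_tokens:   list of tokens in the answer sentence
    question_tokens: list of tokens in the question
    hidden_states:   (len(answer_tokens), d) RNN outputs for the answer
    question_vector: (d,) question representation
    sigma:           width of the assumed Gaussian positional kernel
    """
    n = len(answer_tokens)
    positions = np.arange(n)

    # Positions in the answer where a question word occurs.
    hits = [i for i, tok in enumerate(answer_tokens)
            if tok in set(question_tokens)]

    # Positional boost: tokens close to a question-word occurrence score higher.
    if hits:
        dist = np.min(np.abs(positions[:, None] - np.array(hits)[None, :]), axis=1)
        pos_score = np.exp(-dist ** 2 / (2 * sigma ** 2))
    else:
        pos_score = np.ones(n)

    # Content-based attention score (dot product with the question vector).
    content_score = hidden_states @ question_vector

    # Combine content and positional evidence, then normalize with a softmax.
    logits = content_score + np.log(pos_score + 1e-8)
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()

if __name__ == "__main__":
    answer = "the eiffel tower was completed in 1889".split()
    question = "when was the eiffel tower completed".split()
    rng = np.random.default_rng(0)
    h = rng.normal(size=(len(answer), 8))
    q = rng.normal(size=8)
    print(positional_attention_weights(answer, question, h, q).round(3))
```

In such a design, the attentive answer representation would be the weighted sum of the RNN hidden states under these weights; the paper's actual scoring and combination functions may differ.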
Pages: 993-996 (4 pages)