Enhancing Recurrent Neural Networks with Positional Attention for Question Answering

Cited by: 61
Authors
Chen, Qin [1 ]
Hu, Qinmin [1 ]
Huang, Jimmy Xiangji [2 ]
He, Liang [1 ,3 ]
An, Weijie [1 ]
Affiliations
[1] East China Normal Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
[2] York Univ, Informat Retrieval & Knowledge Management Res Lab, Toronto, ON, Canada
[3] Shanghai Engn Res Ctr Intelligent Serv Robot, Shanghai, Peoples R China
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
DOI
10.1145/3077136.3080699
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Attention-based recurrent neural networks (RNNs) have shown great success for question answering (QA) in recent years. Although significant improvements have been achieved over non-attentive models, position information has not been well studied within the attention-based framework. Motivated by the effectiveness of using word positional context to enhance information retrieval, we assume that if a word from the question (i.e., a question word) occurs in an answer sentence, its neighboring words should be given more attention, since they intuitively carry more valuable information for question answering than words farther away. Based on this assumption, we propose a positional attention based RNN model, which incorporates the positional context of the question words into the answers' attentive representations. Experiments on two benchmark datasets show the clear advantages of our proposed model. Specifically, we achieve a maximum improvement of 8.83% over the classical attention based RNN model in terms of mean average precision. Furthermore, our model is comparable to, if not better than, the state-of-the-art approaches for question answering.
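The abstract describes weighting answer words by their proximity to question-word occurrences before computing attention. Below is a minimal NumPy sketch of that idea, assuming a Gaussian-shaped positional weight and plain dot-product attention; the function name `positional_attention`, the bandwidth `sigma`, and the toy embeddings are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def positional_attention(question_tokens, answer_tokens, q_vecs, a_vecs, sigma=2.0):
    """Toy positional attention over an answer sentence (illustrative sketch).

    Each answer position's distance to the nearest occurrence of a question
    word in the answer is turned into a Gaussian positional weight, which then
    modulates a standard dot-product attention score.
    """
    qset = set(question_tokens)
    # Positions in the answer where any question word occurs.
    hits = [i for i, tok in enumerate(answer_tokens) if tok in qset]

    # Distance from each answer position to its nearest question-word hit.
    if hits:
        dist = np.array([min(abs(i - h) for h in hits) for i in range(len(answer_tokens))])
    else:
        dist = np.zeros(len(answer_tokens))  # no overlap: fall back to uniform weights

    pos_weight = np.exp(-dist ** 2 / (2.0 * sigma ** 2))  # closer to a question word -> larger weight

    # Plain dot-product attention between a question summary and answer word vectors.
    q_summary = q_vecs.mean(axis=0)                # (d,)
    scores = a_vecs @ q_summary                    # (len_a,)
    scores = scores + np.log(pos_weight + 1e-8)    # inject positional context before the softmax

    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                    # attention distribution over answer words
    return alpha @ a_vecs                          # positionally attended answer representation


# Tiny usage example with random embeddings (hypothetical data).
rng = np.random.default_rng(0)
q = "when was the bridge built".split()
a = "the bridge was built in 1937 by the state".split()
q_vecs = rng.normal(size=(len(q), 8))
a_vecs = rng.normal(size=(len(a), 8))
print(positional_attention(q, a, q_vecs, a_vecs).shape)  # (8,)
```

Adding the log of the positional weight to the scores is one simple way to bias the softmax toward words near question-word occurrences; the paper itself may combine the positional context with the attentive representation differently.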
Pages: 993-996
Number of pages: 4
Related Papers
50 records in total
  • [1] BCA: Bilinear Convolutional Neural Networks and Attention Networks for legal question answering
    Zhang, Haiguang
    Zhang, Tongyue
    Cao, Faxin
    Wang, Zhizheng
    Zhang, Yuanyu
    Sun, Yuanyuan
    Vicente, Mark Anthony
    [J]. AI OPEN, 2022, 3 : 172 - 181
  • [2] Enhancing the Recurrent Neural Networks with Positional Gates for Sentence Representation
    Song, Yang
    Hu, Wenxin
    Chen, Qin
    Hu, Qinmin
    He, Liang
    [J]. NEURAL INFORMATION PROCESSING (ICONIP 2018), PT I, 2018, 11301 : 511 - 521
  • [3] Question Answering with Hierarchical Attention Networks
    Alpay, Tayfun
    Heinrich, Stefan
    Nelskamp, Michael
    Wermter, Stefan
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [4] Stacked Attention Networks for Image Question Answering
    Yang, Zichao
    He, Xiaodong
    Gao, Jianfeng
    Deng, Li
    Smola, Alex
    [J]. 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 21 - 29
  • [5] Enhancing Key-Value Memory Neural Networks for Knowledge Based Question Answering
    Xu, Kun
    Lai, Yuxuan
    Feng, Yansong
    Wang, Zhiguo
    [J]. 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 2937 - 2947
  • [6] Semantically Corroborating Neural Attention for Biomedical Question Answering
    Oita, Marilena
    Vani, K.
    Oezdemir-Zaech, Fatma
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 670 - 685
  • [7] Ask Your TV: Real-Time Question Answering with Recurrent Neural Networks
    Ture, Ferhan
    Jojic, Oliver
    [J]. SIGIR'16: PROCEEDINGS OF THE 39TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2016, : 457 - 458
  • [8] Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering
    Li, Xiangpeng
    Song, Jingkuan
    Gao, Lianli
    Liu, Xianglong
    Huang, Wenbing
    He, Xiangnan
    Gan, Chuang
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 8658 - 8665
  • [9] Hierarchical Recurrent Contextual Attention Network for Video Question Answering
    Zhou, Fei
    Han, Yahong
    [J]. ARTIFICIAL INTELLIGENCE, CICAI 2022, PT II, 2022, 13605 : 280 - 290
  • [10] DRAU: Dual Recurrent Attention Units for Visual Question Answering
    Osman, Ahmed
    Samek, Wojciech
    [J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2019, 185 : 24 - 30