Abstractive Summarization with Keyword and Generated Word Attention

Cited by: 0
Authors
Wang, Qianlong [1 ]
Ren, Jiangtao [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ijcnn.2019.8852444
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Abstractive summarization is an important task in natural language processing. In previous work, sequence-to-sequence models have been widely used for this task. However, most current abstractive summarization models still suffer from two problems. First, it is difficult for these models to learn an accurate source context representation from redundant and noisy source text at each decoding step. Second, they suffer from an information loss problem, ignored in previous work, which arises because these models cannot effectively exploit previously generated words. To address these two problems, we propose a novel keyword and generated word attention model. Specifically, at each time step the proposed model first uses the decoder hidden state to capture a keyword context and a context of previously generated words. The model then uses these two contexts to build keyword-aware and generated-word-aware source context representations, respectively. The keyword context helps the model learn an accurate source context representation, while the generated word context alleviates the information loss problem. Experimental results on a popular Chinese social media dataset demonstrate that the proposed model outperforms the baselines and achieves state-of-the-art performance.
Pages: 8