Guided Neural Language Generation for Abstractive Summarization using Abstract Meaning Representation

Cited by: 0
Authors
Hardy [1 ]
Vlachos, Andreas [1 ]
Institutions
[1] Univ Sheffield, Sheffield, S Yorkshire, England
Funding
EU Horizon 2020; UK Engineering and Physical Sciences Research Council;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent work on abstractive summarization has made progress with neural encoder-decoder architectures. However, such models are often challenged due to their lack of explicit semantic modeling of the source document and its summary. In this paper, we extend previous work on abstractive summarization using Abstract Meaning Representation (AMR) with a neural language generation stage which we guide using the source document. We demonstrate that this guidance improves summarization results by 7.4 and 10.5 points in ROUGE-2 using gold standard AMR parses and parses obtained from an off-the-shelf parser respectively. We also find that the summarization performance using the latter is 2 ROUGE-2 points higher than that of a well-established neural encoder-decoder approach trained on a larger dataset.
Pages: 768-773
Number of pages: 6
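
The gains in the abstract above are reported in ROUGE-2, a bigram-overlap metric that is standard in summarization evaluation. As context only, the following is a minimal sketch of an unsmoothed ROUGE-2 F1 score in Python; it is not the evaluation code used in the paper, and published scores are normally computed with the official ROUGE toolkit, which applies additional preprocessing such as stemming, so values will differ. The function names and example strings are illustrative.

```python
from collections import Counter


def bigrams(tokens):
    """Return the multiset of adjacent token pairs in a token list."""
    return Counter(zip(tokens, tokens[1:]))


def rouge_2(candidate, reference):
    """Unsmoothed ROUGE-2 F1: clipped bigram overlap between two strings.

    Both strings are lowercased and whitespace-tokenised here for brevity;
    toolkit implementations typically add stemming and other preprocessing.
    """
    cand = bigrams(candidate.lower().split())
    ref = bigrams(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped bigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # Illustrative strings only, not data from the paper.
    reference = "the guided model produces a more faithful summary of the document"
    candidate = "guided generation produces a more faithful summary of the source"
    print(f"ROUGE-2 F1 = {rouge_2(candidate, reference):.3f}")
```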