Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection

Cited by: 0
Authors
Chen, Sihao [1 ]
Zhang, Fan [2 ]
Sone, Kazoo [2 ]
Roth, Dan [1 ]
Affiliations
[1] Univ Penn, Philadelphia, PA USA
[2] Google, Atlanta, GA USA
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Despite significant progress in neural abstractive summarization, recent studies have shown that current models are prone to generating summaries that are unfaithful to the original context. To address this issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique for correcting extrinsic hallucinations (i.e., information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries in which named entities and quantities in the generated summary are replaced with ones of compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We also analyze the typical hallucination phenomena produced by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
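The candidate-generation step described in the abstract, swapping entities in a summary with source-document entities of the same semantic type, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes entity spans and their types (e.g., PERSON, GPE, QUANTITY) have already been extracted by some upstream NER system, and it uses simple string replacement rather than span-level editing.

```python
def generate_contrast_candidates(summary, summary_entities, source_entities):
    """Generate contrast candidate summaries by replacing each entity in the
    summary with source-document entities of the same semantic type.

    summary_entities / source_entities: lists of (surface_form, semantic_type)
    tuples, assumed to come from a named entity recognizer (hypothetical input
    format for this sketch).
    """
    candidates = []
    for ent, etype in summary_entities:
        for src_ent, src_type in source_entities:
            # Only swap in entities of a compatible semantic type,
            # and skip the trivial replacement of an entity with itself.
            if src_type == etype and src_ent != ent:
                candidates.append(summary.replace(ent, src_ent))
    return candidates
```

For example, given the summary "Obama visited Paris." with entities [("Obama", "PERSON"), ("Paris", "GPE")] and source entities [("Biden", "PERSON"), ("London", "GPE")], the sketch yields the contrast candidates "Biden visited Paris." and "Obama visited London.". In the paper's setup, a learned discriminative model would then score the original summary against such candidates and select the most faithful one.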
Pages: 5935-5941 (7 pages)