Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection

Cited by: 0
Authors:
Chen, Sihao [1]
Zhang, Fan [2]
Sone, Kazoo [2]
Roth, Dan [1]
Affiliations:
[1] Univ Penn, Philadelphia, PA, USA
[2] Google, Atlanta, GA, USA
Keywords: (none)
DOI: not available
CLC Number: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Despite significant progress in neural abstractive summarization, recent studies have shown that current models are prone to generating summaries that are unfaithful to the original context. To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct extrinsic hallucinations (i.e., information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries, where named entities and quantities in the generated summary are replaced with ones of compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that the proposed method is effective in identifying and correcting extrinsic hallucinations. We analyze the typical hallucination phenomena produced by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
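The abstract outlines a two-stage post-processing pipeline: generate contrast candidates by swapping named entities and quantities in the summary with same-type entities from the source, then select the most faithful candidate with a discriminative model. The following is a minimal sketch of that flow, not the authors' implementation: it assumes spaCy (with the "en_core_web_sm" model) for entity typing, uses a simple token-overlap heuristic as a stand-in for the learned discriminator, and the function names are illustrative.

```python
# Sketch of contrast candidate generation and selection (illustrative only).
# Assumes spaCy and the "en_core_web_sm" model are installed; the scoring
# function below is a placeholder for the paper's learned correction model.
import spacy

nlp = spacy.load("en_core_web_sm")


def generate_candidates(source: str, summary: str, max_candidates: int = 20):
    """Create alternative summaries by replacing each entity in the summary
    with source entities of the same semantic type."""
    src_ents = {}
    for ent in nlp(source).ents:
        src_ents.setdefault(ent.label_, set()).add(ent.text)

    candidates = [summary]  # keep the original summary as a candidate
    for ent in nlp(summary).ents:
        for replacement in src_ents.get(ent.label_, ()):
            if replacement != ent.text:
                candidates.append(summary.replace(ent.text, replacement))
            if len(candidates) >= max_candidates:
                return candidates
    return candidates


def score_faithfulness(source: str, candidate: str) -> float:
    """Placeholder discriminator: fraction of candidate tokens that also
    appear in the source (the paper trains a classifier instead)."""
    src_tokens = {t.lower_ for t in nlp(source) if t.is_alpha}
    cand_tokens = [t.lower_ for t in nlp(candidate) if t.is_alpha]
    if not cand_tokens:
        return 0.0
    return sum(t in src_tokens for t in cand_tokens) / len(cand_tokens)


def correct_summary(source: str, summary: str) -> str:
    """Select the highest-scoring candidate as the final output summary."""
    candidates = generate_candidates(source, summary)
    return max(candidates, key=lambda c: score_faithfulness(source, c))


# Usage: correct_summary(article_text, model_summary) returns the candidate
# judged most consistent with the source document.
```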
Pages: 5935-5941
Number of pages: 7