Improving abstractive summarization of legal rulings through textual entailment

Cited by: 0
Authors
Diego de Vargas Feijo
Viviane P. Moreira
Affiliation
[1] Universidade Federal do Rio Grande do Sul, Institute of Informatics
Keywords
Legal ruling summarization; Abstractive summarizer; Content digest; Legal case brief; Summary writing; Abstract generator; Automatic text summary; Textual entailment; Fact checking;
DOI
Not available
Abstract
The standard approach to abstractive text summarization is an encoder-decoder architecture: the encoder captures the general meaning of the source text, and the decoder generates the final summary. While this approach can compose summaries that resemble human writing, they may contain unrelated or unfaithful information. This problem, called “hallucination”, is a serious issue for legal texts because legal practitioners rely on these summaries when looking for precedents to support legal arguments. Another concern is that legal documents tend to be very long and may not fit entirely into the encoder. We propose a method, LegalSumm, that addresses these issues by creating different “views” over the source text, training summarization models to generate independent candidate summaries, and applying an entailment module to judge how faithful each candidate summary is to the source text. We show that the proposed approach selects candidate summaries that improve ROUGE scores on all evaluated metrics.
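The selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the real system uses a trained textual-entailment model, whereas the `entailment_score` function below is a hypothetical token-overlap proxy standing in for such a model, and the function names are assumptions for this sketch.

```python
def entailment_score(premise: str, hypothesis: str) -> float:
    """Toy stand-in for an entailment model: the fraction of hypothesis
    tokens that also appear in the premise (a crude faithfulness proxy)."""
    premise_tokens = set(premise.lower().split())
    hypothesis_tokens = hypothesis.lower().split()
    if not hypothesis_tokens:
        return 0.0
    return sum(t in premise_tokens for t in hypothesis_tokens) / len(hypothesis_tokens)


def select_summary(source_views: list[str], candidates: list[str]) -> str:
    """Pick the candidate summary judged most faithful to the source,
    scoring each candidate against every 'view' of the source text and
    keeping its best entailment score."""
    def faithfulness(candidate: str) -> float:
        return max(entailment_score(view, candidate) for view in source_views)
    return max(candidates, key=faithfulness)


views = ["the court dismissed the appeal for lack of standing"]
candidates = [
    "the appeal was dismissed for lack of standing",   # faithful
    "the defendant was acquitted of all charges",      # hallucinated
]
print(select_summary(views, candidates))  # prints the faithful candidate
```

Swapping `entailment_score` for a real NLI model's entailment probability gives the selection scheme the abstract describes, while the surrounding logic stays the same.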
Pages: 91–113
Page count: 22