Assessing the Impact of Prompt Strategies on Text Summarization with Large Language Models

Citations: 0
Authors
Onan, Aytug [1 ]
Alhumyani, Hesham [2 ]
Affiliations
[1] Izmir Katip Celebi Univ, Fac Engn & Architecture, Dept Comp Engn, TR-35620 Izmir, Turkiye
[2] Taif Univ, Coll Comp & Informat Technol, Dept Comp Engn, POB 11099, Taif 21944, Saudi Arabia
Keywords
Large Language Models; Text Summarization; Prompt Strategies; Zero-shot Learning; One-shot Learning; Few-shot Learning; ROUGE; BLEU; BERTScore;
DOI
10.1007/978-3-031-76273-4_4
CLC Classification
TP39 [Computer Applications];
Discipline Codes
081203 ; 0835 ;
Abstract
The advent of large language models (LLMs) has significantly advanced the field of text summarization, enabling the generation of coherent and contextually accurate summaries. This paper introduces a comprehensive framework for evaluating the performance of state-of-the-art LLMs in text summarization, with a particular focus on the impact of various prompt strategies, including zero-shot, one-shot, and few-shot learning. Our framework systematically examines how these prompting techniques influence summarization quality across diverse datasets, namely CNN/Daily Mail, XSum, TAC08, and TAC09. To provide a robust evaluation, we employ a range of intrinsic metrics such as ROUGE, BLEU, and BERTScore. These metrics allow us to quantify the quality of the generated summaries in terms of precision, recall, and semantic similarity. We evaluated three prominent LLMs: GPT-3, GPT-4, and LLaMA, each configured to optimize summarization performance under different prompting strategies. Our results reveal significant variations in performance depending on the chosen prompting strategy, highlighting the strengths and limitations of each approach. Furthermore, this study provides insights into the optimal conditions for employing different prompt strategies, offering practical guidelines for researchers and practitioners aiming to leverage LLMs for text summarization tasks. By delivering a thorough comparative analysis, we contribute to the understanding of how to maximize the potential of LLMs in generating high-quality summaries, ultimately advancing the field of natural language processing.
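The abstract describes two moving parts: constructing zero-, one-, and few-shot prompts, and scoring the resulting summaries with intrinsic metrics. The sketch below illustrates both under stated assumptions; the prompt template and the simplified unigram ROUGE-1 are illustrative stand-ins, not the authors' exact setup (the paper presumably uses standard library implementations of ROUGE, BLEU, and BERTScore).

```python
from collections import Counter


def build_prompt(article: str, examples: list[tuple[str, str]]) -> str:
    """Zero-shot when `examples` is empty; one-/few-shot otherwise,
    with each (article, summary) pair used as an in-context demonstration."""
    parts = ["Summarize the following article in one or two sentences.\n"]
    for src, summ in examples:
        parts.append(f"Article: {src}\nSummary: {summ}\n")
    parts.append(f"Article: {article}\nSummary:")
    return "\n".join(parts)


def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: multiset unigram overlap between candidate
    and reference, with no stemming or stopword handling."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


# One-shot prompt: a single demonstration pair precedes the target article.
prompt = build_prompt(
    "LLMs have advanced abstractive text summarization.",
    [("Storms hit the coast overnight.", "Coastal storms were reported.")],
)

# Unigram overlap of 3 tokens gives precision 1.0, recall 0.75, F1 = 6/7.
score = rouge1_f1("llms advance summarization",
                  "llms advance text summarization")
```

Moving from zero-shot to few-shot only changes the `examples` list passed to `build_prompt`; the scoring side stays fixed, which is what lets the paper attribute score differences to the prompting strategy alone.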
Pages: 41-55
Page count: 15
Related Papers
50 in total
  • [1] The Effect of Prompt Types on Text Summarization Performance With Large Language Models
    Borhan, Iffat
    Bajaj, Akhilesh
    JOURNAL OF DATABASE MANAGEMENT, 2024, 35 (01)
  • [2] Enhancing Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
    Nan, Linyong
    Zhao, Yilun
    Zhou, Weijin
    Rigi, Narutatsu
    Tae, Jaesung
    Zhang, Ellen
    Cohan, Arman
    Radev, Dragomir
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 14935 - 14956
  • [3] Text Summarization in Aviation Safety: A Comparative Study of Large Language Models
    Emmons, Jonathan
    Sharma, Taneesha
    Salloum, Mariam
    Matthews, Bryan
    AIAA AVIATION FORUM AND ASCEND 2024, 2024,
  • [4] Method for judicial document summarization by combining prompt learning and Qwen large language models
    Li, Jiayi
    Huang, Ruizhang
    Chen, Yanping
    Lin, Chuan
    Qin, Yongbin
    Qinghua Daxue Xuebao/Journal of Tsinghua University, 2024, 64 (12): : 2007 - 2018
  • [5] Prompt text classifications with transformer models! An exemplary introduction to prompt-based learning with large language models
    Mayer, Christian W. F.
    Ludwig, Sabrina
    Brandt, Steffen
    JOURNAL OF RESEARCH ON TECHNOLOGY IN EDUCATION, 2023, 55 (01) : 125 - 141
  • [6] Implications of Large Language Models for OSINT: Assessing the Impact on Information Acquisition and Analyst Expertise in Prompt Engineering
    Cerny, Jan
    PROCEEDINGS OF THE 23RD EUROPEAN CONFERENCE ON CYBER WARFARE AND SECURITY, ECCWS 2024, 2024, 23 : 116 - 124
  • [7] Extractive Text Summarization Models for Urdu Language
    Nawaz, Ali
    Bakhtyar, Maheen
    Baber, Junaid
    Ullah, Ihsan
    Noor, Waheed
    Basit, Abdul
    INFORMATION PROCESSING & MANAGEMENT, 2020, 57 (06)
  • [8] Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints
    Lu, Albert
    Zhang, Hongxin
    Zhang, Yanzhe
    Wang, Xuezhi
    Yang, Diyi
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1982 - 2008
  • [9] Investigating the Impact of Prompt Engineering on the Performance of Large Language Models for Standardizing Obstetric Diagnosis Text: Comparative Study
    Wang, Lei
    Bi, Wenshuai
    Zhao, Suling
    Ma, Yinyao
    Lv, Longting
    Meng, Chenwei
    Fu, Jingru
    Lv, Hanlin
    JMIR FORMATIVE RESEARCH, 2024, 8
  • [10] Prompt Optimization in Large Language Models
    Sabbatella, Antonio
    Ponti, Andrea
    Giordani, Ilaria
    Candelieri, Antonio
    Archetti, Francesco
    MATHEMATICS, 2024, 12 (06)