A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models

Cited by: 0
|
Authors
Song, Yuan-Feng [1 ]
He, Yuan-Qin [1 ]
Zhao, Xue-Fang [1 ]
Gu, Han-Lin [1 ]
Jiang, Di [1 ]
Yang, Hai-Jun [1 ]
Fan, Li-Xin [1 ]
Affiliations
[1] AI Group, WeBank Co., Ltd., Shenzhen 518000, China
Keywords
Contrastive learning; Modeling languages; Natural language processing systems; Self-supervised learning; Semi-supervised learning; Zero-shot learning
DOI
10.1007/s11390-024-4058-8
Abstract
The rapid emergence of large language models (LLMs) has shifted the community from single-task-oriented natural language processing (NLP) research to a holistic, end-to-end, multi-task learning paradigm. Along this line of research, LLM-based prompting methods have attracted much attention, partly due to the technological advantages brought by prompt engineering (PE) and partly due to the underlying NLP principles disclosed by various prompting methods. Traditional supervised learning usually requires training a model on labeled data and then making predictions. In contrast, PE methods directly exploit the capabilities of existing LLMs (e.g., GPT-3 and GPT-4) by composing appropriate prompts, especially in few-shot or zero-shot scenarios. Facing the abundance of studies related to prompting and the ever-evolving nature of this field, this article aims to 1) illustrate a novel perspective that reviews existing PE methods within the well-established framework of communication theory, 2) facilitate a deeper understanding of the developing trends of existing PE methods used in three typical tasks, and 3) shed light on promising research directions for future PE methods. © Institute of Computing Technology, Chinese Academy of Sciences 2024.
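To make the abstract's zero-shot versus few-shot distinction concrete, the short Python sketch below shows how such prompts might be composed before being sent to an LLM. It is a minimal illustration under assumed details: the sentiment task, the labels, and the helper compose_prompt are invented for exposition and do not come from the paper itself.

from __future__ import annotations

def compose_prompt(task_instruction: str, query: str,
                   demonstrations: list[tuple[str, str]] | None = None) -> str:
    """Compose a zero-shot prompt (no demonstrations) or a few-shot
    prompt (instruction + labeled demonstrations + query)."""
    parts = [task_instruction]
    for text, label in demonstrations or []:
        parts.append(f"Input: {text}\nOutput: {label}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Illustrative sentiment-classification task (an assumption for this
# sketch; the surveyed paper does not prescribe a specific task).
instruction = "Classify the sentiment of the input as positive or negative."
query = "The plot dragged on forever."

# Zero-shot: the LLM receives only the instruction and the query.
zero_shot_prompt = compose_prompt(instruction, query)

# Few-shot: labeled demonstrations precede the query, steering the
# model toward the expected input/output format.
few_shot_prompt = compose_prompt(
    instruction,
    query,
    demonstrations=[
        ("A delightful, heartfelt film.", "positive"),
        ("The acting was wooden and dull.", "negative"),
    ],
)

print(zero_shot_prompt)
print("-" * 20)
print(few_shot_prompt)

Either prompt string would then be submitted to an LLM such as GPT-3 or GPT-4; the few-shot variant simply prepends labeled demonstrations, which is one of the main levers PE methods tune.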
Pages: 984-1004
Page count: 20
Related Papers (50 in total)
  • [31] CoLE: A collaborative legal expert prompting framework for large language models in law
    Li, Bo
    Fan, Shuang
    Zhu, Shaolin
    Wen, Lijie
    KNOWLEDGE-BASED SYSTEMS, 2025, 311
  • [32] Who Wrote it and Why? Prompting Large-Language Models for Authorship Verification
    Hung, Chia-Yu
    Hu, Zhiqiang
    Hu, Yujia
    Lee, Roy Ka-Wei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023: 14078-14084
  • [33] Instructing and Prompting Large Language Models for Explainable Cross-domain Recommendations
    Petruzzelli, Alessandro
    Musto, Cataldo
    Laraspata, Lucrezia
    Rinaldi, Ivan
    de Gemmis, Marco
    Lops, Pasquale
    Semeraro, Giovanni
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024: 298-308
  • [34] MEEP: Is this Engaging? Prompting Large Language Models for Dialogue Evaluation in Multilingual Settings
    Ferron, Amila
    Shore, Amber
    Mitra, Ekata
    Agrawal, Ameeta
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023: 2078-2100
  • [35] Fairness-guided Few-shot Prompting for Large Language Models
    Ma, Huan
    Zhang, Changqing
    Bian, Yatao
    Liu, Lemao
    Zhang, Zhirui
    Zhao, Peilin
    Zhang, Shu
    Fu, Huazhu
    Hu, Qinghua
    Wu, Bingzhe
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [36] INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair
    Wang, Hanbin
    Liu, Zhenghao
    Wang, Shuo
    Cui, Ganqu
    Ding, Ning
    Liu, Zhiyuan
    Yu, Ge
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024: 2081-2107
  • [37] MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models
    Wen, Yilin
    Wang, Zifeng
    Sun, Jimeng
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024: 10370-10388
  • [38] Prompting Language Models for Linguistic Structure
    Blevins, Terra
    Gonen, Hila
    Zettlemoyer, Luke
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023: 6649-6663
  • [39] On Political Theory and Large Language Models
    Rodman, Emma
    POLITICAL THEORY, 2024, 52 (04): 548-580
  • [40] Perspective: Large Language Models in Applied Mechanics
    Brodnik, Neal R.
    Carton, Samuel
    Muir, Caelin
    Ghosh, Satanu
    Downey, Doug
    Echlin, McLean P.
    Pollock, Tresa M.
    Daly, Samantha
    JOURNAL OF APPLIED MECHANICS-TRANSACTIONS OF THE ASME, 2023, 90 (10)