A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models

Cited by: 0
Authors
Song, Yuan-Feng [1 ]
He, Yuan-Qin [1 ]
Zhao, Xue-Fang [1 ]
Gu, Han-Lin [1 ]
Jiang, Di [1 ]
Yang, Hai-Jun [1 ]
Fan, Li-Xin [1 ]
Affiliations
[1] AI Group, WeBank Co., Ltd., Shenzhen 518000, China
Keywords
Contrastive learning; Modeling languages; Natural language processing systems; Self-supervised learning; Semi-supervised learning; Zero-shot learning
DOI: 10.1007/s11390-024-4058-8
Abstract
The springing up of large language models (LLMs) has shifted the community from single-task-oriented natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm. Along this line of research, LLM-based prompting methods have attracted much attention, partially due to the technological advantages brought by prompt engineering (PE) as well as the underlying NLP principles disclosed by various prompting methods. Traditional supervised learning usually requires training a model on labeled data and then making predictions. In contrast, PE methods directly exploit the capabilities of existing LLMs (e.g., GPT-3 and GPT-4) by composing appropriate prompts, especially under few-shot or zero-shot scenarios. Facing the abundance of studies on prompting and the ever-evolving nature of this field, this article aims to 1) present a novel perspective that reviews existing PE methods within the well-established framework of communication theory, 2) facilitate a deeper understanding of the development trends of existing PE methods used in three typical tasks, and 3) shed light on promising directions for future PE methods. © Institute of Computing Technology, Chinese Academy of Sciences 2024.
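The contrast drawn in the abstract between supervised training and prompt composition can be made concrete with a minimal sketch. The sentiment-classification task, labels, and demonstration examples below are illustrative assumptions, not taken from the paper: a zero-shot prompt carries only the task instruction, while a few-shot prompt prepends labeled demonstrations so the LLM infers the task format in-context rather than through parameter updates.

```python
# Minimal sketch of zero-shot vs. few-shot prompt composition.
# Task, labels, and examples are hypothetical; no LLM API is called.

def zero_shot_prompt(text: str) -> str:
    """Build a prompt that relies only on the task instruction."""
    return (
        "Classify the sentiment of the following review as "
        f"'positive' or 'negative'.\n\nReview: {text}\nSentiment:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    """Prepend labeled demonstrations as in-context examples."""
    demos = "\n\n".join(
        f"Review: {demo}\nSentiment: {label}" for demo, label in examples
    )
    return (
        "Classify the sentiment of each review as 'positive' or 'negative'.\n\n"
        f"{demos}\n\nReview: {text}\nSentiment:"
    )

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
print(zero_shot_prompt("A charming, quietly funny film."))
print(few_shot_prompt("A charming, quietly funny film.", examples))
```

Either string would then be sent to an existing LLM as-is; the model's completion after the final "Sentiment:" serves as the prediction, with no labeled-data training involved.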
Pages: 984-1004
Related Papers (50 in total)
  • [41] Cooper, Brittney; Soto, Gloria; Clarke, Michael T. Prompting for repair as a language teaching strategy for augmentative and alternative communication. AUGMENTATIVE AND ALTERNATIVE COMMUNICATION, 2021, 37(4): 251-260.
  • [42] Pan, Shirui; Zheng, Yizhen; Liu, Yixin; Murugesan, San. Integrating Graphs With Large Language Models: Methods and Prospects. IEEE INTELLIGENT SYSTEMS, 2024, 39(1): 64-68.
  • [43] Barn, Balbir S.; Barat, Souvik; Sandkuhl, Kurt. Adaptation of Enterprise Modeling Methods for Large Language Models. PRACTICE OF ENTERPRISE MODELING, POEM 2023, 2024, 497: 3-18.
  • [44] Wang, Xiaochen; Wang, Siyi. Exploring Chinese EFL learners' engagement with large language models: A self-determination theory perspective. LEARNING AND MOTIVATION, 2024, 87.
  • [45] Strachan, James W. A.; Albergo, Dalila; Borghini, Giulia; Pansardi, Oriana; Scaliti, Eugenio; Gupta, Saurabh; Saxena, Krati; Rufo, Alessandro; Panzeri, Stefano; Manzi, Guido; Graziano, Michael S. A.; Becchio, Cristina. Testing theory of mind in large language models and humans. NATURE HUMAN BEHAVIOUR, 2024, 8(7): 1285-1295.
  • [46] Prasad, Archiki; Hase, Peter; Zhou, Xiang; Bansal, Mohit. GRIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models. 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023: 3845-3864.
  • [47] Zhao, Shuo; Sun, Xin. Enabling controllable table-to-text generation via prompting large language models with guided planning. KNOWLEDGE-BASED SYSTEMS, 2024, 304.
  • [48] Hu, Zhongjian; Yang, Peng; Liu, Fengyuan; Meng, Yuan; Liu, Xingyu. Prompting Large Language Models with Knowledge-Injection for Knowledge-Based Visual Question Answering. BIG DATA MINING AND ANALYTICS, 2024, 7(3): 843-857.
  • [49] Hou, Xinyi; Zhao, Yanjie; Liu, Yue; Yang, Zhou; Wang, Kailong; Li, Li; Luo, Xiapu; Lo, David; Grundy, John; Wang, Haoyu. Large Language Models for Software Engineering: A Systematic Literature Review. ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33(8).
  • [50] Fan, Angela; Gokkaya, Beliz; Harman, Mark; Lyubarskiy, Mitya; Sengupta, Shubho; Yoo, Shin; Zhang, Jie M. Large Language Models for Software Engineering: Survey and Open Problems. 2023 IEEE/ACM INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: FUTURE OF SOFTWARE ENGINEERING, ICSE-FOSE, 2023: 31-53.