Automatic Text Classification With Large Language Models: A Review of openai for Zero- and Few-Shot Classification

Cited by: 0
Authors
Anglin, Kylie L. [1 ]
Ventura, Claudia [1 ]
Affiliations
[1] Univ Connecticut, Storrs, CT 06269 USA
Keywords
large language models; LLMs; artificial intelligence; openai; educational measurement
DOI
10.3102/10769986241279927
Chinese Library Classification (CLC)
G40 [Education]
Discipline Classification Code(s)
040101; 120403
Abstract
While natural language documents, such as intervention transcripts and participant writing samples, can provide highly nuanced insights into educational and psychological constructs, researchers often find these materials difficult and expensive to analyze. Recent developments in machine learning, however, have allowed social scientists to harness the power of artificial intelligence for complex data categorization tasks. One approach, supervised learning, supports high-performance categorization but still requires a large, hand-labeled training corpus, which can be costly. An alternative approach, zero- and few-shot classification with pretrained large language models, offers a cheaper yet compelling option. This article considers the application of zero-shot and few-shot classification in educational research. We provide an overview of large language models, a step-by-step tutorial on using the Python openai package for zero-shot and few-shot classification, and a discussion of relevant research considerations for social scientists.
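The tutorial described in the abstract centers on the Python openai package. As a rough sketch of the kind of zero-shot classification call such a tutorial walks through (the model name, label set, and prompt wording below are illustrative assumptions, not the authors' code), a minimal example looks like:

    # Minimal zero-shot classification sketch with the openai package.
    # Model name, labels, and prompt wording are hypothetical, chosen for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    LABELS = ["on-task", "off-task"]  # hypothetical coding scheme

    def classify(text: str) -> str:
        """Ask the model to assign one label to a transcript excerpt."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; substitute as needed
            messages=[
                {"role": "system",
                 "content": "Classify the excerpt as one of: "
                            + ", ".join(LABELS)
                            + ". Reply with the label only."},
                {"role": "user", "content": text},
            ],
            temperature=0,  # reduce randomness for coding tasks
        )
        return response.choices[0].message.content.strip()

    print(classify("The student asked the tutor to explain the fractions problem again."))

A few-shot variant of the same sketch would simply prepend a handful of hand-labeled excerpts as additional user/assistant message pairs before the excerpt to be coded.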
Pages: 23