Small Language Models Improve Giants by Rewriting Their Outputs

Cited by: 0
Authors
Vernikos, Giorgos [1,2,4]
Brazinskas, Arthur [3 ]
Adamek, Jakub [3 ]
Mallinson, Jonathan [3 ]
Severyn, Aliaksei [3 ]
Malmi, Eric [3 ]
Affiliations
[1] École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
[2] HEIG-VD, HES-SO, Yverdon, Switzerland
[3] Google Research, Mountain View, CA, USA
[4] Google, Mountain View, CA, USA
Funding
Swiss National Science Foundation
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Despite the impressive performance of large language models (LLMs), they often lag behind specialized models in various tasks. LLMs only use a fraction of the existing training data for in-context learning, while task-specific models harness the full dataset for fine-tuning. In this work, we tackle the problem of leveraging training data to improve the performance of LLMs without fine-tuning. Our approach directly targets LLM predictions without requiring access to their weights. We create a pool of candidates from the LLM through few-shot prompting and we employ a compact model, the LM-corrector (LMCOR), specifically trained to merge these candidates to produce an enhanced output. Our experiments on four natural language generation tasks demonstrate that even a small LMCOR model (250M) substantially improves the few-shot performance of LLMs (62B), matching and even outperforming standard fine-tuning. Furthermore, we illustrate the robustness of LMCOR against different prompts, thereby minimizing the need for extensive prompt engineering. Finally, we show that LMCOR can be seamlessly integrated with different LLMs at inference, serving as a plug-and-play module to improve their performance.
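The abstract outlines a two-stage pipeline: sample a pool of candidate outputs from a frozen LLM via few-shot prompting, then have a small trained corrector merge them into one improved output. Below is a minimal sketch of that flow using Hugging Face transformers. The model names (gpt2 as a stand-in for the 62B LLM, t5-base as a roughly 250M-parameter corrector stand-in), the sampling settings, and the candidate serialization format are all illustrative assumptions, not the paper's actual setup; in practice the corrector would first be fine-tuned on (input, candidates, reference) triples.

```python
# Minimal sketch of the candidate-generation + correction pipeline described
# in the abstract. Model names, prompts, and the serialization format are
# illustrative assumptions, not the paper's exact configuration.
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
)

# Step 1: a large LM produces a pool of candidates via few-shot prompting.
# (The paper uses a 62B-parameter LLM; "gpt2" is only a runnable stand-in.)
llm_name = "gpt2"
llm_tok = AutoTokenizer.from_pretrained(llm_name)
llm = AutoModelForCausalLM.from_pretrained(llm_name)

def generate_candidates(few_shot_prompt: str, num_candidates: int = 5) -> list[str]:
    """Sample a diverse pool of candidate outputs from the frozen LLM."""
    inputs = llm_tok(few_shot_prompt, return_tensors="pt")
    outputs = llm.generate(
        **inputs,
        do_sample=True,           # sampling yields a diverse candidate pool
        top_p=0.9,
        max_new_tokens=64,
        num_return_sequences=num_candidates,
        pad_token_id=llm_tok.eos_token_id,
    )
    # Strip the prompt tokens; keep only the newly generated continuation.
    new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
    return [llm_tok.decode(t, skip_special_tokens=True) for t in new_tokens]

# Step 2: a compact seq2seq corrector reads the source plus all candidates
# and merges them. An off-the-shelf t5-base (~220M parameters, close to the
# paper's 250M) would first need fine-tuning to act as LMCOR.
cor_name = "t5-base"
cor_tok = AutoTokenizer.from_pretrained(cor_name)
corrector = AutoModelForSeq2SeqLM.from_pretrained(cor_name)

def correct(source: str, candidates: list[str]) -> str:
    """Serialize source + candidates and let the corrector rewrite them."""
    # Hypothetical input format; the serialization used to train LMCOR
    # is an assumption here.
    corrector_input = source + " candidates: " + " | ".join(candidates)
    inputs = cor_tok(corrector_input, return_tensors="pt", truncation=True)
    output = corrector.generate(**inputs, max_new_tokens=64)
    return cor_tok.decode(output[0], skip_special_tokens=True)
```

Because the corrector only consumes the LLM's text outputs, the generating LLM can be swapped at inference without retraining, which is the plug-and-play property the abstract claims.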
Pages: 2703-2718
Page count: 16
Related Papers (50 in total)
  • [31] Ranger, Ulrike; Weinell, Erhard. The Graph Rewriting Language and Environment PROGRES. Applications of Graph Transformations with Industrial Relevance, 2008, 5088: 575-576.
  • [32] Glauert, JRW; Kennaway, R; Papadopoulos, GA; Sleep, R. Dactl: an experimental graph rewriting language. Journal of Programming Languages, 1997, 5(1): 85-108.
  • [33] Andric, Ivan; Jonke, Larisa; Jurman, Danijel. Solitons and giants in matrix models. Journal of High Energy Physics, 2006, (12).
  • [34] [Anonymous]. Large outputs on small cycles. Ceramics, 1967, 18(217): 18-&.
  • [35] Muff, Stefanie; Nilsen, Erlend B.; O'Hara, Robert B.; Nater, Chloe R. Rewriting results sections in the language of evidence. Trends in Ecology & Evolution, 2022, 37(3): 203-210.
  • [36] Ivanov, L. M.; Tokmakian, R. T. Sensitivity analysis of nonlinear models to parameter perturbations for small size ensembles of model outputs. International Journal of Bifurcation and Chaos, 2011, 21(12): 3589-3609.
  • [37] Miyamoto, K; Harada, Y. DVispatch: A visual language with distributed rewriting. 1998 IEEE Symposium on Visual Languages, Proceedings, 1998: 152-159.
  • [38] Dikici, Erinc; Semerci, Murat; Saraclar, Murat; Alpaydin, Ethem. Data sampling and dimensionality reduction approaches for reranking ASR outputs using discriminative language models. 12th Annual Conference of the International Speech Communication Association (INTERSPEECH 2011), Vols 1-5, 2011: 1472-+.
  • [39] 田新民; 王鼎兴; 郑纬民; 沈美明; 李程. Compiling CIL Rewriting Language for Multiprocessors. Journal of Computer Science and Technology, 1994, (4): 302-310.
  • [40] Lucanu, D. Relaxed models for rewriting logic. Theoretical Computer Science, 2003, 290(1): 265-289.