Small Language Models Need Strong Verifiers to Self-Correct Reasoning

Cited by: 0
Authors
Zhang, Yunxiang [1 ]
Khalifa, Muhammad [1 ]
Logeswaran, Lajanugen [2 ]
Kim, Jaekyeom [2 ]
Lee, Moontae [2 ,3 ]
Lee, Honglak [1 ,2 ]
Wang, Lu [1 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] LG AI Res, Seoul, South Korea
[3] Univ Illinois, Chicago, IL USA
Keywords: (none)
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Self-correction has emerged as a promising approach to boosting the reasoning performance of large language models (LLMs), in which models refine their solutions using self-generated critiques that pinpoint errors. This work explores whether small (≤ 13B) language models (LMs) can self-correct on reasoning tasks with minimal input from stronger LMs. We propose a novel pipeline that prompts smaller LMs to collect self-correction data to support the training of self-refinement abilities. First, we leverage correct solutions to guide the model in critiquing its incorrect responses. Second, the generated critiques, after filtering, are used for supervised fine-tuning of the self-correcting reasoner through solution refinement. Our experimental results show improved self-correction abilities for two models on five datasets spanning math and commonsense reasoning, with notable performance gains when paired with a strong GPT-4-based verifier, though limitations emerge when a weak self-verifier is used to decide when to correct.
Pages: 15637-15653 (17 pages)
Related papers (24 total)
  • [1] Small Language Model Can Self-Correct
    Han, Haixia
    Liang, Jiaqing
    Shi, Jie
    He, Qianyu
    Xiao, Yanghua
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16, 2024, : 18162 - 18170
  • [2] Does Medicine Need to Accommodate Positive Conscientious Objections to Morally Self-Correct?
    Kim, Eric J.
    Ferguson, Kyle
    AMERICAN JOURNAL OF BIOETHICS, 2021, 21 (08): : 74 - 76
  • [3] Second Chance for Second Life - Virtual worlds need the freedom to self-correct, argues Robert Bloomfield
    Bloomfield, Robert
    TECHNOLOGY REVIEW, 2008, 111 (01) : 12 - 13
  • [4] Distilling mathematical reasoning capabilities into Small Language Models
    Zhu, Xunyu
    Li, Jian
    Liu, Yong
    Ma, Can
    Wang, Weiping
    NEURAL NETWORKS, 2024, 179
  • [5] Mathematical Reasoning via Multi-step Self Questioning and Answering for Small Language Models
    Chen, Kaiyuan
    Wang, Jin
    Zhang, Xuejie
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT IV, NLPCC 2024, 2025, 15362 : 81 - 93
  • [7] Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning
    Juneja, Gurusha
    Dutta, Subhabrata
    Chakrabarti, Soumen
    Manchhanda, Sunny
    Chakraborty, Tanmoy
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 3675 - 3691
  • [8] Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
    Chen, Zixiang
    Deng, Yihe
    Yuan, Huizhuo
    Ji, Kaixuan
    Gu, Quanquan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 2024, 235
  • [9] From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models
    Yan, Junbing
    Wang, Chengyu
    Zhang, Taolin
    He, Xiaofeng
    Huang, Jun
    Zhang, Wei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 12413 - 12425
  • [10] Small language models learn enhanced reasoning skills from medical textbooks
    Hyunjae Kim
    Hyeon Hwang
    Jiwoo Lee
    Sihyeon Park
    Dain Kim
    Taewhoo Lee
    Chanwoong Yoon
    Jiwoong Sohn
    Jungwoo Park
    Olga Reykhart
    Thomas Fetherston
    Donghee Choi
    Soo Heon Kwak
    Qingyu Chen
    Jaewoo Kang
    npj Digital Medicine, 8 (1)