DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models

Cited by: 0
Authors:
Han, Chengcheng [1 ,2 ]
Du, Xiaowei [2 ]
Zhang, Che [3 ]
Lian, Yixin [2 ]
Li, Xiang [1 ]
Gao, Ming [1 ,4 ]
Wang, Baoyuan [2 ]
Affiliations:
[1] East China Normal Univ, Sch Data Sci & Engn, Shanghai, Peoples R China
[2] Xiaobing AI, Boston, MA 02199 USA
[3] Peking Univ, Sch Software & Microelect, Beijing, Peoples R China
[4] East China Normal Univ, KLATASDS MOE Sch Stat, Shanghai, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Chain-of-Thought (CoT) prompting has proven effective in enhancing the reasoning capabilities of Large Language Models (LLMs) with at least 100 billion parameters. However, it is ineffective or even detrimental when applied to reasoning tasks in Smaller Language Models (SLMs) with fewer than 10 billion parameters. To address this limitation, we introduce Dialogue-guided Chain-of-Thought (DialCoT), which employs a dialogue format to generate intermediate reasoning steps that guide the model toward the final answer. Additionally, we optimize the model's reasoning path selection using the Proximal Policy Optimization (PPO) algorithm, further enhancing its reasoning capabilities. Our method offers several advantages over previous approaches. First, we transform the process of solving a complex reasoning question by breaking it down into a series of simpler sub-questions, significantly reducing the task difficulty and making it more suitable for SLMs. Second, we optimize the model's reasoning path selection through the PPO algorithm. We conduct comprehensive experiments on four arithmetic reasoning datasets, demonstrating that our method achieves significant performance improvements over state-of-the-art competitors.
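The abstract describes two ideas: decomposing a complex question into dialogue-style sub-questions, and selecting among candidate reasoning paths via a reward signal optimized with PPO. A minimal sketch of both, under stated assumptions: the word problem, function names, and 0/1 reward are illustrative inventions, and the hard-coded turns stand in for an SLM's generated dialogue rather than the authors' actual training pipeline.

```python
# Hedged sketch of DialCoT-style decomposition and reward-guided path
# selection; not the paper's implementation.

def dialogue_decompose(initial_apples, packs, apples_per_pack):
    """Solve 'Alice has `initial_apples` apples and buys `packs` packs of
    `apples_per_pack` apples each; how many does she have now?' as a
    sequence of (sub-question, answer) dialogue turns."""
    bought = packs * apples_per_pack          # turn 1: intermediate result
    total = initial_apples + bought           # turn 2: final answer
    return [
        ("How many apples does Alice buy in total?", bought),
        ("How many apples does she have now?", total),
    ]

def select_path(candidate_paths, gold_answer):
    """Pick the candidate reasoning path earning the highest reward,
    where reward is 1 for a correct final answer and 0 otherwise --
    the role the reward signal plays in PPO-based path selection."""
    return max(candidate_paths,
               key=lambda path: int(path[-1][1] == gold_answer))
```

For example, `dialogue_decompose(3, 2, 4)` produces two turns whose final answer is 11, and `select_path` prefers that path over any path ending in a wrong answer.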
Pages: 8055-8068 (14 pages)