Rationality of Thought Improves Reasoning in Large Language Models

Times Cited: 0
Authors
Gou, Tian [1 ,2 ]
Zhang, Boyao [1 ,2 ]
Sun, Zhenglie [1 ,2 ]
Wang, Jing [1 ,2 ]
Liu, Fang [1 ,2 ]
Wang, Yangang [1 ,2 ]
Wang, Jue [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Comp Network Informat Ctr, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Funding
National Key Research and Development Program of China; Beijing Natural Science Foundation;
Keywords
Large Language Models (LLMs); Zero-Shot Reasoning; Cognitive Foundations of Knowledge; Rationality of Thought (RoT); Cognitive Psychology; Cognitive Bias Dataset; Heuristics; Fallacy
DOI
10.1007/978-981-97-5501-1_26
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
While the capabilities of large language models (LLMs) have progressively advanced, their competence on intricate reasoning tasks remains inadequate, primarily due to insufficient cognitive capabilities. To explore the cognitive proficiency of models such as GPT-4, we turn to methodologies from cognitive psychology: cognitive abilities reflect rational-thinking skills, and cognitive bias tasks are commonly used to assess levels of rational thinking. In this paper, we develop a cognitive bias dataset to measure the rational thinking and cognitive levels of LLMs. Our observations indicate that GPT-4, like humans, exhibits limitations in its rational-thinking ability. We propose a new method, "Rationality of Thought" (RoT), which prompts LLMs to follow a rational thinking process during task execution. This method significantly improves the accuracy of GPT-4 on the cognitive bias task, by 18.7%. Because cognitive capacity is also essential for tackling complex problems, we further apply RoT across various reasoning tasks. Using only a zero-shot setting, RoT outperforms reasoning-enhancement techniques such as zero-shot CoT on multiple arithmetic and commonsense reasoning benchmarks, including SVAMP (+1.8), AQUA-RAT (+6.0), ARC-c (+4.1), and ARC-e (+3.9). Our empirical evaluation shows that RoT helps LLMs elevate their cognitive capabilities through rational thinking, thereby becoming more adept at navigating complex reasoning tasks.
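The abstract's comparison is between two zero-shot triggers: the standard CoT trigger phrase and an RoT-style preamble that steers the model toward explicitly rational, bias-aware reasoning. This record does not reproduce the paper's actual prompt, so the sketch below is a hypothetical Python illustration only: the ROT_PROMPT wording, build_prompt, and call_llm are assumed stand-ins, not the authors' implementation.

# Illustrative sketch of the zero-shot prompting styles compared above.
# ROT_PROMPT is a hypothetical reconstruction of an RoT-style trigger;
# the paper's actual prompt text is not given in this record.

COT_PROMPT = "Let's think step by step."  # standard zero-shot CoT trigger

# Assumed RoT-style preamble: ask the model to reason rationally and to
# check itself for common cognitive biases before committing to an answer.
ROT_PROMPT = (
    "Before answering, reason rationally: restate the problem in neutral "
    "terms, list the relevant facts, check for cognitive biases such as "
    "anchoring or the conjunction fallacy, and only then derive the answer "
    "logically."
)

def build_prompt(question: str, style: str = "rot") -> str:
    """Wrap a task question with a zero-shot reasoning trigger (no exemplars)."""
    trigger = ROT_PROMPT if style == "rot" else COT_PROMPT
    return f"{trigger}\n\nQuestion: {question}\nAnswer:"

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-model call; plug in any LLM client here."""
    raise NotImplementedError

if __name__ == "__main__":
    # Classic cognitive-bias probe (the bat-and-ball problem): the intuitive
    # but wrong answer is $0.10; the rational answer is $0.05.
    q = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
         "than the ball. How much does the ball cost?")
    print(build_prompt(q, style="rot"))

Both styles stay strictly zero-shot, a trigger phrase plus the question with no worked exemplars, which matches the evaluation setting reported in the abstract.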
Pages: 343-358
Page count: 16
Related Papers
50 entries in total
  • [31] Reasoning with large language models for medical question answering
    Lucas, Mary M.
    Yang, Justin
    Pomeroy, Jon K.
    Yang, Christopher C.
    JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2024, 31 (09)
  • [32] Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
    Wang, Lei
    Xu, Wanyu
    Lan, Yihuai
    Hu, Zhiqiang
    Lan, Yunshi
    Lee, Roy Ka-Wei
    Lim, Ee-Peng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 2609 - 2634
  • [33] NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models
    Zhou, Gengze
    Hong, Yicong
    Wu, Qi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 7, 2024, : 7641 - 7649
  • [34] IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models
    You, Haoxuan
    Sun, Rui
    Wang, Zhecan
    Chen, Long
    Wang, Gengyu
    Ayyubi, Hammad A.
    Chang, Kai-Wei
    Chang, Shih-Fu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 11289 - 11303
  • [35] LANGUAGE AND THOUGHT, JUDGMENT AND REASONING OF THE CHILD
    White, William A.
    PSYCHOANALYTIC REVIEW, 1929, 16 (03): 312 - 321
  • [36] Dissociating Language and Thought in Human Reasoning
    Coetzee, John P.
    Johnson, Micah A.
    Lee, Youngzie
    Wu, Allan D.
    Iacoboni, Marco
    Monti, Martin M.
    BRAIN SCIENCES, 2023, 13 (01)
  • [37] Towards Analysis and Interpretation of Large Language Models for Arithmetic Reasoning
    Akter, Mst Shapna
    Shahriar, Hossain
    Cuzzocrea, Alfredo
    2024 11TH IEEE SWISS CONFERENCE ON DATA SCIENCE, SDS 2024, 2024, : 267 - 270
  • [38] On Implementing Case-Based Reasoning with Large Language Models
    Wilkerson, Kaitlynne
    Leake, David
    CASE-BASED REASONING RESEARCH AND DEVELOPMENT, ICCBR 2024, 2024, 14775 : 404 - 417
  • [39] Reasoning with Large Language Models on Graph Tasks: The Influence of Temperature
    Wang, Yiming
    Zhang, Ziyang
    Chen, Hanwei
    Shen, Huayi
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024, 2024, : 630 - 634
  • [40] Over-Reasoning and Redundant Calculation of Large Language Models
    Chiang, Cheng-Han
    Lee, Hung-yi
    PROCEEDINGS OF THE 18TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 161 - 169