Emergent analogical reasoning in large language models

Cited by: 0
Authors
Taylor Webb
Keith J. Holyoak
Hongjing Lu
Affiliations
[1] University of California, Department of Psychology
[2] University of California, Department of Statistics
Source
Nature Human Behaviour | 2023, Volume 7
Abstract
The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
Pages: 1526-1541
Page count: 15
Related papers
50 items in total
  • [31] NEWTON: Are Large Language Models Capable of Physical Reasoning?
    Wang, Yi Ru; Du, Jiafei; Fox, Dieter; Srinivasa, Siddhartha
    Findings of the Association for Computational Linguistics (EMNLP 2023), 2023: 9743-9758
  • [32] Dynamic Voting for Efficient Reasoning in Large Language Models
    Xue, Mingfeng; Liu, Dayiheng; Lei, Wenqiang; Ren, Xingzhang; Yang, Baosong; Xie, Jun; Zhang, Yidan; Peng, Dezhong; Lv, Jiancheng
    Findings of the Association for Computational Linguistics: EMNLP 2023, 2023: 3085-3104
  • [33] Reasoning with large language models for medical question answering
    Lucas, Mary M.; Yang, Justin; Pomeroy, Jon K.; Yang, Christopher C.
    Journal of the American Medical Informatics Association, 2024, 31 (09)
  • [34] Rationality of Thought Improves Reasoning in Large Language Models
    Gou, Tian; Zhang, Boyao; Sun, Zhenglie; Wang, Jing; Liu, Fang; Wang, Yangang; Wang, Jue
    Knowledge Science, Engineering and Management, Pt IV, KSEM 2024, 2024, 14887: 343-358
  • [35] Can Language Models Learn Analogical Reasoning? Investigating Training Objectives and Comparisons to Human Performance
    Petersen, Molly R.; van der Plas, Lonneke
    2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 16414-16425
  • [36] ANALOGICAL - A Novel Benchmark for Long Text Analogy Evaluation in Large Language Models
    Wijesiriwardene, Thilini; Wickramarachchi, Ruwan; Gajera, Bimal G.; Gowaikar, Shreeyash Mukul; Gupta, Chandan; Chadha, Aman; Reganti, Aishwarya Naresh; Sheth, Amit; Das, Amitava
    Findings of the Association for Computational Linguistics: ACL 2023, 2023: 3534-3549
  • [37] On Analogical Reasoning
    Sunstein, C. R.
    Harvard Law Review, 1993, 106 (03): 741-791
  • [38] Analogical reasoning
    Sowa, J. F.; Majumdar, A. K.
    Conceptual Structures for Knowledge Creation and Communication, 2003, 2746: 16-36
  • [39] Verbal Analogical Reasoning in Children with Language-Learning Disabilities
    Masterson, J. J.; Evans, L. H.; Aloia, M.
    Journal of Speech and Hearing Research, 1993, 36 (01): 76-82
  • [40] NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models
    Zhou, Gengze; Hong, Yicong; Wu, Qi
    Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol 38, No 7, 2024: 7641-7649