Large Language Models Need Symbolic AI

Cited by: 0
Authors
Hammond, Kristian [1 ]
Leake, David [2 ]
Affiliations
[1] Northwestern Univ, Mudd Hall, Evanston, IL 60208 USA
[2] Indiana Univ, Luddy Hall, Bloomington, IN 47408 USA
Keywords
ChatGPT; Large language models; Natural Language Understanding; Neuro-Symbolic AI
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The capability of systems based on large language models (LLMs), such as ChatGPT, to generate human-like text has captured the attention of the public and the scientific community. It has prompted both predictions that systems such as ChatGPT will transform AI and enumerations of their problems, with hopes of solving those problems through scale and training. This position paper argues that both the over-optimistic views and the disappointments reflect misconceptions about the fundamental nature of LLMs as language models. As such, they are statistical models of language production and fluency, with associated strengths and limitations; they are not, and should not be expected to be, knowledge models of the world, nor do they capture the core role of language beyond the statistics: communication. The paper argues that realizing that role will require driving LLMs with symbolic systems based on goals, facts, reasoning, and memory.
Pages: 6