Generalized Planning in PDDL Domains with Pretrained Large Language Models

Cited by: 0
Authors
Silver, Tom [1]
Dan, Soham [2]
Srinivas, Kavitha [2]
Tenenbaum, Joshua B. [1]
Kaelbling, Leslie [1]
Katz, Michael [2]
Affiliations
[1] MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139 USA
[2] IBM Research, Yorktown Heights, NY, USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recent work has considered whether large language models (LLMs) can function as planners: given a task, generate a plan. We investigate whether LLMs can serve as generalized planners: given a domain and training tasks, generate a program that efficiently produces plans for other tasks in the domain. In particular, we consider PDDL domains and use GPT-4 to synthesize Python programs. We also consider (1) Chain-of-Thought (CoT) summarization, where the LLM is prompted to summarize the domain and propose a strategy in words before synthesizing the program; and (2) automated debugging, where the program is validated with respect to the training tasks, and in case of errors, the LLM is re-prompted with four types of feedback. We evaluate this approach in seven PDDL domains and compare it to four ablations and four baselines. Overall, we find that GPT-4 is a surprisingly powerful generalized planner. We also conclude that automated debugging is very important, that CoT summarization has non-uniform impact, that GPT-4 is far superior to GPT-3.5, and that just two training tasks are often sufficient for strong generalization.
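To make the pipeline concrete, below is a minimal Python sketch of the synthesize-validate-debug loop the abstract describes. The helpers query_llm and validate_on_tasks are hypothetical stand-ins, and the prompts are illustrative; they are not the authors' actual prompts or implementation.

# Minimal sketch of the generate-validate-debug loop from the abstract,
# assuming hypothetical helpers query_llm and validate_on_tasks.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call (e.g., GPT-4 behind an API); stubbed here."""
    raise NotImplementedError

def validate_on_tasks(program: str, train_tasks: list[str]) -> str | None:
    """Hypothetical: run the synthesized program on each training task and
    check its plan with a PDDL plan validator; return feedback on failure
    (e.g., a Python exception, a timeout, or an invalid plan) or None."""
    raise NotImplementedError

def synthesize_generalized_planner(domain_pddl: str,
                                   train_tasks: list[str],
                                   max_debug_rounds: int = 4) -> str:
    # (1) Chain-of-Thought summarization: summarize the domain and propose
    # a strategy in words before asking for any code.
    strategy = query_llm(
        "Summarize this PDDL domain and propose a planning strategy:\n"
        + domain_pddl)

    # (2) Program synthesis: ask for a Python program implementing it.
    base_prompt = (
        "Domain:\n" + domain_pddl + "\nStrategy:\n" + strategy +
        "\nWrite a Python function plan(task) -> list[str] that returns "
        "a valid plan for any task in this domain.")
    program = query_llm(base_prompt)

    # (3) Automated debugging: validate on the training tasks; on failure,
    # re-prompt the LLM with the error feedback and try again.
    for _ in range(max_debug_rounds):
        feedback = validate_on_tasks(program, train_tasks)
        if feedback is None:  # all training tasks solved correctly
            break
        program = query_llm(
            base_prompt + "\nYour previous program:\n" + program +
            "\nIt failed with:\n" + feedback + "\nPlease fix the program.")
    return program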
Pages: 20256-20264
Page count: 9