Plan, Generate and Match: Scientific Workflow Recommendation with Large Language Models

Cited by: 2
|
Authors
Gu, Yang [1 ]
Cao, Jian [1 ]
Guo, Yuan [1 ]
Qian, Shiyou [1 ]
Guan, Wei [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Funding
US National Science Foundation;
Keywords
Scientific Workflow Recommendation; Large Language Models; Planning; Prompting;
DOI
10.1007/978-3-031-48421-6_7
Chinese Library Classification
TP31 [Computer Software];
Subject Classification Codes
081202 ; 0835 ;
Abstract
The recommendation of scientific workflows from public repositories that meet users' natural language requirements is becoming increasingly essential in the scientific community. Nevertheless, existing methods that rely on direct text matching encounter difficulties when handling complex queries, which ultimately results in poor performance. Large language models (LLMs) have recently exhibited exceptional ability in planning and reasoning. We propose "Plan, Generate and Match" (PGM), a scientific workflow recommendation method leveraging LLMs. PGM consists of three stages: utilizing LLMs to conduct planning upon receiving a user query, generating a structured workflow specification guided by the solution steps, and using these plans and specifications to match against candidate workflows. By incorporating the planning mechanism, PGM leverages few-shot prompting to automatically generate well-considered steps for guiding the recommendation of reliable workflows. This method represents the first exploration of incorporating LLMs into the scientific workflow domain. Experimental results on real-world benchmarks demonstrate that PGM outperforms state-of-the-art methods with statistical significance, highlighting its immense potential in addressing complex requirements.
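The abstract's three stages can be illustrated with a minimal sketch. This is not the paper's implementation: the LLM is replaced by a deterministic stub, and the matching step is reduced to simple token overlap between the plan/specification and candidate descriptions; every function and workflow name below is hypothetical.

```python
# Hypothetical sketch of the PGM pipeline: Plan (LLM drafts solution steps),
# Generate (LLM produces a structured workflow specification), and Match
# (score candidate workflows against the plan and specification).

def stub_llm(prompt: str) -> str:
    """Stand-in for a few-shot-prompted LLM call (deterministic for the demo)."""
    if "solution steps" in prompt:
        return "1. load sequences 2. align sequences 3. build phylogenetic tree"
    return "workflow: inputs sequences; tasks align, build phylogenetic tree"

def plan(query: str) -> str:
    # Stage 1: ask the (stubbed) LLM for solution steps for the user query.
    return stub_llm(f"List the solution steps for: {query}")

def generate(steps: str) -> str:
    # Stage 2: turn the steps into a structured workflow specification.
    return stub_llm(f"Write a workflow specification following: {steps}")

def match(plan_text: str, spec: str, candidates: dict) -> str:
    # Stage 3: rank candidates by token overlap with the plan + specification.
    query_tokens = set((plan_text + " " + spec).lower().split())
    def score(desc: str) -> int:
        return len(query_tokens & set(desc.lower().split()))
    return max(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "wf-phylo": "align sequences and build a phylogenetic tree",
    "wf-image": "segment microscopy images and count cells",
}
steps = plan("Infer a phylogenetic tree from DNA sequences")
spec = generate(steps)
best = match(steps, spec, candidates)
print(best)  # the phylogenetics workflow scores highest: wf-phylo
```

In the paper's setting, the stub would be a few-shot prompt to a real LLM, and the matcher would compare against the repository's actual workflow descriptions rather than two toy candidates.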
Pages: 86-102
Page count: 17
Related Papers
50 records in total
  • [21] Large Language Models for Recommendation: Past, Present, and Future
    Bao, Keqin
    Zhang, Jizhi
    Lin, Xinyu
    Zhang, Yang
    Wang, Wenjie
    Feng, Fuli
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 2993 - 2996
  • [22] Similarity assessment for scientific workflow clustering and recommendation
    Zhangbing ZHOU
    Zehui CHENG
    Yueqin ZHU
    Science China (Information Sciences), 2016, 59 (11) : 220 - 223
  • [23] Layer-Hierarchical Scientific Workflow Recommendation
    Cheng, Zehui
    Zhou, ZhangBing
    Hung, Patrick C. K.
    Ning, Ke
    Zhang, Liang-Jie
    2016 IEEE INTERNATIONAL CONFERENCE ON WEB SERVICES (ICWS), 2016, : 694 - 699
  • [24] Scientific workflow activity recommendation for interactive modeling
    Wen, Yiping
    Hou, Junjie
    Tan, Zheng
    Liu, Jianxun
    Xu, Xiaolong
    Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2022, 28 (10): : 3115 - 3121
  • [25] Similarity assessment for scientific workflow clustering and recommendation
    Zhou, Zhangbing
    Cheng, Zehui
    Zhu, Yueqin
    SCIENCE CHINA-INFORMATION SCIENCES, 2016, 59 (11)
  • [26] Exploring Large Language Models to generate Easy to Read content
    Martinez, Paloma
    Ramos, Alberto
    Moreno, Lourdes
    FRONTIERS IN COMPUTER SCIENCE, 2024, 6
  • [27] Evaluating the Ability of Large Language Models to Generate Motivational Feedback
    Gaeta, Angelo
    Orciuoli, Francesco
    Pascuzzo, Antonella
    Peduto, Angela
    GENERATIVE INTELLIGENCE AND INTELLIGENT TUTORING SYSTEMS, PT I, ITS 2024, 2024, 14798 : 188 - 201
  • [28] Can Large Language Models Automatically Generate GIS Reports?
    Starace, Luigi Libero Lucio
    Di Martino, Sergio
    WEB AND WIRELESS GEOGRAPHICAL INFORMATION SYSTEMS, W2GIS 2024, 2024, 14673 : 147 - 161
  • [29] Common Sense Plan Verification with Large Language Models
    Grigorev, Danil S.
    Kovalev, Alexey K.
    Panov, Aleksandr I.
    HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, PT II, HAIS 2024, 2025, 14858 : 224 - 236
  • [30] Towards efficient and effective unlearning of large language models for recommendation
    Wang, Hangyu
    Lin, Jianghao
    Chen, Bo
    Yang, Yang
    Tang, Ruiming
    Zhang, Weinan
    Yu, Yong
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (03)