InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment

Cited: 0
Authors
Wang, Jianing [1 ,2 ]
Wu, Junda [2 ]
Hou, Yupeng [2]
Liu, Yao [1 ]
Gao, Ming [1 ]
McAuley, Julian [2 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Univ Calif San Diego, La Jolla, CA 92093 USA
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Can current large language models (LLMs) better solve graph reasoning and generation tasks with parameter updates? In this paper, we propose InstructGraph, a framework that empowers LLMs with graph reasoning and generation abilities through instruction tuning and preference alignment. Specifically, we first propose a structured format verbalizer that unifies all graph data into a universal code-like format, which can represent a graph directly as text without any external graph-specific encoders. We then introduce a graph instruction tuning stage to guide LLMs in solving graph reasoning and generation tasks. Finally, we identify potential hallucination problems in graph tasks and sample negative instances for preference alignment, which aims to enhance the reliability of the model's outputs. Extensive experiments across multiple graph-centric tasks show that InstructGraph achieves the best performance, outperforming GPT-4 and LLaMA2 by more than 13% and 38%, respectively.
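The abstract's structured format verbalizer turns a graph into a code-like string so a plain LLM can read it without a graph encoder. A minimal sketch of the idea, assuming an illustrative `Graph[...] { entity_list; triple_list }` surface syntax and function name of my own choosing (the paper's exact format may differ):

```python
def verbalize_graph(name, nodes, edges):
    """Render a graph as a code-like text block, one line per element.

    `nodes` is a list of entity names; `edges` is a list of
    (head, relation, tail) triples.
    """
    lines = [f'Graph[name="{name}"] {{']
    lines.append('  entity_list = [' + ', '.join(f'"{n}"' for n in nodes) + '];')
    lines.append('  triple_list = [')
    for head, rel, tail in edges:
        # Each edge becomes a directed, relation-labeled arrow in text form.
        lines.append(f'    ("{head}" -> "{tail}")[relation="{rel}"],')
    lines.append('  ];')
    lines.append('}')
    return '\n'.join(lines)

prompt = verbalize_graph(
    "knowledge-graph",
    ["Paris", "France"],
    [("Paris", "capital_of", "France")],
)
print(prompt)
```

The resulting string can be spliced into an instruction prompt as-is; because the format is uniform across graph types, the same template serves every graph-centric task during instruction tuning.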
Pages: 13492-13510
Page count: 19
Related papers
41 in total
  • [21] JMedLoRA: Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning
    Sukeda, Issey
    Suzuki, Masahiro
    Kodera, Satoshi
    Sakaji, Hiroki
    arXiv, 2023,
  • [22] Fine-Tuning Large Enterprise Language Models via Ontological Reasoning
    Baldazzi, Teodoro
    Bellomarini, Luigi
    Ceri, Stefano
    Colombo, Andrea
    Gentili, Andrea
    Sallinger, Emanuel
    RULES AND REASONING, RULEML+RR 2023, 2023, 14244 : 86 - 94
  • [23] MINT: Boosting Audio-Language Model via Multi-Target Pre-Training and Instruction Tuning
    Zhao, Hang
    Xing, Yifei
    Yu, Zhesong
    Zhu, Bilei
    Lu, Lu
    Ma, Zejun
    INTERSPEECH 2024, 2024, : 52 - 56
  • [24] Knowledge Graph-Enhanced Large Language Models via Path Selection
    Liu, Haochen
    Wang, Song
    Zhu, Yaochen
    Dong, Yushun
    Li, Jundong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 6311 - 6321
  • [25] Enhancing Visual Information Extraction with Large Language Models Through Layout-Aware Instruction Tuning
    Li, Teng
    Wang, Jiapeng
    Jin, Lianwen
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT VII, 2025, 15037 : 276 - 289
  • [26] EcomGPT: Instruction-Tuning Large Language Models with Chain-of-Task Tasks for E-commerce
    Li, Yangning
    Ma, Shirong
    Wang, Xiaobin
    Huang, Shen
    Jiang, Chengyue
    Zheng, Hai-Tao
    Xie, Pengjun
    Huang, Fei
    Jiang, Yong
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 18582 - 18590
  • [27] DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning
    Wang, Yejie
    He, Keqing
    Dong, Guanting
    Wang, Pei
    Zeng, Weihao
    Diao, Muxi
    Zhang, Mengdi
    Wang, Jingang
    Cai, Xunliang
    Xu, Weiran
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 4706 - 4721
  • [28] Empowering Legal Citation Recommendation via Efficient Instruction-Tuning of Pre-trained Language Models
    Wang, Jie
    Bansal, Kanha
    Arapakis, Ioannis
    Ge, Xuri
    Jose, Joemon M.
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT I, 2024, 14608 : 310 - 324
  • [29] Supporting Business Document Workflows via Collection-Centric Information Foraging with Large Language Models
    Fok, Raymond
    Lipka, Nedim
    Sun, Tong
    Siu, Alexa
    PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [30] Automated taxonomy alignment via large language models: bridging the gap between knowledge domains
    Cui, Wentao
    Xiao, Meng
    Wang, Ludi
    Wang, Xuezhi
    Du, Yi
    Zhou, Yuanchun
    SCIENTOMETRICS, 2024, 129 (09) : 5287 - 5312