InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment

Cited: 0
Authors
Wang, Jianing [1 ,2 ]
Wu, Junda [2 ]
Hou, Yupeng [2]
Liu, Yao [1 ]
Gao, Ming [1 ]
McAuley, Julian [2 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Univ Calif San Diego, La Jolla, CA 92093 USA
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC Classification Code
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Do current large language models (LLMs) solve graph reasoning and generation tasks better with parameter updates? In this paper, we propose InstructGraph, a framework that empowers LLMs with the abilities of graph reasoning and generation through instruction tuning and preference alignment. Specifically, we first propose a structured format verbalizer to unify all graph data into a universal code-like format, which can represent the graph directly without any external graph-specific encoders. Furthermore, a graph instruction tuning stage is introduced to guide LLMs in solving graph reasoning and generation tasks. Finally, we identify potential hallucination problems in graph tasks and sample negative instances for preference alignment, the goal of which is to enhance the reliability of the model's outputs. Extensive experiments across multiple graph-centric tasks show that InstructGraph achieves the best performance, outperforming GPT-4 and LLaMA2 by more than 13% and 38%, respectively.
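To make the abstract's "universal code-like format" concrete, below is a minimal sketch in Python of how a structured format verbalizer might flatten a small graph into such a string. The function name and the template layout (Graph[...], entity_list, triple_list) are illustrative assumptions based only on the abstract, not the paper's exact verbalizer.

    # Minimal sketch (illustrative only): flatten a graph into a code-like
    # string so an LLM can read it as plain text, with no graph encoder.
    # The template below is an assumption, not the paper's exact format.
    def verbalize_graph(name, entities, triples):
        """Render a graph as a code-like block of entity and triple declarations."""
        lines = [f'Graph[name="{name}"] {{']
        lines.append('    entity_list = [' + ', '.join(f'"{e}"' for e in entities) + '];')
        lines.append('    triple_list = [')
        for head, relation, tail in triples:
            lines.append(f'        ("{head}" -> "{tail}")[relation="{relation}"],')
        lines.append('    ];')
        lines.append('}')
        return '\n'.join(lines)

    # Example: a one-triple knowledge graph rendered for an instruction prompt.
    print(verbalize_graph(
        "example-kg",
        entities=["Alan Turing", "Cambridge"],
        triples=[("Alan Turing", "educated_at", "Cambridge")],
    ))

A string of this form can then be embedded directly in an instruction prompt for the tuning and preference-alignment stages the abstract describes.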
Pages: 13492-13510
Number of pages: 19
Related Papers
41 in total
  • [31] Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning
    Ozdayi, Mustafa Safa
    Peris, Charith
    Fitzgerald, Jack
    Dupuy, Christophe
    Majmudar, Jimit
    Khan, Haidar
    Parikh, Rahil
    Gupta, Rahul
61ST CONFERENCE OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023, : 1512 - 1521
  • [32] ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding
    Zhong, Qihuang
    Ding, Liang
    Liu, Juhua
    Du, Bo
    Tao, Dacheng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 13721 - 13736
  • [33] Intelligent Checking Method for Construction Schemes via Fusion of Knowledge Graph and Large Language Models
    Li, Hao
    Yang, Rongzheng
    Xu, Shuangshuang
    Xiao, Yao
    Zhao, Hongyu
    BUILDINGS, 2024, 14 (08)
  • [34] Child-Centric Robot Dialogue Systems: Fine-Tuning Large Language Models for Better Utterance Understanding and Interaction
    Kim, Da-Young
    Lym, Hyo Jeong
    Lee, Hanna
    Lee, Ye Jun
    Kim, Juhyun
    Kim, Min-Gyu
    Baek, Yunju
SENSORS, 2024, 24 (24)
  • [35] Efficient Fine-Tuning of Large Language Models via a Low-Rank Gradient Estimator
    Zhang, Luoming
    Lou, Zhenyu
    Ying, Yangwei
    Yang, Cheng
    Zhou, Hong
APPLIED SCIENCES-BASEL, 2025, 15 (01)
  • [36] Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment
    Lu, Keming
    Yu, Bowen
    Zhou, Chang
    Zhou, Jingren
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 7828 - 7840
  • [37] Enhancing In-Context Learning of Large Language Models for Knowledge Graph Reasoning via Rule-and-Reinforce Selected Triples
    Wang, Shaofei
APPLIED SCIENCES-BASEL, 2025, 15 (03)
  • [38] TS-HTFA: Advancing Time-Series Forecasting via Hierarchical Text-Free Alignment with Large Language Models
    Wang, Pengfei
    Zheng, Huanran
    Xu, Qi'ao
    Dai, Silong
    Wang, Yiqiao
    Yue, Wenjing
    Zhu, Wei
    Qian, Tianwen
    Zhao, Liang
SYMMETRY-BASEL, 2025, 17 (03)
  • [39] Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
    Kim, Jeonghoon
    Lee, Jung Hyun
    Kim, Sungdong
    Park, Joonsuk
    Yoo, Kang Min
    Kwon, Se Jung
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [40] Quant-LLM: Accelerating the Serving of Large Language Models via FP6-Centric Algorithm-System Co-Design on Modern GPUs
    Xia, Haojun
    Zheng, Zhen
    Wu, Xiaoxia
    Chen, Shiyang
    Yao, Zhewei
    Youn, Stephen
    Bakhtiari, Arash
    Wyatt, Michael
    Zhuang, Donglin
    Zhou, Zhongzhu
    Ruwase, Olatunji
    He, Yuxiong
    Song, Shuaiwen Leon
    PROCEEDINGS OF THE 2024 USENIX ANNUAL TECHNICAL CONFERENCE, ATC 2024, 2024, : 699 - 713