All in One: Multi-Task Prompting for Graph Neural Networks

Cited by: 20
Authors
Sun, Xiangguo [1 ,2 ]
Cheng, Hong [1 ,2 ]
Li, Jia [3 ]
Liu, Bo [4 ]
Guan, Jihong [5 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Syst Engn & Engn Management, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Shun Hing Inst Adv Engn, Hong Kong, Peoples R China
[3] Hong Kong Univ Sci & Technol, Data Sci & Analyt Thrust, Guangzhou, Peoples R China
[4] Southeast Univ, Sch Comp Sci & Engn, Purple Mt Labs, Nanjing, Peoples R China
[5] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
pre-training; prompt tuning; graph neural networks;
DOI
10.1145/3580305.3599256
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recently, "pre-training and fine-tuning" has been adopted as a standard workflow for many graph tasks, since it can transfer general graph knowledge to relieve the shortage of annotations in each application. However, graph tasks at the node, edge, and graph levels are highly diverse, so the pre-training pretext is often incompatible with these multiple tasks. This gap may even cause "negative transfer" to a specific application, leading to poor results. Inspired by prompt learning in natural language processing (NLP), which has shown significant effectiveness in leveraging prior knowledge for various NLP tasks, we study prompting for graphs, with the motivation of bridging the gap between pre-trained models and various graph tasks. In this paper, we propose a novel multi-task prompting method for graph models. Specifically, we first unify the format of graph prompts and language prompts with a prompt token, a token structure, and an inserting pattern. In this way, the prompting idea from NLP can be seamlessly introduced to the graph area. Then, to further narrow the gap between various graph tasks and state-of-the-art pre-training strategies, we study the task space of various graph applications and reformulate downstream problems as graph-level tasks. Afterward, we introduce meta-learning to efficiently learn a better initialization for the multi-task graph prompt, so that our prompting framework is more reliable and general across different tasks. Extensive experiments demonstrate the superiority of our method.
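The abstract compresses three technical steps: (1) a graph prompt defined by prompt tokens, a token structure, and an inserting pattern; (2) reformulating node- and edge-level problems as graph-level tasks (e.g., on induced subgraphs around the target node or edge); and (3) meta-learning an initialization for the prompt. The following PyTorch sketch illustrates step (1) and the frozen-backbone tuning setup. Everything here is an assumption made for illustration: the names (PromptGraph, TinyGNN, tau), the similarity-based link construction, and the dense adjacency representation are not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class PromptGraph(nn.Module):
    """Learnable prompt tokens attached to an input graph.

    Minimal sketch of a graph prompt given by (prompt tokens,
    token structure, inserting pattern). Dense adjacency matrices
    are assumed purely for brevity.
    """
    def __init__(self, num_tokens: int, dim: int, tau: float = 0.5):
        super().__init__()
        self.tokens = nn.Parameter(0.1 * torch.randn(num_tokens, dim))
        self.tau = tau  # similarity threshold that prunes weak links

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (n, dim) node features; adj: (n, n) adjacency matrix.
        k, n = self.tokens.size(0), x.size(0)
        # Token structure: learnable links among the prompt tokens.
        inner = torch.sigmoid(self.tokens @ self.tokens.t())
        # Inserting pattern: links from prompt tokens to input nodes.
        cross = torch.sigmoid(self.tokens @ x.t())
        inner = inner * (inner > self.tau)
        cross = cross * (cross > self.tau)
        # Prompted graph: prompt tokens prepended as extra nodes.
        x_aug = torch.cat([self.tokens, x], dim=0)
        adj_aug = x.new_zeros(n + k, n + k)
        adj_aug[:k, :k] = inner
        adj_aug[:k, k:] = cross
        adj_aug[k:, :k] = cross.t()
        adj_aug[k:, k:] = adj
        return x_aug, adj_aug

class TinyGNN(nn.Module):
    """Stand-in for a pre-trained GNN; frozen during prompt tuning."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.lin = nn.Linear(dim, hidden)

    def forward(self, x, adj):
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        # Mean neighbor aggregation plus a self-connection.
        return torch.relu(self.lin(adj @ x / deg + x))

# Usage: every downstream problem is cast as graph-level prediction on
# the prompted graph (a node/edge task would first be converted to an
# induced subgraph around the target). Only the prompt is trainable.
gnn = TinyGNN(dim=16, hidden=32)
for p in gnn.parameters():
    p.requires_grad_(False)                  # keep pre-trained weights frozen
prompt = PromptGraph(num_tokens=4, dim=16)
x = torch.randn(10, 16)                      # toy 10-node graph
adj = (torch.rand(10, 10) > 0.7).float()
x_aug, adj_aug = prompt(x, adj)
graph_emb = gnn(x_aug, adj_aug).mean(dim=0)  # graph-level readout
```

For step (3), the meta-learned initialization can be sketched with a Reptile-style outer loop over tasks; the paper's actual meta-learning algorithm and hyper-parameters may differ, and `tasks` below is a hypothetical list of callables mapping a prompt module to a scalar loss.

```python
import copy

def meta_init(prompt, tasks, inner_steps=5, inner_lr=1e-2, meta_lr=0.1):
    """Reptile-style sketch: nudge the shared prompt initialization
    toward the parameters adapted on each task."""
    for task_loss in tasks:                  # task_loss: prompt -> scalar loss
        fast = copy.deepcopy(prompt)         # adapt a copy, keep the init
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            task_loss(fast).backward()
            opt.step()
        with torch.no_grad():                # move the init toward the adapted copy
            for p, q in zip(prompt.parameters(), fast.parameters()):
                p += meta_lr * (q - p)
    return prompt
```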
Pages: 2120-2131
Page count: 12