All in One: Multi-Task Prompting for Graph Neural Networks

Cited by: 20
Authors
Sun, Xiangguo [1 ,2 ]
Cheng, Hong [1 ,2 ]
Li, Jia [3 ]
Liu, Bo [4 ]
Guan, Jihong [5 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Syst Engn & Engn Management, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Shun Hing Inst Adv Engn, Hong Kong, Peoples R China
[3] Hong Kong Univ Sci & Technol, Data Sci & Analyt Thrust, Guangzhou, Peoples R China
[4] Southeast Univ, Sch Comp Sci & Engn, Purple Mt Labs, Nanjing, Peoples R China
[5] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
Funding
National Key R&D Program of China;
Keywords
pre-training; prompt tuning; graph neural networks;
DOI
10.1145/3580305.3599256
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Recently, "pre-training and fine-tuning" has been adopted as a standard workflow for many graph tasks, since it can transfer general graph knowledge to relieve the lack of annotations in each application. However, node-level, edge-level, and graph-level tasks are highly diverse, so the pre-training pretext is often incompatible with these multiple tasks. This gap may even cause "negative transfer" to a specific application, leading to poor results. Inspired by prompt learning in natural language processing (NLP), which has proven highly effective in leveraging prior knowledge for various NLP tasks, we study prompting for graphs with the motivation of bridging the gap between pre-trained models and various graph tasks. In this paper, we propose a novel multi-task prompting method for graph models. Specifically, we first unify the format of graph prompts and language prompts with a prompt token, a token structure, and an inserting pattern. In this way, the prompting idea from NLP can be seamlessly introduced to the graph area. Then, to further narrow the gap between various graph tasks and state-of-the-art pre-training strategies, we analyze the task space of various graph applications and reformulate downstream problems as graph-level tasks. Afterward, we introduce meta-learning to efficiently learn a better initialization for the multi-task graph prompt, so that our prompting framework is more reliable and general across different tasks. Extensive experiments demonstrate the superiority of our method.
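The abstract describes a graph prompt as a small set of learnable token vectors plus an "inserting pattern" that attaches them to an input graph before it is fed to a frozen pre-trained GNN. The sketch below illustrates that idea only; the class and method names are illustrative assumptions, not the authors' actual API, and a dot-product similarity is assumed as the inserting pattern.

```python
import numpy as np

class GraphPrompt:
    """Hypothetical sketch of a graph prompt: a few learnable token
    vectors living in the same space as the node features."""

    def __init__(self, num_tokens, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        # prompt tokens (would be trained by prompt tuning; random here)
        self.tokens = rng.normal(scale=0.1, size=(num_tokens, feat_dim))

    def insert(self, node_feats):
        """Inserting pattern (assumed): weight each token by its
        similarity to each node and add the result to the node features,
        leaving the graph shape unchanged for a frozen GNN."""
        weights = node_feats @ self.tokens.T           # (n_nodes, n_tokens)
        return node_feats + weights @ self.tokens      # (n_nodes, feat_dim)

# toy graph with 5 nodes and 8-dimensional features
x = np.ones((5, 8))
prompt = GraphPrompt(num_tokens=3, feat_dim=8)
x_prompted = prompt.insert(x)
print(x_prompted.shape)  # (5, 8): same shape, ready for the frozen model
```

Because the prompted features keep the original shape, only the prompt tokens need gradient updates downstream, which is what makes prompt tuning cheaper than full fine-tuning.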
Pages: 2120-2131 (12 pages)
Related Papers
50 in total
  • [31] AAGNet: A graph neural network towards multi-task machining feature recognition
    Wu, Hongjin
    Lei, Ruoshan
    Peng, Yibing
    Gao, Liang
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2024, 86
  • [32] Multi-task learning with graph attention networks for multi-domain task-oriented dialogue systems
    Zhao, Meng
    Wang, Lifang
    Jiang, Zejun
    Li, Ronghan
    Lu, Xinyu
    Hu, Zhongtian
    KNOWLEDGE-BASED SYSTEMS, 2023, 259
  • [33] Cell tracking using deep neural networks with multi-task learning
    He, Tao
    Mao, Hua
    Guo, Jixiang
    Yi, Zhang
    IMAGE AND VISION COMPUTING, 2017, 60 : 142 - 153
  • [34] Simple, Efficient and Convenient Decentralized Multi-task Learning for Neural Networks
    Pilet, Amaury Bouchra
    Frey, Davide
    Taiani, Francois
    ADVANCES IN INTELLIGENT DATA ANALYSIS XIX, IDA 2021, 2021, 12695 : 37 - 49
  • [35] Evolutionary Multi-task Learning for Modular Training of Feedforward Neural Networks
    Chandra, Rohitash
    Gupta, Abhishek
    Ong, Yew-Soon
    Goh, Chi-Keong
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT II, 2016, 9948 : 37 - 46
  • [36] Adaptive Feature Aggregation in Deep Multi-Task Convolutional Neural Networks
    Cui, Chaoran
    Shen, Zhen
    Huang, Jin
    Chen, Meng
    Xu, Mingliang
    Wang, Meng
    Yin, Yilong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (04) : 2133 - 2144
  • [37] Predicting human protein function with multi-task deep neural networks
    Fa, Rui
    Cozzetto, Domenico
    Wan, Cen
    Jones, David T.
    PLOS ONE, 2018, 13 (06):
  • [38] MULTI-TASK LEARNING IN DEEP NEURAL NETWORKS FOR IMPROVED PHONEME RECOGNITION
    Seltzer, Michael L.
    Droppo, Jasha
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 6965 - 6969
  • [39] Improving generalization ability of neural networks ensemble with multi-task learning
    State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
Unknown
J. Comput. Inf. Syst., 2006, 4: 1235-1240
  • [40] Upper gastrointestinal anatomy detection with multi-task convolutional neural networks
    Zhang Xu
    Yu Tao
    Zheng Wenfang
    Lin Ne
    Huang Zhengxing
    Liu Jiquan
    Hu Weiling
    Duan Huilong
    Si Jianmin
    HEALTHCARE TECHNOLOGY LETTERS, 2019, 6 (06) : 176 - 180