Multitask Representation Learning With Multiview Graph Convolutional Networks

Cited by: 20
Authors
Huang, Hong [1 ,2 ,3 ,4 ]
Song, Yu [1 ,2 ,3 ,4 ]
Wu, Yao [1 ,2 ,3 ,4 ]
Shi, Jia [1 ,2 ,3 ,4 ]
Xie, Xia [1 ,2 ,3 ,4 ]
Jin, Hai [1 ,2 ,3 ,4 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Natl Engn Res Ctr Big Data Technol, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Serv Comp Technol & Syst Lab, Wuhan 430074, Peoples R China
[3] Huazhong Univ Sci & Technol, Cluster & Grid Comp Lab, Wuhan 430074, Peoples R China
[4] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Data mining; Data models; Correlation; Predictive models; Collaboration; Training data; graph neural networks (GNNs); multitask learning; representation learning;
DOI
10.1109/TNNLS.2020.3036825
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Number
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Link prediction and node classification are two important downstream tasks of network representation learning. Existing methods achieve acceptable results, but they perform the two tasks separately, which duplicates effort and ignores the correlations between tasks. Moreover, conventional models treat the information of multiple views identically and thus fail to learn robust representations for downstream tasks. To this end, we tackle link prediction and node classification simultaneously via multitask multiview learning in this article. We first explain the feasibility and advantages of multitask multiview learning for these two tasks. We then propose a novel model, MT-MVGCN, that performs link prediction and node classification simultaneously. More specifically, we design a multiview graph convolutional network to extract the abundant information of multiple views in a network, which is shared by the different tasks. We further apply two attention mechanisms, a view attention mechanism and a task attention mechanism, to let views and tasks adjust the view fusion process. Moreover, view reconstruction can be introduced as an auxiliary task to boost the performance of the proposed model. Experiments on real-world network data sets demonstrate that our model is efficient yet effective and outperforms advanced baselines on these two tasks.
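The abstract's core fusion idea can be illustrated with a minimal sketch: per-view node embeddings are scored by a learned attention vector and combined with softmax weights. This is an assumed, simplified NumPy illustration of attention-based view fusion in general, not the authors' MT-MVGCN implementation; the function and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def view_attention_fusion(view_embeddings, attn_vector):
    """Fuse per-view node embeddings with a view attention mechanism.

    view_embeddings: shape (V, N, d) -- one embedding matrix per view
    attn_vector: shape (d,) -- scoring vector (would be learned; random here)
    Returns fused embeddings of shape (N, d) and weights of shape (V, N).
    """
    scores = view_embeddings @ attn_vector                       # (V, N)
    weights = softmax(scores, axis=0)                            # normalize over views
    fused = (weights[..., None] * view_embeddings).sum(axis=0)   # (N, d)
    return fused, weights

rng = np.random.default_rng(0)
views = rng.normal(size=(3, 5, 8))   # 3 views, 5 nodes, 8-dim embeddings
a = rng.normal(size=8)
fused, w = view_attention_fusion(views, a)
print(fused.shape)                       # (5, 8)
print(np.allclose(w.sum(axis=0), 1.0))   # True: weights sum to 1 per node
```

The fused embedding matrix could then feed task-specific heads (e.g., for link prediction and node classification), matching the shared-representation, multitask setup the abstract describes.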
Pages: 983-995 (13 pages)