Boosting semi-supervised network representation learning with pseudo-multitasking

Cited by: 0
Authors
Biao Wang
Zhen Dai
Deshun Kong
Lanlan Yu
Jin Zheng
Ping Li
Institutions
[1] Southwest Petroleum University, Center for Intelligent and Networked Systems, School of Computer Science
[2] Chinese Research Academy of Environmental Sciences, Institute of Environmental Information
[3] Mianyang Subbranch of Industrial and Commercial Bank of China, School of Information
[4] Huawei Nanjing Research Institute
[5] Southwest Petroleum University
Source
Applied Intelligence | 2022, Vol. 52
Keywords
Multi-task model; Semi-supervised; Network representation learning; Node classification
DOI
Not available
CLC Number
Subject Classification Code
Abstract
Semi-supervised network representation learning is becoming a hotspot in the graph mining community; it aims to learn low-dimensional vector representations of vertices using partial label information. In particular, graph neural networks integrate structural information with side information such as vertex attributes to learn node representations. Although existing semi-supervised graph learning performs well on limited labeled data, it is still often hampered when the labeled set is very small. To mitigate this issue, we propose PMNRL, a pseudo-multitask learning framework for semi-supervised network representation learning that boosts the expressive power of graph networks such as vanilla GCN (Graph Convolutional Networks) and GAT (Graph Attention Networks). In PMNRL, by leveraging the community structure of networks, we create a pseudo task that classifies nodes’ community affiliations and conduct joint learning of the two tasks (i.e., the original task and the pseudo task). The proposed scheme exploits the inherent connection between structural proximity and label similarity to improve performance without resorting to more labels. The framework is implemented in two ways: a two-stage method and an end-to-end method. In the two-stage method, communities are first detected and the community affiliations are then used as “labels” alongside the original labels to train the joint model. In the end-to-end method, unsupervised community learning is combined with the representation learning process through shared layers and task-specific layers, so as to learn common features and task-specific features simultaneously. Experimental results on three real-world benchmark networks demonstrate that our framework improves the vanilla models without any additional labels, especially when labels are very scarce.
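The abstract describes a shared-encoder, two-head architecture trained with a joint loss over the original label task and a community pseudo task. Below is a minimal sketch of that idea, assuming a plain PyTorch GCN encoder and the two-stage setup in which community affiliations come from a prior community detection step (e.g., Louvain); it is not the authors' implementation, and the names GCNLayer, PseudoMultiTaskGCN, joint_loss, and the loss weight lam are illustrative assumptions.

# Minimal sketch (assumed structure, not the authors' code) of the two-stage
# PMNRL idea: a shared GCN layer feeds two task-specific heads, one for the
# original label task and one for a pseudo task predicting community ids.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    # One graph convolution: H' = A_hat @ H @ W, with A_hat the normalized adjacency.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return a_hat @ self.linear(h)

class PseudoMultiTaskGCN(nn.Module):
    # Shared representation layer plus two heads: label task and community pseudo task.
    def __init__(self, in_dim, hid_dim, n_labels, n_communities):
        super().__init__()
        self.shared = GCNLayer(in_dim, hid_dim)
        self.label_head = GCNLayer(hid_dim, n_labels)
        self.comm_head = GCNLayer(hid_dim, n_communities)

    def forward(self, a_hat, x):
        h = F.relu(self.shared(a_hat, x))
        return self.label_head(a_hat, h), self.comm_head(a_hat, h)

def joint_loss(label_logits, comm_logits, y_label, y_comm, labeled_mask, lam=0.5):
    # Supervised loss on the few labeled nodes plus a weighted pseudo-task loss on all nodes.
    task_loss = F.cross_entropy(label_logits[labeled_mask], y_label[labeled_mask])
    pseudo_loss = F.cross_entropy(comm_logits, y_comm)  # community ids act as free "labels"
    return task_loss + lam * pseudo_loss

Here a_hat would be the symmetrically normalized adjacency of the graph with self-loops, y_comm the community ids produced by the detection step, and labeled_mask the boolean mask of the few labeled nodes; in the end-to-end variant described in the abstract, the cross-entropy on y_comm would be replaced by an unsupervised community-learning objective trained jointly with the shared layers.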
Pages: 8118-8133
Number of pages: 15
Related Papers
50 items in total
  • [11] Boosting semi-supervised learning under imbalanced regression via pseudo-labeling
    Zong, Nannan
    Su, Songzhi
    Zhou, Changle
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (19):
  • [12] The Network Representation Learning Algorithm Based on Semi-Supervised Random Walk
    Liu, Dong
    Li, Qinpeng
    Ru, Yan
    Zhang, Jun
    IEEE ACCESS, 2020, 8 : 222956 - 222965
  • [13] On semi-supervised multiple representation behavior learning
    Lu, Ruqian
    Hou, Shengluan
    JOURNAL OF COMPUTATIONAL SCIENCE, 2020, 46
  • [14] Semi-supervised boosting using visual similarity learning
    Leistner, Christian
    Grabner, Helmut
    Bischof, Horst
    2008 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-12, 2008, : 2237 - 2244
  • [15] MSSBoost: A new multiclass boosting to semi-supervised learning
    Tanha, Jafar
    NEUROCOMPUTING, 2018, 314 : 251 - 266
  • [16] Boosting semi-supervised learning with Contrastive Complementary Labeling
    Deng, Qinyi
    Guo, Yong
    Yang, Zhibang
    Pan, Haolin
    Chen, Jian
    NEURAL NETWORKS, 2024, 170 : 417 - 426
  • [17] Multiclass Semi-Supervised Boosting Using Similarity Learning
    Tanha, Jafar
    Saberian, Mohammad Javad
    van Someren, Maarten
    2013 IEEE 13TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2013, : 1205 - 1210
  • [18] Semi-supervised Deep Network Representation with Text Information
    Ming, Xinchun
    Hu, Fangyu
    2017 12TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS AND KNOWLEDGE ENGINEERING (IEEE ISKE), 2017,
  • [19] BoostMIS: Boosting Medical Image Semi-supervised Learning with Adaptive Pseudo Labeling and Informative Active Annotation
    Zhang, Wenqiao
    Zhu, Lei
    Hallinan, James
    Zhang, Shengyu
    Makmur, Andrew
    Cai, Qingpeng
    Ooi, Beng Chin
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 20634 - 20644
  • [20] Boosting Semi-Supervised Learning by Exploiting All Unlabeled Data
    Chen, Yuhao
    Tan, Xin
    Zhao, Borui
    Chen, Zhaowei
    Song, Renjie
    Liang, Jiajun
    Lu, Xuequan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 7548 - 7557