Maximizing Mutual Information Across Feature and Topology Views for Representing Graphs

Cited by: 6
Authors
Fan, Xiaolong [1 ]
Gong, Maoguo [1 ]
Wu, Yue [2 ]
Li, Hao [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Key Lab Collaborat Intelligence Syst, Minist Educ, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Comp Sci & Technol, Key Lab Collaborat Intelligence Syst, Minist Educ, Xian, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Mutual information; Topology; Representation learning; Network topology; Graph neural networks; Task analysis; Message passing; Graph mining; graph neural network; graph representation learning; mutual information maximization;
DOI
10.1109/TKDE.2023.3264512
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, maximizing mutual information has emerged as a powerful tool for unsupervised graph representation learning. Existing methods are typically effective in capturing graph information from the topology view but consistently ignore the node feature view. To circumvent this problem, we propose a novel method by exploiting mutual information maximization across feature and topology views. Specifically, we first construct the feature graph to capture the underlying structure of nodes in feature spaces by measuring the distance between pairs of nodes. Then we use a cross-view representation learning module to capture both local and global information content across feature and topology views on graphs. To model the information shared by the feature and topology spaces, we develop a common representation learning module by using mutual information maximization and reconstruction loss minimization. Here, minimizing reconstruction loss forces the model to learn the shared information of feature and topology spaces. To explicitly encourage diversity between graph representations from the same view, we also introduce a disagreement regularization to enlarge the distance between representations from the same view. Experiments on synthetic and real-world datasets demonstrate the effectiveness of integrating feature and topology views. In particular, compared with the previous supervised methods, the proposed method achieves comparable or even better performance under the unsupervised representation and linear evaluation protocol.
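The abstract's first step, building a feature graph by measuring distances between pairs of node feature vectors, is in essence a k-nearest-neighbor graph over the feature space. The sketch below is an illustrative reconstruction of that step only, not the authors' implementation; the function name, the choice of cosine similarity, and the value of k are assumptions for the example.

```python
import numpy as np

def knn_feature_graph(X, k=3):
    """Build a k-nearest-neighbor graph over node features.

    Each node is linked to the k other nodes whose feature vectors
    are most similar under cosine similarity (an assumed metric;
    any pairwise distance could be substituted). Returns a
    symmetric binary adjacency matrix with no self-loops.
    """
    # Row-normalize so that the dot product equals cosine similarity.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)  # exclude self-similarity

    n = X.shape[0]
    A = np.zeros((n, n))
    # Keep the k highest-similarity neighbors of each node.
    idx = np.argsort(-S, axis=1)[:, :k]
    rows = np.repeat(np.arange(n), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)  # symmetrize: keep edge if either endpoint picked it

# Toy example: 5 nodes with 4-dimensional features.
X = np.random.default_rng(0).normal(size=(5, 4))
A = knn_feature_graph(X, k=2)
```

The resulting adjacency matrix plays the role of the "topology" of the feature view, so the same message-passing encoder can be run on both the original graph and this feature graph before the cross-view mutual-information objectives are applied.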
Pages: 10735-10747
Page count: 13