Self-Attentive Attributed Network Embedding Through Adversarial Learning

Cited by: 8
Authors
Yu, Wenchao [1 ]
Cheng, Wei [1 ]
Aggarwal, Charu [2 ]
Zong, Bo [1 ]
Chen, Haifeng [1 ]
Wang, Wei [3 ]
Affiliations
[1] NEC Labs Amer Inc, Princeton, NJ 08540 USA
[2] IBM Res AI, Yorktown Hts, NY USA
[3] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Keywords
network embedding; attributed network; deep embedding; generative adversarial networks; self-attention;
DOI
10.1109/ICDM.2019.00086
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Network embedding aims to learn low-dimensional representations (embeddings) of vertices that preserve the structure and inherent properties of the network. The resulting embeddings benefit downstream tasks such as vertex classification and link prediction. The vast majority of real-world networks are coupled with a rich set of vertex attributes, which can be complementary in learning better embeddings. Existing attributed network embedding models, whether shallow or deep, typically seek to match the representations in topology space and attribute space for each individual vertex by assuming that samples from the two spaces are drawn uniformly. This assumption, however, can hardly be guaranteed in practice. Due to the intrinsic sparsity of sampled vertex sequences and the incompleteness of vertex attributes, a discrepancy between the attribute space and the network topology space inevitably exists. Furthermore, the interactions among vertex attributes, a.k.a. cross features, have been largely ignored by existing approaches. To address these issues, we propose NETTENTION, a self-attentive network embedding approach that can efficiently learn vertex embeddings on attributed networks. Instead of sample-wise optimization, NETTENTION aggregates the two types of information by minimizing the difference between the representation distributions in the low-dimensional topology and attribute spaces. The joint inference is encapsulated in a generative adversarial training process, yielding better generalization performance and robustness. The learned distributions satisfy both locality-preserving and global reconstruction constraints, which can be inferred from the learning of adversarially regularized autoencoders. Additionally, a multi-head self-attention module is developed to explicitly model attribute interactions. Extensive experiments on benchmark datasets verify the effectiveness of the proposed NETTENTION model on a variety of tasks, including vertex classification and link prediction.
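The abstract names three ingredients: multi-head self-attention over vertex attributes to capture cross features, autoencoders supplying a reconstruction constraint, and adversarial training that matches the topology-space and attribute-space embedding distributions. The sketch below is a minimal, hedged illustration of how those pieces could fit together in PyTorch; all module names, layer sizes, and the loss wiring are assumptions for illustration only, not the authors' released implementation (the locality-preserving term and the full training loop are omitted).

```python
# Minimal sketch (assumed architecture, not the paper's code):
# attribute self-attention + two autoencoders + a discriminator that
# adversarially matches the two latent (embedding) distributions.
import torch
import torch.nn as nn


class AttributeSelfAttention(nn.Module):
    """Multi-head self-attention over a vertex's attribute fields (cross features)."""
    def __init__(self, num_fields, dim, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(1, dim)                 # embed each scalar attribute field
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out = nn.Linear(num_fields * dim, dim)

    def forward(self, x):                              # x: (batch, num_fields)
        h = self.embed(x.unsqueeze(-1))                # (batch, num_fields, dim)
        h, _ = self.attn(h, h, h)                      # attribute-attribute interactions
        return self.out(h.flatten(1))                  # (batch, dim) attribute embedding


class Autoencoder(nn.Module):
    """Encoder/decoder pair; the decoder supplies the global reconstruction term."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))
        self.dec = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)


class Discriminator(nn.Module):
    """Scores latent codes; training the encoders to fool it pushes the
    topology-space and attribute-space code distributions toward each other."""
    def __init__(self, emb_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)


# Toy dimensions and random inputs purely for illustration; real inputs would be
# context vectors from sampled vertex sequences and the raw attribute vectors.
topo_dim, attr_fields, emb_dim = 64, 16, 32
topo_ae = Autoencoder(topo_dim, emb_dim)
attr_attn = AttributeSelfAttention(attr_fields, emb_dim)
attr_ae = Autoencoder(emb_dim, emb_dim)
disc = Discriminator(emb_dim)

x_topo = torch.randn(8, topo_dim)                      # topology-side input (assumed)
x_attr = torch.randn(8, attr_fields)                   # attribute-side input (assumed)

h_attr = attr_attn(x_attr)                             # attribute embedding with cross features
z_t, rec_t = topo_ae(x_topo)
z_a, rec_a = attr_ae(h_attr)

# Global reconstruction constraint (the locality-preserving term is omitted here).
recon_loss = nn.functional.mse_loss(rec_t, x_topo) + nn.functional.mse_loss(rec_a, h_attr)

# Adversarial distribution matching: the discriminator separates the two code
# distributions; the encoders would be updated with the opposite objective.
bce = nn.functional.binary_cross_entropy_with_logits
disc_loss = bce(disc(z_t), torch.ones(8, 1)) + bce(disc(z_a), torch.zeros(8, 1))

print(f"reconstruction loss: {recon_loss.item():.3f}, discriminator loss: {disc_loss.item():.3f}")
```

In a full training loop one would alternate discriminator updates with encoder/decoder updates (the encoders trained to maximize the discriminator's confusion), which is the standard adversarial-regularization recipe the abstract alludes to.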
Pages: 758-767
Number of pages: 10
Related papers (50 in total)
  • [1] Sequential Recommendation with Self-Attentive Multi-Adversarial Network
    Ren, Ruiyang
    Liu, Zhaoyang
    Li, Yaliang
    Zhao, Wayne Xin
    Wang, Hui
    Ding, Bolin
    Wen, Ji-Rong
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 89 - 98
  • [2] A SELF-ATTENTIVE EMOTION RECOGNITION NETWORK
    Partaourides, Harris
    Papadamou, Kostantinos
    Kourtellis, Nicolas
    Leontiades, Ilias
    Chatzis, Sotirios
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7199 - 7203
  • [3] SELF-ATTENTIVE SENTIMENTAL SENTENCE EMBEDDING FOR SENTIMENT ANALYSIS
    Lin, Sheng-Chieh
    Su, Wen-Yuh
    Chien, Po-Chuan
    Tsai, Ming-Feng
    Wang, Chuan-Ju
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1678 - 1682
  • [4] Learning Dynamic Graph Embedding for Traffic Flow Forecasting: A Graph Self-Attentive Method
    Kang, Zifeng
    Xu, Hanwen
    Hu, Jianming
    Pei, Xin
    2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 2570 - 2576
  • [5] Self-Attentive Pooling for Efficient Deep Learning
    Chen, Fang
    Datta, Gourav
    Kundu, Souvik
    Beerel, Peter A.
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 3963 - 3972
  • [6] Adversarial enhanced attributed network embedding
    Chen, Lei
    Li, Yuan
    Deng, Xingye
    Liu, Canwei
    He, Tingqin
    Xiao, Ruifeng
    KNOWLEDGE AND INFORMATION SYSTEMS, 2024, 66 (02) : 1301 - 1336
  • [7] SAIN: Self-Attentive Integration Network for Recommendation
    Yun, Seoungjun
    Kim, Raehyun
    Ko, Miyoung
    Kang, Jaewoo
    PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, : 1205 - 1208
  • [8] Self-Attentive Generative Adversarial Network for Cloud Detection in High Resolution Remote Sensing Images
    Wu, Zhaocong
    Li, Jun
    Wang, Yisong
    Hu, Zhongwen
    Molinier, Matthieu
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2020, 17 (10) : 1792 - 1796
  • [9] Adversarial Learning to Compare: Self-Attentive Prospective Customer Recommendation in Location based Social Networks
    Li, Ruirui
    Wu, Xian
    Wang, Wei
    PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020, : 349 - 357