Self-Attentive Attributed Network Embedding Through Adversarial Learning

Cited by: 8
Authors
Yu, Wenchao [1]
Cheng, Wei [1]
Aggarwal, Charu [2]
Zong, Bo [1]
Chen, Haifeng [1]
Wang, Wei [3]
Affiliations
[1] NEC Labs Amer Inc, Princeton, NJ 08540 USA
[2] IBM Res AI, Yorktown Hts, NY USA
[3] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Keywords
network embedding; attributed network; deep embedding; generative adversarial networks; self-attention
DOI
10.1109/ICDM.2019.00086
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Network embedding aims to learn low-dimensional representations (embeddings) of vertices that preserve the structure and inherent properties of a network. The resulting embeddings benefit downstream tasks such as vertex classification and link prediction. The vast majority of real-world networks also carry a rich set of vertex attributes, which can be complementary for learning better embeddings. Existing attributed network embedding models, whether shallow or deep, typically seek to match the representations in the topology space and the attribute space for each individual vertex, assuming that samples from the two spaces are drawn uniformly. This assumption can hardly be guaranteed in practice: owing to the intrinsic sparsity of sampled vertex sequences and the incompleteness of vertex attributes, a discrepancy between the attribute space and the network topology space inevitably exists. Furthermore, the interactions among vertex attributes, a.k.a. cross features, have been largely ignored by existing approaches. To address these issues, we propose NETTENTION, a self-attentive network embedding approach that efficiently learns vertex embeddings on attributed networks. Instead of sample-wise optimization, NETTENTION aggregates the two types of information by minimizing the difference between the representation distributions in the low-dimensional topology and attribute spaces. The joint inference is encapsulated in a generative adversarial training process, yielding better generalization performance and robustness. The learned distributions satisfy both locality-preserving and global reconstruction constraints, which are inferred through adversarially regularized autoencoders. Additionally, a multi-head self-attention module explicitly models the attribute interactions. Extensive experiments on benchmark datasets verify the effectiveness of NETTENTION on a variety of tasks, including vertex classification and link prediction.
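The abstract names two concrete mechanisms: a multi-head self-attention module over vertex attributes that captures cross features, and an adversarial process that matches the distributions of topology-space and attribute-space embeddings rather than matching vertices one by one. The following is a minimal PyTorch sketch of those two ideas, not the authors' released implementation; all module names, dimensions, and the training step are illustrative assumptions.

```python
# Hypothetical sketch of the two mechanisms described in the abstract:
# (1) multi-head self-attention over attribute fields (cross features), and
# (2) a discriminator that aligns the *distributions* of topology-space and
#     attribute-space embeddings adversarially, not per-vertex pairs.
import torch
import torch.nn as nn


class AttributeSelfAttention(nn.Module):
    """Treats a vertex's attributes as a sequence of fields and lets
    multi-head self-attention model their pairwise interactions."""

    def __init__(self, num_fields: int, field_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(field_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(num_fields * field_dim, field_dim)

    def forward(self, x):                 # x: (batch, num_fields, field_dim)
        h, _ = self.attn(x, x, x)         # attribute cross features
        return self.proj(h.flatten(1))    # (batch, field_dim) embedding


class Discriminator(nn.Module):
    """Scores whether an embedding came from the topology encoder or the
    attribute encoder; adversarial training pushes the two
    representation distributions together."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, z):
        return self.net(z)


bce = nn.BCEWithLogitsLoss()


def discriminator_step(disc, z_topology, z_attribute):
    """One (hypothetical) discriminator update: separate the two
    embedding distributions; a generator step (not shown) would then
    update the encoders to fool the discriminator."""
    real = bce(disc(z_topology.detach()), torch.ones(len(z_topology), 1))
    fake = bce(disc(z_attribute.detach()), torch.zeros(len(z_attribute), 1))
    return real + fake


# Example usage with made-up sizes: 8 attribute fields of dimension 64.
enc = AttributeSelfAttention(num_fields=8, field_dim=64)
disc = Discriminator(dim=64)
z_attr = enc(torch.randn(32, 8, 64))      # attribute-space embeddings
z_topo = torch.randn(32, 64)              # stand-in for topology embeddings
loss_d = discriminator_step(disc, z_topo, z_attr)
```

Detaching both embedding batches in `discriminator_step` updates only the discriminator; the corresponding encoder update is what makes the matching distribution-wise instead of sample-wise, which is the paper's stated departure from per-vertex matching.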
Pages: 758 - 767
Number of pages: 10