Self-Attentive Attributed Network Embedding Through Adversarial Learning

Cited by: 8
Authors
Yu, Wenchao [1 ]
Cheng, Wei [1 ]
Aggarwal, Charu [2 ]
Zong, Bo [1 ]
Chen, Haifeng [1 ]
Wang, Wei [3 ]
Affiliations
[1] NEC Labs Amer Inc, Princeton, NJ 08540 USA
[2] IBM Res AI, Yorktown Hts, NY USA
[3] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Keywords
network embedding; attributed network; deep embedding; generative adversarial networks; self-attention;
DOI
10.1109/ICDM.2019.00086
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Network embedding aims to learn low-dimensional representations (embeddings) of vertices that preserve the structure and inherent properties of networks. The resultant embeddings are beneficial to downstream tasks such as vertex classification and link prediction. A vast majority of real-world networks are coupled with a rich set of vertex attributes, which can be complementary for learning better embeddings. Existing attributed network embedding models, with shallow or deep architectures, typically seek to match the representations in the topology space and the attribute space for each individual vertex by assuming that samples from the two spaces are drawn uniformly. This assumption, however, can hardly be guaranteed in practice. Due to the intrinsic sparsity of sampled vertex sequences and the incompleteness of vertex attributes, a discrepancy between the attribute space and the network topology space inevitably exists. Furthermore, the interactions among vertex attributes, a.k.a. cross features, have been largely ignored by existing approaches. To address these issues, in this paper we propose NETTENTION, a self-attentive network embedding approach that can efficiently learn vertex embeddings on attributed networks. Instead of sample-wise optimization, NETTENTION aggregates the two types of information by minimizing the difference between the representation distributions in the low-dimensional topology and attribute spaces. The joint inference is encapsulated in a generative adversarial training process, yielding better generalization performance and robustness. The learned distributions satisfy both locality-preserving and global reconstruction constraints, which are inferred from the learning of adversarially regularized autoencoders. Additionally, a multi-head self-attention module is developed to explicitly model attribute interactions. Extensive experiments on benchmark datasets have verified the effectiveness of the proposed NETTENTION model on a variety of tasks, including vertex classification and link prediction.
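The abstract describes two concrete mechanisms: a multi-head self-attention module over vertex attributes that models attribute interactions (cross features), and an adversarial objective that aligns the distribution of attribute-space embeddings with the distribution of topology-space embeddings rather than matching the two spaces vertex by vertex. The sketch below illustrates both ideas in PyTorch. It is not the authors' implementation; all module names, dimensions, and the single training step shown here are assumptions made purely for illustration.

    import torch
    import torch.nn as nn


    class AttributeSelfAttentionEncoder(nn.Module):
        """Embeds each attribute field, lets fields attend to each other, then pools.
        Illustrative stand-in for a multi-head self-attention module over attributes."""

        def __init__(self, num_fields, field_dim=32, num_heads=4, embed_dim=128):
            super().__init__()
            self.field_proj = nn.Linear(1, field_dim)        # scalar attribute -> field vector
            self.attn = nn.MultiheadAttention(field_dim, num_heads, batch_first=True)
            self.out = nn.Linear(num_fields * field_dim, embed_dim)

        def forward(self, x):                                 # x: (batch, num_fields)
            fields = self.field_proj(x.unsqueeze(-1))         # (batch, num_fields, field_dim)
            attended, _ = self.attn(fields, fields, fields)   # attribute cross-feature interactions
            return self.out(attended.flatten(1))              # (batch, embed_dim)


    class Discriminator(nn.Module):
        """Tells topology-space embeddings apart from attribute-space embeddings."""

        def __init__(self, embed_dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, z):
            return self.net(z)


    def adversarial_step(disc, z_topo, z_attr):
        """One adversarial step: push the *distribution* of attribute embeddings toward
        the distribution of topology embeddings, instead of forcing z_topo[i] == z_attr[i]."""
        bce = nn.BCEWithLogitsLoss()
        real, fake = disc(z_topo.detach()), disc(z_attr.detach())
        d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
        g_loss = bce(disc(z_attr), torch.ones_like(fake))     # encoder tries to fool the discriminator
        return d_loss, g_loss


    # Toy usage: 256 vertices with 50 attribute fields; z_topo stands in for embeddings
    # produced by a separate topology autoencoder (not shown).
    encoder, disc = AttributeSelfAttentionEncoder(num_fields=50), Discriminator()
    z_attr = encoder(torch.rand(256, 50))
    z_topo = torch.randn(256, 128)
    d_loss, g_loss = adversarial_step(disc, z_topo, z_attr)

In a full pipeline, z_topo would come from a topology autoencoder trained with locality-preserving and reconstruction losses, and the discriminator and encoder updates would alternate with those reconstruction objectives; the snippet only shows how distribution-level alignment differs from per-vertex matching.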
Pages: 758 - 767
Page count: 10
Related Papers
50 records in total (items [21] - [30] shown)
  • [21] Analysis of Sentiment on Movie Reviews Using Word Embedding Self-Attentive LSTM
    Sivakumar, Soubraylu
    Rajalakshmi, Ratnavel
    INTERNATIONAL JOURNAL OF AMBIENT COMPUTING AND INTELLIGENCE, 2021, 12 (02) : 33 - 52
  • [22] On the Robustness of Self-Attentive Models
    Hsieh, Yu-Lun
    Cheng, Minhao
    Juan, Da-Cheng
    Wei, Wei
    Hsu, Wen-Lian
    Hsieh, Cho-Jui
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 1520 - 1529
  • [23] Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics
    Ng, Tiong-Sik
    Chai, Jacky Chen Long
    Low, Cheng-Yaw
    Teoh, Andrew Beng Jin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3251 - 3264
  • [24] Interactive Self-Attentive Siamese Network for Biomedical Sentence Similarity
    Li, Zhengguang
    Lin, Hongfei
    Zheng, Wei
    Tadesse, Michael M.
    Yang, Zhihao
    Wang, Jian
    IEEE ACCESS, 2020, 8 (08) : 84093 - 84104
  • [26] Denoising Self-Attentive Sequential Recommendation
    Chen, Huiyuan
    Lin, Yusan
    Pan, Menghai
    Wang, Lan
    Yeh, Chin-Chia Michael
    Li, Xiaoting
    Zheng, Yan
    Wang, Fei
    Yang, Hao
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022, : 92 - 101
  • [27] SAED: self-attentive energy disaggregation
    Virtsionis-Gkalinikis, Nikolaos
    Nalmpantis, Christoforos
    Vrakas, Dimitris
    MACHINE LEARNING, 2023, 112 (11) : 4081 - 4100
  • [29] AUTOMATED ANALYSIS OF CHANGES IN PRIVACY POLICIES: A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING APPROACH
    Lin, Fangyu
    Samtani, Sagar
    Zhu, Hongyi
    Brandimarte, Laura
    Chen, Hsinchun
    MIS QUARTERLY, 2024, 48 (04) : 1453 - 1482
  • [30] Self-attentive Pyramid Network for Single Image De-raining
    Guo, Taian
    Dai, Tao
    Li, Jiawei
    Xia, Shu-Tao
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT I, 2019, 11953 : 390 - 401