Learning embeddings for multiplex networks using triplet loss

Cited by: 0
Authors
Seyedsaeed Hajiseyedjavadi
Yu-Ru Lin
Konstantinos Pelechrinis
Affiliations
[1] University of Pittsburgh,
Keywords
Multiplex network; Network embedding; Triplet loss
DOI: Not available
Abstract
Learning low-dimensional representations of graphs has enabled the use of traditional machine learning techniques for classic network analysis tasks such as link prediction, node classification, and community detection. However, to date, the vast majority of these methods focus on traditional single-layer/unimodal networks and largely ignore the case of multiplex networks. A multiplex network is a suitable structure for modeling multi-dimensional real-world complex systems: it consists of multiple layers, where each layer represents a different type of relationship among the same set of nodes. In this work, we propose MUNEM, a novel approach for learning a low-dimensional representation of a multiplex network using a triplet loss objective function. Our approach preserves the global structure of each layer while fusing knowledge among the different layers during the learning process. We evaluate the effectiveness of the proposed method on real-world multiplex networks from different domains, including a collaboration network, a protein-protein interaction network, and an online social network. Finally, to systematically examine the effect of the model's parameters, we conduct extensive experiments on synthetic multiplex networks.
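The core idea in the abstract is a triplet-loss objective over node embeddings. As a rough illustration only (not the authors' MUNEM implementation, whose triple sampling and layer-fusion details are defined in the paper), the PyTorch sketch below trains a single embedding table, shared across all layers, on (anchor, positive, negative) node triples; drawing positives from each layer against that shared table is one plausible way to fuse knowledge among layers. The class name, margin, and random triples are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TripletNodeEmbedding(nn.Module):
    """Hypothetical sketch: one embedding table shared by all layers,
    trained with a margin-based triplet loss."""
    def __init__(self, num_nodes: int, dim: int = 128, margin: float = 1.0):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, dim)
        self.loss_fn = nn.TripletMarginLoss(margin=margin)  # margin value is an assumption

    def forward(self, anchor, positive, negative):
        # anchor/positive: nodes that are close in some layer (e.g. frequent
        # co-occurrence on that layer's random walks); negative: a distant node.
        # Because triples from every layer update the same table, proximity
        # information from all layers is fused into one embedding per node.
        return self.loss_fn(self.embed(anchor),
                            self.embed(positive),
                            self.embed(negative))

model = TripletNodeEmbedding(num_nodes=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative optimization step on random triples; real triples would
# come from per-layer proximity sampling, which the paper specifies.
a, p, n = (torch.randint(0, 1000, (64,)) for _ in range(3))
opt.zero_grad()
loss = model(a, p, n)
loss.backward()
opt.step()
```

The triplet loss pushes each anchor at least a margin closer to its positive than to its negative, so that distance in the embedding space comes to reflect proximity in the multi-layer graph.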
Related papers (items 31-40 of 50)
  • [31] Weaving Layers of Learning: Multiplex Learning Networks in the Workplace
    Yoo, Sangok
    Turner, John
    Nimon, Kim
    Adepoju, Bisola
    [J]. HUMAN RESOURCE DEVELOPMENT REVIEW, 2024, 23 (01) : 121 - 146
  • [32] Task Embeddings: Learning Query Embeddings using Task Context
    Mehrotra, Rishabh
    Yilmaz, Emine
    [J]. CIKM'17: PROCEEDINGS OF THE 2017 ACM CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2017, : 2199 - 2202
  • [33] An Unsupervised Neural Prediction Framework for Learning Speaker Embeddings using Recurrent Neural Networks
    Jati, Arindam
    Georgiou, Panayiotis
    [J]. 19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 1131 - 1135
  • [34] Different triplet sampling techniques for lossless triplet loss on metric similarity learning
    Kertesz, Gabor
    [J]. 2021 IEEE 19TH WORLD SYMPOSIUM ON APPLIED MACHINE INTELLIGENCE AND INFORMATICS (SAMI 2021), 2021, : 449 - 453
  • [35] HiWalk: Learning node embeddings from heterogeneous networks
    Bai, Jie
    Li, Linjing
    Zeng, Daniel
    [J]. INFORMATION SYSTEMS, 2019, 81 : 82 - 91
  • [36] LEARNING CONVOLUTIONAL NEURAL NETWORKS WITH DEEP PART EMBEDDINGS
    Gupta, Nitin
    Mujumdar, Shashank
    Agarwal, Prerna
    Jain, Abhinav
    Mehta, Sameep
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2037 - 2041
  • [37] Fusion Strategies for Learning User Embeddings with Neural Networks
    Blandfort, Philipp
    Karayil, Tushar
    Raue, Federico
    Hees, Joern
    Dengel, Andreas
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [38] FILDNE: A Framework for Incremental Learning of Dynamic Networks Embeddings
    Bielak, Piotr
    Tagowski, Kamil
    Falkiewicz, Maciej
    Kajdanowicz, Tomasz
    Chawla, Nitesh V.
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 236
  • [39] Learning Local Image Descriptors with Deep Siamese and Triplet Convolutional Networks by Minimizing Global Loss Functions
    Kumar, Vijay B. G.
    Carneiro, Gustavo
    Reid, Ian
    [J]. 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 5385 - 5394
  • [40] A Temporal Coherence Loss Function for Learning Unsupervised Acoustic Embeddings
    Synnaeve, Gabriel
    Dupoux, Emmanuel
    [J]. SLTU-2016 5TH WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGIES FOR UNDER-RESOURCED LANGUAGES, 2016, 81 : 95 - 100