Self-supervised end-to-end graph local clustering

Cited by: 0
Author: Zhe Yuan
Affiliation: School of Information, Renmin University of China
Source: World Wide Web | 2023, Vol. 26, pp. 1157-1179
Keywords: Graph local clustering; Self-supervised learning; Generalized PageRank; Conductance
Abstract
Graph clustering is a central and fundamental problem in numerous graph mining applications, especially in spatial-temporal systems. The goal of graph local clustering is to find a set of nodes (a cluster) that contains a given seed node and has high internal density. A series of works address this problem by carefully designing the quality metric and by improving the efficiency-effectiveness trade-off; however, they are unable to provide a satisfying guarantee on clustering quality. In this paper, we investigate the graph local clustering task and propose an end-to-end framework, LearnedNibble, to address this limitation. In particular, we propose several techniques: a practical self-supervised training scheme built on a differentiable soft-mean-sweep operator, an effective optimization method based on the regradient technique, and a scalable inference procedure that combines the Approximate Graph Propagation (AGP) paradigm with a search-selective method. To the best of our knowledge, LearnedNibble is the first attempt to take responsibility for cluster quality while accounting for both effectiveness and efficiency in an end-to-end, self-supervised paradigm. Extensive experiments on real-world datasets demonstrate the clustering capacity, generalization ability, and approximation compatibility of LearnedNibble.
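
For context, the Conductance keyword refers to the standard quality measure in this line of work: for a cluster S in a graph G = (V, E), phi(S) = cut(S, V\S) / min(vol(S), vol(V\S)), where cut(S, V\S) counts edges leaving S and vol(S) sums the degrees of the nodes in S; lower is better. The sketch below illustrates the classic Nibble-style pipeline that this family of methods starts from, a personalized PageRank diffusion from the seed followed by a conductance sweep cut. It is a minimal baseline under stated assumptions, not LearnedNibble itself, and all function names are hypothetical.

    # Illustrative sketch: seeded local clustering via personalized
    # PageRank plus a conductance sweep cut. This is the classic
    # (non-learned) pipeline, NOT the paper's LearnedNibble method.

    def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
        """Personalized PageRank by power iteration.
        adj: dict mapping node -> list of neighbors (undirected graph,
        no isolated nodes). alpha: restart probability at the seed."""
        pr = {v: 0.0 for v in adj}
        pr[seed] = 1.0
        for _ in range(iters):
            nxt = {v: 0.0 for v in adj}
            nxt[seed] = alpha  # restart mass returns to the seed
            for u, nbrs in adj.items():
                share = (1.0 - alpha) * pr[u] / len(nbrs)
                for v in nbrs:
                    nxt[v] += share
            pr = nxt
        return pr

    def conductance(adj, cluster):
        """phi(S) = cut(S, V \\ S) / min(vol(S), vol(V \\ S))."""
        vol_s = sum(len(adj[v]) for v in cluster)
        vol_rest = sum(len(nbrs) for nbrs in adj.values()) - vol_s
        cut = sum(1 for v in cluster for u in adj[v] if u not in cluster)
        denom = min(vol_s, vol_rest)
        return cut / denom if denom > 0 else 1.0

    def sweep_cut(adj, seed):
        """Order nodes by degree-normalized PPR score and return the
        prefix (a candidate cluster) with minimum conductance."""
        pr = personalized_pagerank(adj, seed)
        order = sorted(adj, key=lambda v: pr[v] / len(adj[v]), reverse=True)
        best, best_phi, prefix = set(), float("inf"), set()
        for v in order[:-1]:  # the full node set is never a useful cluster
            prefix.add(v)
            phi = conductance(adj, prefix)
            if phi < best_phi:
                best, best_phi = set(prefix), phi
        return best, best_phi

    # Toy example: two triangles joined by one bridge edge.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    cluster, phi = sweep_cut(adj, seed=0)
    print(cluster, round(phi, 3))  # {0, 1, 2} 0.143

Seeding at node 0 recovers the left triangle with conductance 1/7. Per the abstract, LearnedNibble's departure from this pipeline is to make the sweep step differentiable (the soft-mean-sweep operator) so the whole procedure can be trained in a self-supervised manner, and to scale the diffusion step with approximate graph propagation.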