Interactive Contrastive Learning for Self-Supervised Entity Alignment

Cited by: 16
Authors
Zeng, Kaisheng [1]
Dong, Zhenhao [2]
Hou, Lei [3]
Cao, Yixin [4]
Hu, Minghao [5]
Yu, Jifan [1]
Lv, Xin [1]
Cao, Lei [1]
Wang, Xin [1]
Liu, Haozhuang [1]
Huang, Yi [6]
Feng, Junlan [6]
Wan, Jing [2]
Li, Juanzi [7]
Feng, Ling [7]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Beijing Univ Chem Technol, Beijing, Peoples R China
[3] Tsinghua Univ, BNRist, Dept Comp Sci & Technol, Beijing, Peoples R China
[4] Singapore Management Univ, Singapore, Singapore
[5] Informat Res Ctr Mil Sci, Beijing, Peoples R China
[6] China Mobile Res Inst, Beijing, Peoples R China
[7] Tsinghua Univ, BNRist, Dept Comp Sci & Technol, Beijing, Peoples R China
Keywords
Knowledge Graph; Entity Alignment; Self-Supervised Learning; Contrastive Learning
DOI
10.1145/3511808.3557364
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline classification code
0812
Abstract
Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without the use of pre-aligned entity pairs. The current state-of-the-art (SOTA) self-supervised EA approach draws inspiration from contrastive learning, originally designed in computer vision based on instance discrimination and contrastive loss, and suffers from two shortcomings. First, it places unidirectional emphasis on pushing sampled negative entities far away rather than on pulling positively aligned pairs close, as is done in well-established supervised EA. Second, it advocates a minimum information requirement for self-supervised EA, whereas we argue that the side information readily available in the KGs themselves (e.g., entity names, relation names, entity descriptions) should preferably be exploited to the maximum extent for the self-supervised EA task. In this work, we propose an interactive contrastive learning model for self-supervised EA. It conducts bidirectional contrastive learning by building pseudo-aligned entity pairs as pivots to achieve direct cross-KG information interaction. It further exploits the integration of entity textual and structural information and elaborately designs encoders for better utilization in the self-supervised setting. Experimental results show that our approach outperforms the previous best self-supervised method by a large margin (over 9% absolute improvement in Hits@1 on average) and performs on par with previous SOTA supervised counterparts, demonstrating the effectiveness of interactive contrastive learning for self-supervised EA. The code and data are available at https://github.com/THU-KEG/ICLEA.
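
To make the bidirectional contrastive objective described in the abstract concrete, the following minimal PyTorch sketch (not the authors' implementation; the tensor names, the temperature value, and the use of in-batch negatives are illustrative assumptions) shows a symmetric InfoNCE-style loss in which each pseudo-aligned entity pair acts as a pivot: the KG1-to-KG2 term pulls each entity toward its pivot in the other KG while pushing sampled negatives away, and the KG2-to-KG1 term does the same in the reverse direction. See https://github.com/THU-KEG/ICLEA for the actual model.

# Minimal sketch of a bidirectional (symmetric) contrastive loss over
# pseudo-aligned entity pairs. Illustrative only; not the ICLEA code.
import torch
import torch.nn.functional as F

def bidirectional_contrastive_loss(z1: torch.Tensor,
                                   z2: torch.Tensor,
                                   temperature: float = 0.08) -> torch.Tensor:
    # z1[i] and z2[i] are embeddings of the i-th pseudo-aligned entity pair,
    # one from each KG; all other in-batch entities serve as negatives.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature  # cross-KG cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    loss_12 = F.cross_entropy(logits, targets)      # KG1 -> KG2 direction
    loss_21 = F.cross_entropy(logits.t(), targets)  # KG2 -> KG1 direction
    return 0.5 * (loss_12 + loss_21)

if __name__ == "__main__":
    # Toy usage: 4 pseudo-aligned pairs with 16-dimensional embeddings.
    z1, z2 = torch.randn(4, 16), torch.randn(4, 16)
    print(bidirectional_contrastive_loss(z1, z2))
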
Pages: 2465-2475
Page count: 11