Fusing structural information with knowledge enhanced text representation for knowledge graph completion

Cited by: 0
Authors
Kang Tang
Shasha Li
Jintao Tang
Dong Li
Pancheng Wang
Ting Wang
Affiliations
[1] National University of Defense Technology,College of Computer Science and Technology
Keywords
Neural networks; Knowledge graphs; Knowledge graph completion
DOI
Not available
Abstract
Although knowledge graphs store large numbers of facts in the form of triples, they remain limited by incompleteness. Knowledge Graph Completion (KGC), the task of inferring missing entities or relations from observed facts, has therefore long been a fundamental problem for knowledge-driven downstream applications. Prevailing KG embedding methods for KGC, such as TransE, rely solely on the structural information of existing facts and thus generalize poorly: they cannot be applied to unseen entities. Recently, a line of work has employed pre-trained encoders to learn textual representations of triples, i.e., textual-encoding methods. While these methods generalize well to unseen entities, they still underperform the KG embedding approaches above. In this paper, we devise a novel textual-encoding learning framework for KGC. To enrich the textual prior knowledge available for prediction, it features three hierarchical maskings that exploit distant contexts of the input text. In addition, to resolve the predictive ambiguity caused by improper relational modeling, a relation-aware structure learning scheme is applied on top of the textual embeddings. Extensive experimental results on several popular datasets demonstrate the effectiveness of our approach, even against recent state-of-the-art methods for this task.
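For context on the KG embedding baseline the abstract contrasts with: TransE scores a triple (h, r, t) by treating the relation as a translation in embedding space, so that h + r ≈ t for true facts. The sketch below illustrates that scoring idea only; the toy entities, relation, and 4-dimensional vectors are invented for illustration and are not from the paper.

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # TransE models a fact (h, r, t) as h + r ≈ t;
    # a lower distance means a more plausible triple.
    return float(np.linalg.norm(h + r - t, ord=1))

# Toy 4-dimensional embeddings (illustrative only).
emb = {
    "Paris":  np.array([0.9, 0.1, 0.0, 0.2]),
    "France": np.array([1.0, 0.0, 0.1, 0.3]),
    "Tokyo":  np.array([0.0, 0.8, 0.9, 0.1]),
}
capital_of = np.array([0.1, -0.1, 0.1, 0.1])

true_score = transe_score(emb["Paris"], capital_of, emb["France"])
false_score = transe_score(emb["Tokyo"], capital_of, emb["France"])
# The true triple scores (i.e., is closer) lower than the corrupted one.
```

Because the entity vectors must be learned from observed triples, such a model has no embedding for an entity never seen at training time — the generalization gap that motivates the textual-encoding methods discussed above.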
Pages: 1316 - 1333
Page count: 17
Related papers
50 in total
  • [1] Fusing structural information with knowledge enhanced text representation for knowledge graph completion
    Tang, Kang
    Li, Shasha
    Tang, Jintao
    Li, Dong
    Wang, Pancheng
    Wang, Ting
    DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 (03) : 1316 - 1333
  • [2] Convolutional Network Embedding of Text-Enhanced Representation for Knowledge Graph Completion
    Zhao, Feng
    Xu, Tao
    Jin, Langjunqing
    Jin, Hai
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (23) : 16758 - 16769
  • [3] Knowledge Graph Completion with Triple Structure and Text Representation
    Liu, Shuang
    Qin, YuFeng
    Xu, Man
    Kolmanič, Simon
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2023, 16 (01)
  • [4] Knowledge Graph Completion with Triple Structure and Text Representation
    Liu, Shuang
    Qin, YuFeng
    Xu, Man
    Kolmanic, Simon
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2023, 16 (01)
  • [5] A Knowledge Graph Completion Method Based on Fusing Association Information
    Wang, Yuhao
    Zhao, Erping
    Wang, Wei
    IEEE ACCESS, 2022, 10 : 50500 - 50507
  • [6] Text-Graph Enhanced Knowledge Graph Representation Learning
    Hu, Linmei
    Zhang, Mengmei
    Li, Shaohua
    Shi, Jinghan
    Shi, Chuan
    Yang, Cheng
    Liu, Zhiyuan
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2021, 4
  • [7] Fusing graph structural information with pre-trained generative model for knowledge graph-to-text generation
    Shi, Xiayang
    Xia, Zhenlin
    Li, Yinlin
    Wang, Xuhui
    Niu, Yufeng
    KNOWLEDGE AND INFORMATION SYSTEMS, 2025, 67 (03) : 2619 - 2640
  • [8] Knowledge Graph Completion Method of Combining Structural Information with Semantic Information
    Hu, Binhao
    Zhang, Jianpeng
    Chen, Hongchang
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (06) : 1412 - 1420
  • [9] Knowledge Graph Completion Method of Combining Structural Information with Semantic Information
    Hu, Binhao
    Zhang, Jianpeng
    Chen, Hongchang
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (06) : 1412 - 1420
  • [10] VEM2L: an easy but effective framework for fusing text and structure knowledge on sparse knowledge graph completion
    He, Tao
    Liu, Ming
    Cao, Yixin
    Qu, Meng
    Zheng, Zihao
    Qin, Bing
    DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 (02) : 343 - 371