Knowledge Graph Embedding by Double Limit Scoring Loss

Cited by: 16
Authors
Zhou, Xiaofei [1 ,2 ]
Niu, Lingfeng [3 ]
Zhu, Qiannan [1 ,2 ]
Zhu, Xingquan [4 ]
Liu, Ping [1 ,2 ]
Tan, Jianlong [1 ,2 ]
Guo, Li [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China
[4] Florida Atlantic Univ, Dept Comp & Elect Engn & Comp Sci, Boca Raton, FL 33431 USA
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Semantics; Optimization; Computational modeling; Task analysis; Tensors; Sparse matrices; Predictive models; Knowledge graph; embedding; representation learning; knowledge graph completion; loss function;
DOI
10.1109/TKDE.2021.3060755
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge graph embedding is an effective way to represent knowledge graphs, and it greatly enhances performance on knowledge graph completion tasks, e.g., entity or relation prediction. For knowledge graph embedding models, designing a powerful loss framework is crucial for discriminating between correct and incorrect triplets. Margin-based ranking loss is a commonly used negative-sampling framework that enforces a suitable margin between the scores of positive and negative triplets. However, this loss cannot ensure ideally low scores for positive triplets and high scores for negative triplets, which is not beneficial for knowledge completion tasks. In this paper, we present a double limit scoring loss that separately sets an upper bound for correct triplets and a lower bound for incorrect triplets, providing more effective and flexible optimization for knowledge graph embedding. Upon the presented loss framework, we develop several knowledge graph embedding models, including TransE-SS, TransH-SS, TransD-SS, ProjE-SS and ComplEx-SS. Experimental results on link prediction and triplet classification show that our proposed models achieve significant improvements over state-of-the-art baselines.
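Reading the abstract, the double limit scoring loss penalizes positive triplets only when their score rises above an upper bound and negative triplets only when their score falls below a lower bound, instead of enforcing a single relative margin. A minimal sketch of this idea under a TransE-style convention (lower score = more plausible); the bound values, the `neg_weight` parameter, and the function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def double_limit_loss(pos_scores, neg_scores, upper=1.0, lower=3.0, neg_weight=1.0):
    """Sketch of a double limit scoring loss, as described in the abstract:
    positive-triplet scores are pushed below an upper bound and
    negative-triplet scores above a lower bound, independently.
    `upper`, `lower`, and `neg_weight` are illustrative hyperparameters."""
    # Penalize positives whose score exceeds the upper bound.
    pos_term = np.maximum(0.0, pos_scores - upper).sum()
    # Penalize negatives whose score falls below the lower bound.
    neg_term = np.maximum(0.0, lower - neg_scores).sum()
    return pos_term + neg_weight * neg_term

# Positives under the upper bound and negatives above the lower bound
# incur zero loss; only bound violations contribute.
pos = np.array([0.5, 0.8])
neg = np.array([3.5, 4.0])
# double_limit_loss(pos, neg) -> 0.0
```

Unlike a margin-based ranking loss, which only constrains the *difference* between a positive and a negative score, both bounds here act on absolute score values, so positives and negatives are each driven toward their own target region.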
Pages: 5825 - 5839
Page count: 15
Related Papers
50 records in total
  • [31] DCNS: A Double-Cache Negative Sampling Method for Improving Knowledge Graph Embedding
    Zheng, Hao
    Guan, Donghai
    Xu, Shuai
    Yuan, Weiwei
    WEB AND BIG DATA, PT IV, APWEB-WAIM 2023, 2024, 14334 : 438 - 450
  • [32] A type-augmented knowledge graph embedding framework for knowledge graph completion
    He, Peng
    Zhou, Gang
    Yao, Yao
    Wang, Zhe
    Yang, Hao
    SCIENTIFIC REPORTS, 2023, 13 (01):
  • [34] Heterogeneous Graph Neural Network with Hypernetworks for Knowledge Graph Embedding
    Liu, Xiyang
    Zhu, Tong
    Tan, Huobin
    Zhang, Richong
    SEMANTIC WEB - ISWC 2022, 2022, 13489 : 284 - 302
  • [35] Learning graph attention-aware knowledge graph embedding
    Li, Chen
    Peng, Xutan
    Niu, Yuhang
    Zhang, Shanghang
    Peng, Hao
    Zhou, Chuan
    Li, Jianxin
    NEUROCOMPUTING, 2021, 461 : 516 - 529
  • [37] Knowledge Graph Embedding via Graph Attenuated Attention Networks
    Wang, Rui
    Li, Bicheng
    Hu, Shengwei
    Du, Wenqian
    Zhang, Min
    IEEE ACCESS, 2020, 8 : 5212 - 5224
  • [38] DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention Network
    Wu, Junkang
    Shi, Wentao
    Cao, Xuezhi
    Chen, Jiawei
    Lei, Wenqiang
    Zhang, Fuzheng
    Wu, Wei
    He, Xiangnan
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 2140 - 2149
  • [39] The concept information of graph granule with application to knowledge graph embedding
    Niu, Jiaojiao
    Chen, Degang
    Ma, Yinglong
    Li, Jinhai
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, : 5595 - 5606
  • [40] Knowledge graph question answering based on TE-BiLTM and knowledge graph embedding
    Li, Jianbin
    Qu, Ketong
    Li, Kunchang
    Chen, Zhiqiang
    Fang, Suwan
    Yan, Jingchen
    2021 5TH INTERNATIONAL CONFERENCE ON INNOVATION IN ARTIFICIAL INTELLIGENCE (ICIAI 2021), 2021, : 164 - 169