Caps-OWKG: a capsule network model for open-world knowledge graph

Cited by: 18
Authors
Wang, Yuhan [1 ]
Xiao, Weidong [1 ]
Tan, Zhen [1 ]
Zhao, Xiang [1 ]
Affiliations
[1] Natl Univ Def Technol, Sci & Technol Informat Syst Engn Lab, Changsha 410073, Hunan, Peoples R China
Keywords
Capsule network; Open-world knowledge graph; Link prediction; Knowledge graph completion;
DOI
10.1007/s13042-020-01259-4
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge graphs are typical multi-relational structures consisting of many entities and relations. Nonetheless, existing knowledge graphs are still sparse and far from complete. To refine knowledge graphs, representation learning is used to embed entities and relations into low-dimensional spaces. Many existing knowledge graph embedding models focus on learning latent features under the closed-world assumption, but overlook the fact that each knowledge graph keeps changing. In this paper, we propose a knowledge graph representation learning model, called Caps-OWKG, which leverages a capsule network to capture the features of both known and unknown triplets in open-world knowledge graphs. It combines descriptive text and the knowledge graph to obtain descriptive and structural embeddings simultaneously. Both embeddings are then used to estimate the probability that a triplet is authentic. We evaluate the performance of Caps-OWKG on the link prediction task with two common datasets, FB15k-237-OWE and DBPedia50k. The experimental results surpass other baselines and achieve state-of-the-art performance.
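As a rough illustration only (not the authors' implementation; every name, dimension, and the fusion-by-concatenation choice here are hypothetical), the scoring scheme the abstract describes — fusing a descriptive and a structural embedding for each element of a triplet, then passing the fused triplet through a capsule-style nonlinearity whose output norm acts as a plausibility probability — could be sketched as:

```python
import numpy as np

def squash(v, eps=1e-8):
    # Capsule "squash" nonlinearity: maps the vector norm into (0, 1)
    # while preserving direction.
    n2 = np.dot(v, v)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def fuse(desc_emb, struct_emb):
    # Hypothetical fusion: concatenate the text-based (descriptive)
    # and graph-based (structural) embeddings of an entity/relation.
    return np.concatenate([desc_emb, struct_emb])

def triplet_score(h, r, t, W):
    # Stack the fused head/relation/tail vectors, project with a learned
    # matrix W, squash, and read the output norm as a plausibility score.
    x = np.concatenate([h, r, t])
    cap = squash(W @ x)
    return float(np.linalg.norm(cap))  # in (0, 1): higher = more plausible

rng = np.random.default_rng(0)
d = 4  # toy embedding size
h = fuse(rng.normal(size=d), rng.normal(size=d))
r = fuse(rng.normal(size=d), rng.normal(size=d))
t = fuse(rng.normal(size=d), rng.normal(size=d))
W = rng.normal(size=(8, 3 * 2 * d))  # toy capsule projection
print(0.0 < triplet_score(h, r, t, W) < 1.0)
```

In the paper itself W would be trained and the capsule layer would use dynamic routing over multiple capsules; this sketch only shows why a squashed output norm is a natural stand-in for a triplet-authenticity probability.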
Pages: 1627-1637
Page count: 11
Related papers
50 items in total
  • [41] Open-world electrocardiogram classification via domain knowledge-driven contrastive learning
    Zhou, Shuang
    Huang, Xiao
    Liu, Ninghao
    Zhang, Wen
    Zhang, Yuan-Ting
    Chung, Fu-Lai
    [J]. NEURAL NETWORKS, 2024, 179
  • [42] TEGTOK: Augmenting Text Generation via Task-specific and Open-world Knowledge
    Tan, Chao-Hong
    Gu, Jia-Chen
    Tao, Chongyang
    Ling, Zhen-Hua
    Xu, Can
    Hu, Huang
    Geng, Xiubo
    Jiang, Daxin
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 1597 - 1609
  • [43] Distilled Reverse Attention Network for Open-world Compositional Zero-Shot Learning
    Li, Yun
    Liu, Zhe
    Jha, Saurav
    Yao, Lina
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1782 - 1791
  • [44] Shooting a metastable object: targeting as trigger for the actor-network in the open-world videogames
    Ahn, Sungyong
    [J]. COMMUNICATION AND CRITICAL-CULTURAL STUDIES, 2018, 15 (03) : 213 - 231
  • [45] KHGCN: Knowledge-Enhanced Recommendation with Hierarchical Graph Capsule Network
    Chen, Fukun
    Yin, Guisheng
    Dong, Yuxin
    Li, Gesu
    Zhang, Weiqi
    [J]. ENTROPY, 2023, 25 (04)
  • [46] CapsRec: A Capsule Graph Neural Network Model for Social Recommendation
    Liu, Peizhen
    Yu, Wen
    [J]. 2021 IEEE 33RD INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2021), 2021, : 359 - 363
  • [47] Towards Open-World Recommendation: An Inductive Model-based Collaborative Filtering Approach
    Wu, Qitian
    Zhang, Hengrui
    Gao, Xiaofeng
    Yan, Junchi
    Zha, Hongyuan
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [48] Knowledge-Enhanced Personalized Review Generation with Capsule Graph Neural Network
    Li, Junyi
    Li, Siqing
    Zhao, Wayne Xin
    He, Gaole
    Wei, Zhicheng
    Yuan, Nicholas Jing
    Wen, Ji-Rong
    [J]. CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, : 735 - 744
  • [49] DuCape: Dual Quaternion and Capsule Network-Based Temporal Knowledge Graph Embedding
    Zhang, Sensen
    Liang, Xun
    Tang, Hui
    Zheng, Xiangping
    Zhang, Alex X.
    Ma, Yuefeng
    [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2023, 17 (07)
  • [50] Modeling knowledge proficiency using multi-hierarchical capsule graph neural network
    He, Zeyu
    Li, Wang
    Yan, Yonghong
    [J]. APPLIED INTELLIGENCE, 2022, 52 (07) : 7230 - 7247