Entity-Agnostic Representation Learning for Parameter-Efficient Knowledge Graph Embedding

Cited by: 0
Authors
Chen, Mingyang [1 ]
Zhang, Wen [2 ]
Yao, Zhen [2 ]
Zhu, Yushan [1 ]
Gao, Yang [4 ]
Pan, Jeff Z. [5 ]
Chen, Huajun [1 ,3 ,6 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[2] Zhejiang Univ, Sch Software Technol, Hangzhou, Peoples R China
[3] Donghai Lab, Shanghai, Peoples R China
[4] Huawei Technol Co Ltd, Shenzhen, Peoples R China
[5] Univ Edinburgh, Sch Informat, Edinburgh, Midlothian, Scotland
[6] Alibaba Zhejiang Univ Joint Inst Frontier Technol, Hangzhou, Peoples R China
Keywords
DOI
None available
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose an entity-agnostic representation learning method to address the inefficient parameter storage costs of knowledge graph embedding. Conventional knowledge graph embedding methods map the elements of a knowledge graph, including entities and relations, into continuous vector spaces by assigning each of them one or more specific embeddings (i.e., vector representations). Consequently, the number of embedding parameters grows linearly with the size of the knowledge graph. In our proposed model, Entity-Agnostic Representation Learning (EARL), we learn embeddings only for a small set of entities, which we call reserved entities. To obtain embeddings for the full set of entities, we encode their distinguishable information from their connected relations, their k-nearest reserved entities, and their multi-hop neighbors. We learn universal, entity-agnostic encoders that transform this distinguishable information into entity embeddings. As a result, EARL has a static parameter count that is independent of the number of entities and lower than that of conventional knowledge graph embedding methods. Experimental results show that EARL uses fewer parameters and performs better on link prediction tasks than baselines, reflecting its parameter efficiency.
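The parameter-saving idea described in the abstract can be illustrated with a minimal sketch. All sizes, names, and the mean-pooling-plus-linear encoder below are hypothetical stand-ins, not the paper's actual architecture: only a small reserved-entity table and the relation table are stored per-element, and a shared entity-agnostic encoder builds embeddings for every other entity from its connected relations and nearest reserved entities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a small reserved-entity table
# plus a relation table; all other entities get no stored embedding.
dim, n_reserved, n_relations = 16, 8, 4
reserved_emb = rng.normal(size=(n_reserved, dim))   # learned per reserved entity only
relation_emb = rng.normal(size=(n_relations, dim))  # relations keep their own embeddings

# A shared (entity-agnostic) encoder; a single linear map as a stand-in
# for whatever encoder the method actually uses.
W = rng.normal(size=(2 * dim, dim)) / np.sqrt(2 * dim)

def encode_entity(connected_relations, knn_reserved):
    """Embed an arbitrary entity from entity-agnostic signals:
    the relations it connects to and its k nearest reserved entities."""
    rel_feat = relation_emb[connected_relations].mean(axis=0)  # pool relation info
    res_feat = reserved_emb[knn_reserved].mean(axis=0)         # pool reserved-entity info
    return np.concatenate([rel_feat, res_feat]) @ W            # shared encoder

# Any entity, even one never seen before, gets an embedding on the fly.
e = encode_entity(connected_relations=[0, 2], knn_reserved=[1, 3, 5])
print(e.shape)  # (16,)
```

Under this scheme the learned parameters are only the reserved-entity table, the relation table, and the shared encoder weights, so the total does not grow with the number of entities.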
Pages: 4182 - 4190 (9 pages)
Related Papers
50 records in total
  • [1] EARL: Informative Knowledge-Grounded Conversation Generation with Entity-Agnostic Representation Learning
    Zhou, Hao
    Huang, Minlie
    Liu, Yong
    Chen, Wei
    Zhu, Xiaoyan
    [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 2383 - 2395
  • [2] From Wide to Deep: Dimension Lifting Network for Parameter-Efficient Knowledge Graph Embedding
    Cai, Borui
    Xiang, Yong
    Gao, Longxiang
    Wu, Di
    Zhang, He
    Jin, Jiong
    Luan, Tom
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 8341 - 8348
  • [3] AutoETER: Automated Entity Type Representation for Knowledge Graph Embedding
    Niu, Guanglin
    Li, Bo
    Zhang, Yongfei
    Pu, Shiliang
    Li, Jingyang
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1172 - 1181
  • [4] Knowledge Graph Embedding by Learning to Connect Entity with Relation
    Huang, Zichao
    Li, Bo
    Yin, Jian
    [J]. WEB AND BIG DATA (APWEB-WAIM 2018), PT I, 2018, 10987 : 400 - 414
  • [5] Entity and Entity Type Composition Representation Learning for Knowledge Graph Completion
    Ni, Runyu
    Shibata, Hiroki
    Takama, Yasufumi
    [J]. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2023, 27 (06) : 1151 - 1158
  • [6] Entity alignment with adaptive margin learning knowledge graph embedding
    Shen, Linshan
    He, Rongbo
    Huang, Shaobin
    [J]. DATA & KNOWLEDGE ENGINEERING, 2022, 139
  • [7] Joint semantic embedding with structural knowledge and entity description for knowledge representation learning
    Wei, Xiao
    Zhang, Yunong
    Wang, Hao
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35 (05) : 3883 - 3902
  • [8] Temporal Knowledge Graph Entity Alignment via Representation Learning
    Song, Xiuting
    Bai, Luyi
    Liu, Rongke
    Zhang, Han
    [J]. DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2022, PT II, 2022, : 391 - 406
  • [9] Parameter-Efficient Transfer Learning for NLP
    Houlsby, Neil
    Giurgiu, Andrei
    Jastrzebski, Stanislaw
    Morrone, Bruna
    de Laroussilhe, Quentin
    Gesmundo, Andrea
    Attariyan, Mona
    Gelly, Sylvain
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019