Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion

Cited by: 0
Authors
Mengyao Li
Bo Wang
Jing Jiang
Affiliations
[1] University of Technology Sydney, Australian AI Institute, School of Computer Science, FEIT
[2] Jilin University, School of Artificial Intelligence
Source
Neural Processing Letters | 2021, Vol. 53
Keywords
Knowledge base completion; Pre-trained transformer encoder; Siamese network
DOI: not available
Abstract
In this paper, we aim to leverage a Siamese textual encoder to tackle the knowledge base completion problem both efficiently and effectively. Traditional graph embedding-based methods learn embeddings directly from a knowledge base's structure, but they are inherently vulnerable to the graph's sparsity or incompleteness. In contrast, previous textual encoding-based methods capture such structured knowledge from a semantic perspective and employ deep neural textual encoders to model graph triples in semantic space, but they fail to balance contextual features against model efficiency. We therefore propose a Siamese textual encoder that operates on each graph triple of the knowledge base: the contextual features between a head/tail entity and a relation are captured to produce relation-aware entity embeddings, while the Siamese structure is adopted to avoid a combinatorial explosion during inference. In experiments, the proposed method reaches state-of-the-art or comparable performance on several link prediction datasets. Further analyses demonstrate that the proposed method is considerably more efficient than its baseline while achieving similar evaluation results.
Pages: 4143-4158 (15 pages)
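
The abstract describes a bi-encoder design: one shared pre-trained transformer encodes the head entity together with the relation in one branch and a candidate tail entity in the other, so entity embeddings can be pre-computed once and triples scored cheaply at inference time. The sketch below illustrates that general idea only; the paper's actual architecture, input format, and scoring function are not specified on this page, and the class name SiameseKBCEncoder, the BERT backbone, and the dot-product score are all illustrative assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SiameseKBCEncoder(nn.Module):
    """Hypothetical sketch of a shared-weight ("Siamese") textual encoder for KBC."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        # A single pre-trained transformer shared by both branches (the Siamese part).
        self.encoder = AutoModel.from_pretrained(model_name)

    def encode(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state[:, 0]  # [CLS] vector as the text embedding

    def score(self, hr_ids, hr_mask, t_ids, t_mask):
        # Branch 1: head-entity text together with relation text, so the encoder
        # can produce a relation-aware entity representation.
        hr = self.encode(hr_ids, hr_mask)
        # Branch 2: candidate tail-entity text, encoded independently. At inference
        # time these vectors can be pre-computed once per entity, avoiding the
        # combinatorial explosion of re-encoding every (head, relation, tail) pair.
        t = self.encode(t_ids, t_mask)
        return (hr * t).sum(dim=-1)  # dot-product plausibility score

# Usage on a made-up example triple:
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SiameseKBCEncoder()
hr = tok("Barack Obama [SEP] place of birth", return_tensors="pt")
t = tok("Honolulu", return_tensors="pt")
with torch.no_grad():
    print(model.score(hr.input_ids, hr.attention_mask, t.input_ids, t.attention_mask))

Because the two branches share weights, the tail-entity embeddings form a fixed index that every (head, relation) query is scored against, which is what keeps inference cost linear in the number of entities rather than in the number of candidate triples.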
Related Papers
50 results
  • [1] Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion
    Li, Mengyao
    Wang, Bo
    Jiang, Jing
    [J]. NEURAL PROCESSING LETTERS, 2021, 53 (06) : 4143 - 4158
  • [2] Commonsense Knowledge Base Completion with Relational Graph Attention Network and Pre-trained Language Model
    Ju, Jinghao
    Yang, Deqing
    Liu, Jingping
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4104 - 4108
  • [3] Knowledge-Infused Pre-trained Models for KG Completion
    Yu, Han
    Jiang, Rong
    Zhou, Bin
    Li, Aiping
    [J]. WEB INFORMATION SYSTEMS ENGINEERING, WISE 2020, PT I, 2020, 12342 : 273 - 285
  • [4] Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion
    Xu, Wenjie
    Liu, Ben
    Peng, Miao
    Jia, Xu
    Peng, Min
    [J]. arXiv, 2023.
  • [5] Pre-Trained Image Processing Transformer
    Chen, Hanting
    Wang, Yunhe
    Guo, Tianyu
    Xu, Chang
    Deng, Yiping
    Liu, Zhenhua
    Ma, Siwei
    Xu, Chunjing
    Xu, Chao
    Gao, Wen
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 12294 - 12305
  • [6] Adder Encoder for Pre-trained Language Model
    Ding, Jianbang
    Zhang, Suiyun
    Li, Linlin
    [J]. CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023, 2023, 14232 : 339 - 347
  • [7] Integrally Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection
    Liu, Feng
    Zhang, Xiaosong
    Peng, Zhiliang
    Guo, Zonghao
    Wan, Fang
    Ji, Xiangyang
    Ye, Qixiang
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 6802 - 6811
  • [8] SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models
    Wang, Liang
    Zhao, Wei
    Wei, Zhuoyu
    Liu, Jingming
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 4281 - 4294
  • [9] A Multi-layer Bidirectional Transformer Encoder for Pre-trained Word Embedding: A Survey of BERT
    Kaliyar, Rohit Kumar
    [J]. PROCEEDINGS OF THE CONFLUENCE 2020: 10TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING, DATA SCIENCE & ENGINEERING, 2020, : 336 - 340
  • [10] Knowledge Base Grounded Pre-trained Language Models via Distillation
    Sourty, Raphael
    Moreno, Jose G.
    Servant, Francois-Paul
    Tamine, Lynda
    [J]. 39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1617 - 1625