Commonsense Knowledge Base Completion with Relational Graph Attention Network and Pre-trained Language Model

Cited by: 11
Authors
Ju, Jinghao [1 ]
Yang, Deqing [1 ]
Liu, Jingping [2 ]
Affiliations
[1] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
[2] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai, Peoples R China
Keywords
commonsense knowledge base completion; relational graph attention; pre-trained language model;
DOI
10.1145/3511808.3557564
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline Classification Code
0812
Abstract
Although commonsense knowledge graphs (CKGs) have been applied successfully in many natural language processing tasks, they still suffer from incompleteness. Due to the scale and sparsity of CKGs, existing knowledge base completion models remain inadequate for them. In this paper, we propose a commonsense knowledge base completion (CKBC) model that learns the structural and contextual representations of CKG nodes and relations with a relational graph attention network and a pre-trained language model, respectively. Based on these two types of representations, the scoring decoder in our model makes more accurate predictions for a given triple. Our empirical studies on the representative CKG ConceptNet demonstrate our model's superiority over state-of-the-art CKBC models.
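As a rough illustration of the encoder-decoder design sketched in the abstract, the following minimal PyTorch example combines a structural representation from a relation-aware graph attention layer with a contextual node representation (standing in for pooled pre-trained language model outputs) and scores a triple with a simple bilinear decoder. All class names, the simplified attention form, the bilinear decoder, and every hyper-parameter are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalGraphAttentionLayer(nn.Module):
    """One relation-aware attention layer over (head, relation, tail) edges (illustrative)."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg_proj = nn.Linear(3 * dim, dim, bias=False)  # mixes head/relation/tail features
        self.attn = nn.Linear(dim, 1, bias=False)            # scores each edge message

    def forward(self, node_emb, rel_emb, edges):
        # edges: LongTensor of shape (E, 3) holding (head, relation, tail) indices
        h, r, t = edges[:, 0], edges[:, 1], edges[:, 2]
        msg = self.msg_proj(torch.cat([node_emb[h], rel_emb[r], node_emb[t]], dim=-1))
        score = F.leaky_relu(self.attn(msg)).squeeze(-1)      # one attention logit per edge
        weight = torch.exp(score - score.max())               # shifted exponentials of the logits
        out = torch.zeros_like(node_emb)
        out.index_add_(0, t, weight.unsqueeze(-1) * msg)      # aggregate weighted messages per tail node
        norm = torch.zeros(node_emb.size(0), device=node_emb.device).index_add_(0, t, weight)
        return out / norm.clamp(min=1e-6).unsqueeze(-1)       # normalizing completes a per-node softmax


class CKBCScorer(nn.Module):
    """Fuses structural (graph) and contextual (PLM) embeddings, then scores triples (illustrative)."""

    def __init__(self, num_nodes: int, num_rels: int, dim: int):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)
        self.rel_emb = nn.Embedding(num_rels, dim)
        self.gat = RelationalGraphAttentionLayer(dim)
        self.fuse = nn.Linear(2 * dim, dim)                   # combines structural + contextual views
        self.rel_mat = nn.Parameter(torch.randn(num_rels, dim, dim) * 0.01)

    def forward(self, edges, text_emb, triples):
        # text_emb: (num_nodes, dim) contextual node vectors, e.g. pooled PLM outputs
        struct = self.gat(self.node_emb.weight, self.rel_emb.weight, edges)
        nodes = self.fuse(torch.cat([struct, text_emb], dim=-1))
        h, r, t = triples[:, 0], triples[:, 1], triples[:, 2]
        # bilinear decoder: score(h, r, t) = h^T W_r t
        return torch.einsum('bd,bde,be->b', nodes[h], self.rel_mat[r], nodes[t])


if __name__ == "__main__":
    torch.manual_seed(0)
    num_nodes, num_rels, dim = 6, 3, 16
    edges = torch.tensor([[0, 1, 2], [2, 0, 3], [3, 2, 1]])   # toy CKG structure
    triples = torch.tensor([[0, 1, 2], [4, 2, 5]])            # candidate triples to score
    text_emb = torch.randn(num_nodes, dim)                    # stand-in for PLM sentence embeddings
    model = CKBCScorer(num_nodes, num_rels, dim)
    print(model(edges, text_emb, triples))                    # one plausibility score per triple

In a real setting, text_emb would come from encoding each node's textual description with a pre-trained language model, and the decoder and attention would follow the paper's actual design.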
Pages: 4104-4108
Page count: 5
Related Papers (50 total)
  • [1] Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion
    Xu, Wenjie
    Liu, Ben
    Peng, Miao
    Jia, Xu
    Peng, Min
    [J]. arXiv, 2023,
  • [2] SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models
    Wang, Liang
    Zhao, Wei
    Wei, Zhuoyu
    Liu, Jingming
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1 (LONG PAPERS), 2022, : 4281 - 4294
  • [3] MEM-KGC: Masked Entity Model for Knowledge Graph Completion With Pre-Trained Language Model
    Choi, Bonggeun
    Jang, Daesik
    Ko, Youngjoong
    [J]. IEEE ACCESS, 2021, 9 : 132025 - 132032
  • [4] Knowledge Graph Completion Using a Pre-Trained Language Model Based on Categorical Information and Multi-Layer Residual Attention
    Rao, Qiang
    Wang, Tiejun
    Guo, Xiaoran
    Wang, Kaijie
    Yan, Yue
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (11):
  • [5] Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey
    Bhargava, Prajjwal
    Ng, Vincent
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 12317 - 12325
  • [6] Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion
    Li, Mengyao
    Wang, Bo
    Jiang, Jing
    [J]. NEURAL PROCESSING LETTERS, 2021, 53 (06) : 4143 - 4158
  • [7] Evaluating Commonsense in Pre-Trained Language Models
    Zhou, Xuhui
    Zhang, Yue
    Cui, Leyang
    Huang, Dandan
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9733 - 9740
  • [8] NMT Enhancement based on Knowledge Graph Mining with Pre-trained Language Model
    Yang, Hao
    Qin, Ying
    Deng, Yao
    Wang, Minghan
    [J]. 2020 22ND INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): DIGITAL SECURITY GLOBAL AGENDA FOR SAFE SOCIETY!, 2020, : 185 - 189
  • [9] Relational Graph Neural Network with Hierarchical Attention for Knowledge Graph Completion
    Zhang, Zhao
    Zhuang, Fuzhen
    Zhu, Hengshu
    Shi, Zhiping
    Xiong, Hui
    He, Qing
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9612 - 9619