Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models

Cited by: 0
Authors
Zhang, Zhiyuan [1 ]
Liu, Xiaoqian [1 ,2 ]
Zhang, Yi [1 ]
Su, Qi [1 ,2 ]
Sun, Xu [1 ]
He, Bin [3 ]
Affiliations
[1] Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
[2] Peking Univ, Sch Foreign Languages, Beijing, Peoples R China
[3] Huawei Noah's Ark Lab, Shenzhen, Peoples R China
Keywords
DOI
Not available
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conventional knowledge graph embedding (KGE) often suffers from limited knowledge representation, leading to performance degradation, especially in low-resource settings. To remedy this, we propose to enrich knowledge representation by leveraging the world knowledge captured in pretrained language models. Specifically, we present a universal training framework named Pretrain-KGE consisting of three phases: a semantic-based fine-tuning phase, a knowledge extracting phase, and a KGE training phase. Extensive experiments show that the proposed Pretrain-KGE improves results over conventional KGE models, especially on the low-resource problem.
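As a rough illustration of the three-phase pipeline described in the abstract, the Python sketch below assumes BERT (via the Hugging Face transformers library) as the pretrained encoder and TransE as the downstream KGE model; the model choices, pooling scheme, and toy data are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    # Phases 1-2 (sketch): encode entity or relation descriptions with the
    # (fine-tuned) language model and mean-pool into fixed-size vectors.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, 768)

class TransE(nn.Module):
    # Phase 3 (sketch): a conventional KGE model whose embedding tables are
    # initialized from the LM-derived vectors instead of random values.
    def __init__(self, ent_vecs, rel_vecs):
        super().__init__()
        self.ent = nn.Embedding.from_pretrained(ent_vecs, freeze=False)
        self.rel = nn.Embedding.from_pretrained(rel_vecs, freeze=False)

    def score(self, h, r, t):
        # TransE plausibility: smaller ||h + r - t|| means a more likely triple.
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)

# Toy knowledge graph: textual descriptions stand in for entities/relations.
entities = ["Beijing, the capital city of China", "China, a country in East Asia"]
relations = ["is the capital of"]
with torch.no_grad():
    ent_vecs, rel_vecs = encode(entities), encode(relations)

kge = TransE(ent_vecs, rel_vecs)
h, r, t = torch.tensor([0]), torch.tensor([0]), torch.tensor([1])
print(kge.score(h, r, t))  # would be trained further with a ranking loss

In this reading, the fine-tuned language model supplies entity and relation vectors that replace random initialization of the KGE model, which is then trained further with a standard KGE objective.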
Pages: 259-266
Number of pages: 8
Related Papers
50 items in total
  • [21] Efficient Equivariant Transfer Learning from Pretrained Models
    Basu, Sourya
    Katdare, Pulkit
    Sattigeri, Prasanna
    Chenthamarakshan, Vijil
    Driggs-Campbell, Katherine
    Das, Payel
    Varshney, Lav R.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [22] Extracting Latent Steering Vectors from Pretrained Language Models
    Subramani, Nishant
    Suresh, Nivedita
    Peters, Matthew E.
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 566 - 581
  • [23] DLAMA: A Framework for Curating Culturally Diverse Facts for Probing the Knowledge of Pretrained Language Models
    Keleg, Amr
    Magdy, Walid
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 6245 - 6266
  • [24] Knowledge Transfer from Large-Scale Pretrained Language Models to End-to-End Speech Recognizers
    Kubo, Yotaro
    Karita, Shigeki
    Bacchiani, Michiel
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8512 - 8516
  • [25] Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models
    Li, Junyi
    Tang, Tianyi
    Zhao, Wayne Xin
    Wei, Zhicheng
    Yuan, Nicholas Jing
    Wen, Ji-Rong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 1558 - 1568
  • [26] Knowledge Graph Completion for Power Grid Main Equipment Using Pretrained Language Models
    Lin, Chenxiang
    Zheng, Zhou
    Cai, Shitao
    Fu, Li
    Xie, Wei
    Ma, Teng
    Zhang, Zhihong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT IV, 2023, 14089 : 828 - 838
  • [27] Knowledge Representation in Connectionist Models of Human Learning
    GRAHAM, DJ
    BULLETIN OF THE PSYCHONOMIC SOCIETY, 1990, 28 (06) : 486 - 486
  • [28] Unsupervised and few-shot parsing from pretrained language models
    Zeng, Zhiyuan
    Xiong, Deyi
    ARTIFICIAL INTELLIGENCE, 2022, 305
  • [29] Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing
    Liu, Qian
    Yang, Dejian
    Zhang, Jiahui
    Guo, Jiaqi
    Zhou, Bin
    Lou, Jian-Guang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 1174 - 1189
  • [30] TRANSLICO: A Contrastive Learning Framework to Address the Script Barrier in Multilingual Pretrained Language Models
    Liu, Yihong
    Ma, Chunlan
    Ye, Haotian
    Schuetze, Hinrich
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 2476 - 2499