Triplet extraction leveraging sentence transformers and dependency parsing

Times Cited: 0
Authors
Ottersen, Stuart Gallina [1]
Pinheiro, Flavio [1]
Bacao, Fernando [1]
Affiliations
[1] NOVA IMS, Campus Campolide, 1070-312 Lisbon, Portugal
Keywords
Triplet extraction; NLP; Natural language processing; Knowledge Graph; Entity
DOI
10.1016/j.array.2023.100334
CLC Number
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Knowledge graphs structure information as (entity, relation, entity) triples. One way to construct them is to extract triples from unstructured text, with the aim of maximising the number of useful triples while minimising those that carry no or useless information. Most previous work in this field relies on supervised learning, which can be expensive both computationally and in its need for labelled data, while existing unsupervised methods often produce an excessive number of low-value triples, rely on empirical rules for extraction, or struggle with the order of the entities relative to the relation. To address these issues, this paper proposes a new model, Unsupervised Dependency parsing Aided Semantic Triple Extraction (UDASTE), which leverages sentence structure and allows restrictive triple relation types to be defined, generating high-quality triples while removing the need to map extracted triples to relation schemas. This is achieved by leveraging pre-trained language models. UDASTE is compared with two baseline models on three datasets and outperforms the baselines on all three. Its limitations and possible further work are discussed, as well as the implementation of the model in a computational intelligence context.
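The pipeline the abstract describes (propose candidate triples from a dependency parse, then keep only those whose relation is semantically close to a restricted set of relation types according to sentence-transformer similarity) can be illustrated roughly as follows. This is a minimal sketch, not the authors' UDASTE implementation: the spaCy and sentence-transformers model names, the allowed relation list, and the similarity threshold are assumptions for illustration only.

```python
# Minimal sketch of dependency-parse candidate extraction plus embedding-based
# relation filtering. Not the UDASTE implementation; models, relation schema,
# and threshold below are illustrative assumptions.
import spacy
from sentence_transformers import SentenceTransformer, util

nlp = spacy.load("en_core_web_sm")                 # dependency parser (assumed model)
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # sentence embeddings (assumed model)

# Hypothetical restrictive relation types; a real schema would be user-defined.
ALLOWED_RELATIONS = ["work for", "locate in", "found"]
rel_emb = encoder.encode(ALLOWED_RELATIONS, convert_to_tensor=True)

def candidate_triples(text):
    """Propose (subject, relation, object) candidates from subject/object dependency arcs."""
    doc = nlp(text)
    for tok in doc:
        if tok.dep_ in ("nsubj", "nsubjpass"):        # token is the subject of a verb
            verb = tok.head
            objs = [c for c in verb.children if c.dep_ in ("dobj", "obj", "attr")]
            # also look inside prepositional phrases attached to the verb
            for prep in (c for c in verb.children if c.dep_ == "prep"):
                objs += [c for c in prep.children if c.dep_ == "pobj"]
            for obj in objs:
                # head tokens only; full noun phrases would need noun-chunk expansion
                yield (tok.text, verb.lemma_, obj.text)

def extract(text, threshold=0.4):
    """Keep candidates whose relation is close enough to an allowed relation type."""
    triples = []
    for subj, rel, obj in candidate_triples(text):
        sims = util.cos_sim(encoder.encode(rel, convert_to_tensor=True), rel_emb)
        if sims.max().item() >= threshold:
            triples.append((subj, rel, obj))
    return triples

print(extract("Marie Curie worked for the University of Paris."))
```

In a design of this kind, the restrictive relation list is what keeps the output from filling up with low-value triples, while the embedding comparison avoids hand-written rules for mapping extracted phrases onto a relation schema.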
Pages: 9
Related Papers
50 records in total
  • [21] Attention-Based Belief or Disbelief Feature Extraction for Dependency Parsing
    Peng, Haoyuan
    Liu, Lu
    Zhou, Yi
    Zhou, Junying
    Zheng, Xiaoqing
    [J]. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 5382 - 5389
  • [22] Entity-Relation Extraction as Full Shallow Semantic Dependency Parsing
    Jiang, Shu
    Li, Zuchao
    Zhao, Hai
    Ding, Weiping
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 1088 - 1099
  • [23] Biomedical Event Extraction Using Convolutional Neural Networks and Dependency Parsing
    Bjorne, Jari
    Salakoski, Tapio
    [J]. SIGBIOMED WORKSHOP ON BIOMEDICAL NATURAL LANGUAGE PROCESSING (BIONLP 2018), 2018, : 98 - 108
  • [24] Task-Oriented Evaluation of Dependency Parsing with Open Information Extraction
    Gamallo, Pablo
    Garcia, Marcos
    [J]. COMPUTATIONAL PROCESSING OF THE PORTUGUESE LANGUAGE, PROPOR 2018, 2018, 11122 : 77 - 82
  • [25] Research on Methods of Microblogging Sentiment Feature Extraction Based on Dependency Parsing
    Li Yonggan
    Zhou Xueguang
    Guo Wei
    Zhang Huanguo
    [J]. PROCEEDINGS OF THE 2015 INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS RESEARCH AND MECHATRONICS ENGINEERING, 2015, 121 : 581 - 589
  • [26] Spatial Dependency Parsing for Semi-Structured Document Information Extraction
    Hwang, Wonseok
    Yim, Jinyeong
    Park, Seunghyun
    Yang, Sohee
    Seo, Minjoon
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 330 - 343
  • [27] Leveraging Syntactic Dependency and Lexical Similarity for Neural Relation Extraction
    Wang, Yashen
    [J]. WEB AND BIG DATA, APWEB-WAIM 2021, PT I, 2021, 12858 : 285 - 299
  • [28] Generation as dependency parsing
    Koller, A
    Striegnitz, K
    [J]. 40TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, PROCEEDINGS OF THE CONFERENCE, 2002, : 17 - 24
  • [29] Inductive dependency parsing
    Samuelsson, Christer
    [J]. COMPUTATIONAL LINGUISTICS, 2007, 33 (02) : 267 - 269
  • [30] UNDIRECTED DEPENDENCY PARSING
    Gomez-Rodriguez, Carlos
    Fernandez-Gonzalez, Daniel
    Darriba Bilbao, Victor Manuel
    [J]. COMPUTATIONAL INTELLIGENCE, 2015, 31 (02) : 348 - 384