ESLM: Improving Entity Summarization by Leveraging Language Models

Cited: 0
Authors
Firmansyah, Asep Fajar [1 ,2 ]
Moussallem, Diego [1 ,3 ]
Ngomo, Axel-Cyrille Ngonga [1 ]
Affiliations
[1] Paderborn Univ, Warburger Str 100, D-33098 Paderborn, Germany
[2] State Islamic Univ Syarif Hidayatullah Jakarta, Jakarta, Indonesia
[3] Jusbrasil, Salvador, BA, Brazil
Source
SEMANTIC WEB, PT I, ESWC 2024 | 2024 / Vol. 14664
Keywords
Entity Summarization; Language Models; Knowledge Graph Embeddings;
DOI
10.1007/978-3-031-60626-7_9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Entity summarizers are crucial for many applications built on knowledge graphs, so achieving high performance on the entity summarization task is critical. The currently best-performing approaches integrate knowledge graphs with text embeddings to encode entity-related triples, but they still rely on static word embeddings that cannot capture multiple contexts. We hypothesize that incorporating contextual language models into entity summarizers can further improve their performance. We hence propose ESLM (Entity Summarization using Language Models), an approach that enhances entity summarization by integrating contextual language models with knowledge graph embeddings. We evaluate our models on the DBpedia and LinkedMDB datasets from ESBM version 1.2, and on the FACES dataset. In our experiments, ESLM achieves an F-measure of up to 0.591 and outperforms state-of-the-art approaches with respect to the F-measure in four out of six experimental settings. In addition, ESLM outperforms state-of-the-art models in all experimental settings when evaluated using the NDCG metric. Moreover, contextual language models notably enhance the performance of our entity summarization model, especially when combined with knowledge graph embeddings; we observed a notable boost on DBpedia and FACES. Our approach and the code to rerun our experiments are available at https://github.com/dice-group/ESLM.
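The core idea the abstract describes, scoring an entity's triples with features from both a contextual language model and a knowledge graph embedding model, then keeping the top-ranked triples as the summary, can be sketched as follows. This is not the paper's implementation: the `lm_encode` and `kge_encode` functions below are hypothetical stand-ins (in ESLM these roles are played by a real contextual LM such as BERT and trained KG embeddings), and the scorer weights are untrained random values used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lm_encode(text, dim=8):
    # Stand-in for a contextual LM encoder: a pseudo-embedding derived
    # from the string's hash (stable within one process run).
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)

def kge_encode(item, dim=8):
    # Stand-in for a trained knowledge graph embedding lookup.
    seed = abs(hash(("kge", item))) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)

def score_triple(triple, w):
    s, p, o = triple
    # Contextual representation of the verbalized triple ...
    lm_vec = lm_encode(f"{s} {p} {o}")
    # ... concatenated with KG embeddings of predicate and object.
    kge_vec = np.concatenate([kge_encode(p), kge_encode(o)])
    return float(w @ np.concatenate([lm_vec, kge_vec]))

def summarize(triples, k=2):
    # Untrained linear scorer over the 8 + 8 + 8 = 24-dim features;
    # ESLM would learn this ranking from labeled gold summaries.
    w = rng.normal(size=24)
    return sorted(triples, key=lambda t: score_triple(t, w), reverse=True)[:k]

triples = [
    ("Douglas_Adams", "birthPlace", "Cambridge"),
    ("Douglas_Adams", "author", "Hitchhikers_Guide"),
    ("Douglas_Adams", "deathYear", "2001"),
]
summary = summarize(triples, k=2)
print(summary)
```

With trained components, the ranking would reflect which facts best characterize the entity; here the selection is arbitrary, but the data flow (verbalize, encode twice, concatenate, score, take top-k) mirrors the architecture the abstract outlines.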
Pages: 162 - 179
Page count: 18
Related Papers
50 results in total
  • [21] Leveraging Pretrained Models for Automatic Summarization of Doctor-Patient Conversations
    Zhang, Longxiang
    Negrinho, Renato
    Ghosh, Arindam
    Jagannathan, Vasudevan
    Hassanzadeh, Hamid Reza
    Schaaf, Thomas
    Gormley, Matthew R.
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3693 - 3712
  • [22] ELES: Combining Entity Linking and Entity Summarization
    Thalhammer, Andreas
    Rettinger, Achim
    WEB ENGINEERING (ICWE 2016), 2016, 9671 : 547 - 550
  • [23] Improving Legal Document Summarization Using Graphical Models
    Saravanan, M.
    Ravindran, B.
    Raman, S.
    LEGAL KNOWLEDGE AND INFORMATION SYSTEMS, 2006, 152 : 51 - 60
  • [24] On Improving Summarization Factual Consistency from Natural Language Feedback
    Liu, Yixin
    Deb, Budhaditya
    Teruel, Milagro
    Halfaker, Aaron
    Radev, Dragomir
    Awadallah, Ahmed H.
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 15144 - 15161
  • [25] Evaluating large language models on medical evidence summarization
    Tang, Liyan
    Sun, Zhaoyi
    Idnay, Betina
    Nestor, Jordan G.
    Soroush, Ali
    Elias, Pierre A.
    Xu, Ziyang
    Ding, Ying
    Durrett, Greg
    Rousseau, Justin F.
    Weng, Chunhua
    Peng, Yifan
    NPJ DIGITAL MEDICINE, 2023, 6 (01)
  • [27] Leveraging Pretrained Language Models for Enhanced Entity Matching: A Comprehensive Study of Fine-Tuning and Prompt Learning Paradigms
    Wang, Yu
    Zhou, Luyao
    Wang, Yuan
    Peng, Zhenwan
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2024, 2024
  • [28] Leveraging large language models for predictive chemistry
    Jablonka, Kevin Maik
    Schwaller, Philippe
    Ortega-Guerrero, Andres
    Smit, Berend
    NATURE MACHINE INTELLIGENCE, 2024, 6: 161 - 169
  • [29] Leveraging Ontological Knowledge for Neural Language Models
    Deshpande, Ameet
    Jegadeesan, Monisha
    PROCEEDINGS OF THE 6TH ACM IKDD CODS AND 24TH COMAD, 2019, : 350 - 353
  • [30] Leveraging Large Language Models for Sequential Recommendation
    Harte, Jesse
    Zorgdrager, Wouter
    Louridas, Panos
    Katsifodimos, Asterios
    Jannach, Dietmar
    Fragkoulis, Marios
    PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023, 2023, : 1096 - 1102