Evaluating the Adaptability of Large Language Models for Knowledge-aware Question and Answering

Cited by: 0
Authors
Thakkar, Jay [1 ]
Kolekar, Suresh [1 ]
Gite, Shilpa [1 ,2 ]
Pradhan, Biswajeet [3 ]
Alamri, Abdullah [4 ]
Affiliations
[1] Symbiosis Int Deemed Univ, Symbiosis Ctr Appl AI SCAAI, Pune 412115, India
[2] Symbiosis Int Deemed Univ, Symbiosis Inst Technol, Artificial Intelligence & Machine Learning Dept, Pune 412115, India
[3] Univ Technol Sydney, Fac Engn & Informat Technol, Ctr Adv Modelling & Geospatial Informat Syst CAMGI, Sch Civil & Environm Engn, Sydney, NSW, Australia
[4] King Saud Univ, Coll Sci, Dept Geol & Geophys, Riyadh, Saudi Arabia
Keywords
large language models; abstractive summarization; knowledge-aware summarization; personalized summarization; quality
DOI
10.2478/ijssis-2024-0021
CLC Number (Chinese Library Classification)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Code
0808; 0809
Abstract
Large language models (LLMs) have transformed open-domain abstractive summarization, delivering coherent and precise summaries. However, their adaptability to user knowledge levels is largely unexplored. This study investigates LLMs' efficacy in tailoring summaries to user familiarity. We assess various LLM architectures across different familiarity settings using metrics like linguistic complexity and reading grade levels. Findings expose current capabilities and constraints in knowledge-aware summarization, paving the way for personalized systems. We analyze LLM performance across three familiarity levels: none, basic awareness, and complete familiarity. Utilizing established readability metrics, we gauge summary complexity. Results indicate LLMs can adjust summaries to some extent based on user familiarity. Yet, challenges persist in accurately assessing user knowledge and crafting informative, comprehensible summaries. We highlight areas for enhancement, including improved user knowledge modeling and domain-specific integration. This research informs the advancement of adaptive summarization systems, offering insights for future development.
Pages: 20
Related Papers
50 records in total
  • [41] Tree-of-Reasoning Question Decomposition for Complex Question Answering with Large Language Models
    Zhang, Kun
    Zeng, Jiali
    Meng, Fandong
    Wang, Yuanzhuo
    Sun, Shiqi
    Bai, Long
    Shen, Huawei
    Zhou, Jie
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 19560 - 19568
  • [42] MedExpQA: Multilingual benchmarking of Large Language Models for Medical Question Answering
    Alonso, Inigo
    Oronoz, Maite
    Agerri, Rodrigo
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 155
  • [43] Chart Question Answering based on Modality Conversion and Large Language Models
    Liu, Yi-Cheng
    Chu, Wei-Ta
    PROCEEDINGS OF THE FIRST ACM WORKSHOP ON AI-POWERED QUESTION ANSWERING SYSTEMS FOR MULTIMEDIA, AIQAM 2024, 2024, : 19 - 24
  • [44] Knowledge-Aware Neural Networks for Medical Forum Question Classification
    Roy, Soumyadeep
    Chakraborty, Sudip
    Mandal, Aishik
    Balde, Gunjan
    Sharma, Prakhar
    Natarajan, Anandhavelu
    Khosla, Megha
    Sural, Shamik
    Ganguly, Niloy
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 3398 - 3402
  • [45] Evaluating Intelligence and Knowledge in Large Language Models
    Bianchini, Francesco
    TOPOI-AN INTERNATIONAL REVIEW OF PHILOSOPHY, 2025, 44 (01): 163 - 173
  • [46] How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering
    Liu, Jinxin
    Cao, Shulin
    Shi, Jiaxin
    Zhang, Tingjian
    Nie, Lunyiu
    Liu, Linmei
    Hou, Lei
    Li, Juanzi
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 792 - 815
  • [47] KoSEL: Knowledge subgraph enhanced large language model for medical question answering
    Zeng, Zefan
    Cheng, Qing
    Hu, Xingchen
    Zhuang, Yan
    Liu, Xinwang
    He, Kunlun
    Liu, Zhong
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [48] QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
    Yasunaga, Michihiro
    Ren, Hongyu
    Bosselut, Antoine
    Liang, Percy
    Leskovec, Jure
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 535 - 546
  • [49] JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering
    Sun, Yueqing
    Shi, Qi
    Qi, Le
    Zhang, Yu
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 5049 - 5060
  • [50] Incorporating Domain Knowledge and Semantic Information into Language Models for Commonsense Question Answering
    Zhou, Ruiying
    Tian, Keke
    Lai, Hanjiang
    Yin, Jian
    PROCEEDINGS OF THE 2021 IEEE 24TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN (CSCWD), 2021, : 1160 - 1165