Table Meets LLM: Can Large Language Models Understand Structured Table Data? A Benchmark and Empirical Study

Cited by: 1
Authors
Sui, Yuan [1 ,4 ]
Zhou, Mengyu [2 ]
Zhou, Mingjie [3 ,4 ]
Han, Shi [2 ]
Zhang, Dongmei [2 ]
Affiliations
[1] Natl Univ Singapore, Singapore, Singapore
[2] Microsoft, Beijing, Peoples R China
[3] Univ Hong Kong, Hong Kong, Peoples R China
[4] Microsoft Res Asia, Beijing, Peoples R China
Keywords
large language models; semi-structured data; structural understanding capabilities; benchmark;
DOI
10.1145/3616855.3635752
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models (LLMs) are becoming attractive as few-shot reasoners to solve Natural Language (NL)-related tasks. However, there is still much to learn about how well LLMs understand structured data, such as tables. Although tables can be used as input to LLMs with serialization, there is a lack of comprehensive studies that examine whether LLMs can truly comprehend such data. In this paper, we try to understand this by designing a benchmark to evaluate the structural understanding capabilities (SUC) of LLMs. The benchmark we create includes seven tasks, each with its own unique challenges, e.g., cell lookup, row retrieval, and size detection. We perform a series of evaluations on GPT-3.5 and GPT-4. We find that performance varied depending on several input choices, including table input format, content order, role prompting, and partition marks. Drawing from the insights gained through the benchmark evaluations, we propose self-augmentation for effective structural prompting, such as critical value / range identification using internal knowledge of LLMs. When combined with carefully chosen input choices, these structural prompting methods lead to promising improvements in LLM performance on a variety of tabular tasks, e.g., TabFact (↑2.31%), HybridQA (↑2.13%), SQA (↑2.72%), Feverous (↑0.84%), and ToTTo (↑5.68%). We believe that our open-source benchmark and proposed prompting methods can serve as a simple yet generic selection for future research.
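Two of the input choices the abstract names — a serialized table format with explicit partition marks, and role prompting — can be illustrated with a minimal sketch. This is our own illustration under stated assumptions, not the paper's released code; the function names and example table are hypothetical.

```python
# Illustrative sketch of two input choices the SUC study evaluates:
# (1) serializing a table with explicit partition marks ("|"), and
# (2) prepending a role prompt before the question.

def serialize_table(headers, rows, caption=""):
    """Render a table as pipe-delimited text; the "|" characters act as
    partition marks that separate cells for the model."""
    lines = []
    if caption:
        lines.append(f"Caption: {caption}")
    lines.append("| " + " | ".join(headers) + " |")
    lines.append("|" + "---|" * len(headers))        # header separator row
    for row in rows:
        lines.append("| " + " | ".join(str(c) for c in row) + " |")
    return "\n".join(lines)

def build_prompt(table_text, question):
    # Role prompting: a short instruction framing the model as a table expert.
    return (
        "You are an expert in reading structured tables.\n"
        f"{table_text}\n"
        f"Question: {question}\nAnswer:"
    )

table = serialize_table(
    ["City", "Population"],
    [["Singapore", 5917600], ["Hong Kong", 7413100]],
    caption="Populations (2023)",
)
prompt = build_prompt(table, "Which city has the larger population?")
print(prompt)
```

The study's point is that such formatting decisions (format, marks, order, role text) measurably change LLM accuracy on tasks like cell lookup and row retrieval, so they are worth tuning before more elaborate prompting.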
Pages: 645-654 (10 pages)