Large Language Models are Complex Table Parsers

Cited: 0
|
Authors
Zhao, Bowen [1 ]
Ji, Changkai [2 ]
Zhang, Yuejie [1 ]
He, Wen [3 ]
Wang, Yingwen [3 ]
Wang, Qing [3 ]
Feng, Rui [1 ,2 ,3 ]
Zhang, Xiaobo [3 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China
[2] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[3] Fudan Univ, Natl Childrens Med Ctr, Childrens Hosp, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
摘要
With the Generative Pre-trained Transformer 3.5 (GPT-3.5) exhibiting remarkable reasoning and comprehension abilities in Natural Language Processing (NLP), most Question Answering (QA) research has primarily centered around general QA tasks based on GPT, neglecting the specific challenges posed by Complex Table QA. In this paper, we propose to incorporate GPT-3.5 to address such challenges, in which complex tables are reconstructed into tuples and specific prompt designs are employed for dialogues. Specifically, we encode each cell's hierarchical structure, position information, and content as a tuple. By enhancing the prompt template with an explanatory description of the meaning of each tuple and the logical reasoning process of the task, we effectively improve the hierarchical structure awareness capability of GPT-3.5 to better parse the complex tables. Extensive experiments and results on Complex Table QA datasets, i.e., the open-domain dataset HiTAB and the aviation domain dataset AIT-QA, show that our approach significantly outperforms previous work on both datasets, leading to state-of-the-art (SOTA) performance.
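The abstract's core idea, encoding each cell's hierarchical structure, position, and content as a tuple before prompting, can be illustrated with a minimal sketch. The exact tuple schema and prompt template used in the paper are not given in this record, so the field layout and function names below are assumptions for illustration only.

```python
# Illustrative sketch: flatten a table with hierarchical column headers
# into one tuple per cell, in the spirit the abstract describes.
# Tuple layout (assumed, not the paper's exact schema):
#   (column header path, row label, (row index, col index), cell content)

def table_to_tuples(header_paths, rows):
    """header_paths: one "/"-joined hierarchy path per column,
    e.g. "2020/Revenue"; rows: list of (row_label, cell_values) pairs."""
    tuples = []
    for r, (row_label, cells) in enumerate(rows):
        for c, value in enumerate(cells):
            tuples.append((header_paths[c], row_label, (r, c), value))
    return tuples

# A two-level column header flattened via "/":
headers = ["2020/Revenue", "2020/Profit"]
rows = [("Airline A", [100, 10]), ("Airline B", [200, 20])]
cell_tuples = table_to_tuples(headers, rows)
for t in cell_tuples:
    print(t)
```

In a prompt, each tuple would then be serialized as text alongside a description of what its fields mean, so the model can resolve questions like "What was Airline B's 2020 profit?" by matching header path and row label rather than by visual table layout.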
Pages: 14786 - 14802
Page count: 17
Related papers
50 items in total
  • [1] Bootstrapping Multilingual Semantic Parsers using Large Language Models
    Awasthi, Abhijeet
    Gupta, Nitish
    Samanta, Bidisha
    Dave, Shachi
    Sarawagi, Sunita
    Talukdar, Partha
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2455 - 2467
  • [2] A survey of table reasoning with large language models
    Zhang, Xuanliang
    Wang, Dingzirui
    Dou, Longxu
    Zhu, Qingfu
    Che, Wanxiang
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (09)
  • [3] Cocoon: Semantic Table Profiling Using Large Language Models
    Huang, Zezhou
    Wu, Eugene
    WORKSHOP ON HUMAN-IN-THE-LOOP DATA ANALYTICS, HILDA 2024, 2024,
  • [4] Large Language Models are few(1)-shot Table Reasoners
    Chen, Wenhu
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1120 - 1130
  • [5] A Powerful Simulation Language for Large and Complex Models
    Lin, Jian (The Management School)
    JOURNAL OF SYSTEMS SCIENCE AND SYSTEMS ENGINEERING, 1998, (02) : 106 - 114
  • [6] Comparison of Structural Parsers and Neural Language Models as Surprisal Estimators
    Oh, Byung-Doh
    Clark, Christian
    Schuler, William
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [7] Are Large Language Models Table-based Fact-Checkers?
    Zhang, Hanwen
    Si, Qingyi
    Fu, Peng
    Lin, Zheng
    Wang, Weiping
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 3086 - 3091
  • [8] Constrained Language Models Yield Few-Shot Semantic Parsers
    Shin, Richard
    Lin, Christopher H.
    Thomson, Sam
    Chen, Charles
    Roy, Subhro
    Platanios, Emmanouil Antonios
    Pauls, Adam
    Klein, Dan
    Eisner, Jason
    Van Durme, Benjamin
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 7699 - 7715
  • [9] Leveraging Large Language Models for Flexible and Robust Table-to-Text Generation
    Oro, Ermelinda
    De Grandis, Luca
    Granata, Francesco Maria
    Ruffolo, Massimo
    DATABASE AND EXPERT SYSTEMS APPLICATIONS, PT I, DEXA 2024, 2024, 14910 : 222 - 227
  • [10] Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers
    Li, Jiaxi
    Lu, Wei
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 5208 - 5222