AutoTQA: Towards Autonomous Tabular Question Answering through Multi-Agent Large Language Models

Cited by: 0
Authors
Zhu, Jun-Peng [1 ,2 ]
Cai, Peng [1 ]
Xu, Kai [2 ]
Li, Li [2 ]
Sun, Yishen [2 ]
Zhou, Shuai [2 ]
Su, Haihuang [2 ]
Tang, Liu [2 ]
Liu, Qi [2 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] PingCAP, Beijing, Peoples R China
Source
PROCEEDINGS OF THE VLDB ENDOWMENT | 2024, Vol. 17, No. 12
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.14778/3685800.3685816
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
With the growing significance of data analysis, several studies aim to provide precise answers to users' natural language questions over tables, a task referred to as tabular question answering (TQA). State-of-the-art TQA approaches are limited to handling single-table questions. However, real-world TQA problems are inherently complex and frequently involve multiple tables, and directly extending single-table TQA designs to handle multiple tables is difficult, primarily because most single-table TQA methods have limited extensibility. This paper proposes AutoTQA, a novel Autonomous Tabular Question Answering framework that employs multi-agent large language models (LLMs) across multiple tables from various systems (e.g., TiDB, BigQuery). AutoTQA comprises five agents: the User, responsible for receiving the user's natural language inquiry; the Planner, tasked with creating an execution plan for the inquiry; the Engineer, responsible for executing the plan step-by-step; the Executor, which provides various execution environments (e.g., text-to-SQL) to fulfill specific tasks assigned by the Engineer; and the Critic, responsible for judging whether the user's inquiry has been fully answered and for identifying gaps between the current results and the initial tasks. To facilitate the interaction between the different agents, we have also devised agent scheduling algorithms. Furthermore, we have developed LinguFlow, an open-source, low-code visual programming tool, to quickly build and debug LLM-based applications and to accelerate the creation of various external tools and execution environments. We also implemented a series of data connectors, which allow AutoTQA to access tables from multiple systems. Extensive experiments show that AutoTQA delivers outstanding performance on four representative datasets.
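The abstract describes a Planner/Engineer/Executor/Critic loop coordinated by agent scheduling algorithms. The Python sketch below is a rough illustration of one plausible shape of such a loop, under the assumption that each agent is a separate component and that the Executor dispatches sub-tasks to named execution environments such as text-to-SQL; every class, function, and environment name here is a hypothetical stand-in, not AutoTQA's actual API, and LinguFlow and the data connectors are not reproduced.

from dataclasses import dataclass

@dataclass
class Step:
    description: str            # natural-language sub-task produced by the Planner
    environment: str            # execution environment to use, e.g. "text-to-sql"
    result: str | None = None   # filled in by the Executor

class Planner:
    def plan(self, question: str) -> list[Step]:
        # In AutoTQA this would be an LLM call; the fixed plan below is a placeholder.
        return [Step("translate the question into SQL over the relevant tables", "text-to-sql")]

class Executor:
    """Maps environment names (e.g. text-to-SQL over TiDB/BigQuery) to callables."""
    def __init__(self, environments: dict):
        self.environments = environments

    def run(self, step: Step) -> str:
        return self.environments[step.environment](step.description)

class Engineer:
    def __init__(self, executor: Executor):
        self.executor = executor

    def execute(self, plan: list[Step]) -> list[Step]:
        for step in plan:  # execute the plan step-by-step
            step.result = self.executor.run(step)
        return plan

class Critic:
    def is_complete(self, question: str, plan: list[Step]) -> bool:
        # An LLM would judge whether the results answer the question; this stub
        # merely checks that every step produced some result.
        return all(s.result is not None for s in plan)

def answer(question: str, planner: Planner, engineer: Engineer, critic: Critic,
           max_rounds: int = 3) -> list[Step]:
    """A toy scheduling loop: re-plan and re-execute until the Critic is satisfied."""
    plan: list[Step] = []
    for _ in range(max_rounds):
        plan = engineer.execute(planner.plan(question))
        if critic.is_complete(question, plan):
            break
    return plan

# Example usage with a dummy text-to-SQL environment:
executor = Executor({"text-to-sql": lambda task: "SELECT ...  -- derived from: " + task})
print(answer("How many orders did each customer place?",
             Planner(), Engineer(executor), Critic()))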
Pages: 3920-3933
Page count: 14