Large Language Models for Tabular Data: Progresses and Future Directions

Cited by: 0
Authors
Dong, Haoyu [1 ]
Wang, Zhiruo [2 ]
Affiliations
[1] Microsoft AI, Beijing, People's Republic of China
[2] Carnegie Mellon University, Pittsburgh, PA, USA
Keywords
Tabular data; Large language models; Representation learning
DOI
10.1145/3626772.3661384
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Tables contain a significant portion of the world's structured information. The ability to efficiently and accurately understand, process, reason about, analyze, and generate tabular data is critical for achieving Artificial General Intelligence (AGI) systems. However, despite their prevalence and importance, tables present unique challenges due to their structured nature and the diverse semantics embedded within them. Textual content, numerical values, visual formats, and even formulas in tables carry rich semantic information that is often underutilized because it is complex to interpret and integrate accurately. Fortunately, the advent of Large Language Models (LLMs) has opened new frontiers in natural language processing (NLP) and machine learning (ML), showing remarkable success in understanding and generating text, code, and more. Applying these advanced models to the domain of tabular data holds the promise of significant breakthroughs in how we process and leverage structured information. Therefore, this tutorial aims to provide a comprehensive study of the advances, challenges, and opportunities in leveraging cutting-edge LLMs for tabular data. By introducing methods for prompting or training cutting-edge LLMs for table interpretation, processing, reasoning, analytics, and generation, we aim to equip researchers and practitioners with the knowledge and tools needed to unlock the full potential of LLMs for tabular data in their domains.
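As a minimal illustration of the prompting approach the abstract mentions (this sketch is not taken from the tutorial itself), a common baseline is to serialize a table into an LLM-friendly text format such as Markdown and pair it with a natural-language question. The function names serialize_table and build_table_qa_prompt below are hypothetical, chosen only for this example.

# Illustrative sketch, assuming a Markdown serialization of the table.
# Sending the resulting prompt to an actual LLM is omitted here.

def serialize_table(headers, rows):
    """Render a table as a Markdown string, a common LLM-friendly format."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return "\n".join(lines)

def build_table_qa_prompt(headers, rows, question):
    """Combine the serialized table with a question for table QA."""
    return ("You are given the following table:\n\n"
            + serialize_table(headers, rows)
            + f"\n\nAnswer the question using only the table: {question}")

if __name__ == "__main__":
    headers = ["Country", "GDP (2023, $T)"]
    rows = [["USA", 27.4], ["China", 17.8]]
    print(build_table_qa_prompt(headers, rows,
                                "Which country has the higher GDP?"))

Richer variants of this idea add cell coordinates, column types, or formulas to the serialization so that the structured semantics the abstract highlights are not lost in flattening.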
Pages: 2997-3000 (4 pages)