Can large language models effectively reason about adverse weather conditions?

Cited by: 0
Authors
Zafarmomen, Nima [1]
Samadi, Vidya [1,2]
Affiliations
[1] Clemson Univ, Dept Agr Sci, Clemson, SC 29634 USA
[2] Clemson Univ, Artificial Intelligence Res Inst Sci & Engn AIRISE, Sch Comp, Clemson, SC USA
Funding
U.S. National Science Foundation;
Keywords
Large language model; Text classification; LLaMA; BART; BERT; Adverse weather conditions;
DOI
10.1016/j.envsoft.2025.106421
Chinese Library Classification (CLC)
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
This paper seeks to answer the question "Can Large Language Models (LLMs) effectively reason about adverse weather conditions?" To address this question, we used multiple LLMs to harness US National Weather Service (NWS) flood report data spanning June 2005 to September 2024. Bidirectional and Auto-Regressive Transformer (BART), Bidirectional Encoder Representations from Transformers (BERT), Large Language Model Meta AI (LLaMA-2), LLaMA-3, and LLaMA-3.1 were employed to categorize the reports according to predefined labels. The methodology was implemented in Charleston County, South Carolina, USA. Extreme events were unevenly distributed across the training period, with the "Cyclonic" category exhibiting significantly fewer instances than the "Flood" and "Thunderstorm" categories. Analysis suggests that LLaMA-3 reached its peak performance at 60% of the dataset size, while the other LLMs achieved peak performance at approximately 80-100% of the dataset size. This study provides insight into the application of LLMs to reasoning about adverse weather conditions.
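As context for the classification setup described in the abstract, the sketch below shows one way such a task could be framed with a pretrained BERT encoder and the Hugging Face transformers library. This is not the authors' code: the checkpoint name (bert-base-uncased), the three label names, and the example report text are assumptions drawn only from the abstract, and the classification head would still need fine-tuning on the labeled NWS reports before its predictions are meaningful.

# Minimal sketch (assumptions noted above): a BERT sequence classifier with a
# three-way head matching the "Flood" / "Thunderstorm" / "Cyclonic" labels
# mentioned in the abstract.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

LABELS = ["Flood", "Thunderstorm", "Cyclonic"]  # predefined categories from the abstract

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),                          # classification head sized to the label set
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
)

# Hypothetical NWS-style event narrative (illustrative, not from the dataset).
report = ("Heavy rainfall and an evening high tide caused minor flooding of "
          "low-lying roads across downtown Charleston.")

inputs = tokenizer(report, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = LABELS[int(logits.argmax(dim=-1))]
print(predicted)  # arbitrary until the head is fine-tuned on labeled reports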
Pages: 14
Related papers
50 records in total
  • [1] Can large language models reason about medical questions?
    Lievin, Valentin
    Hother, Christoffer Egeberg
    Motzfeldt, Andreas Geert
    Winther, Ole
    PATTERNS, 2024, 5 (03):
  • [2] Can large language models reason and plan?
    Kambhampati, Subbarao
    ANNALS OF THE NEW YORK ACADEMY OF SCIENCES, 2024, 1534 (01) : 15 - 18
  • [3] Can Agrometeorological Indices of Adverse Weather Conditions Help to Improve Yield Prediction by Crop Models?
    Lalic, Branislava
    Eitzinger, Josef
    Thaler, Sabina
    Vucetic, Visnjica
    Nejedlik, Pavol
    Eckersten, Henrik
    Jacimovic, Goran
    Nikolic-Djoric, Emilija
    ATMOSPHERE, 2014, 5 (04): : 1020 - 1041
  • [4] Can Pretrained Language Models (Yet) Reason Deductively?
    Yuan, Zhangdie
    Hu, Songbo
    Vulic, Ivan
    Korhonen, Anna
    Meng, Zaiqiao
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1447 - 1462
  • [5] Talking about Large Language Models
    Shanahan, Murray
    COMMUNICATIONS OF THE ACM, 2024, 67 (02) : 68 - 79
  • [6] Employing large language models safely and effectively as a practicing neurosurgeon
    Patil, Advait
    Serrato, Paul
    Cleaver, Gracie
    Limbania, Daniela
    See, Alfred Pokmeng
    Huang, Kevin T.
    ACTA NEUROCHIRURGICA, 167 (1)
  • [7] Can large language models write reflectively
    Li, Y.
    Sha, L.
    Yan, L.
    Lin, J.
    Raković, M.
    Galbraith, K.
    Lyons, K.
    Gašević, D.
    Chen, G.
    COMPUTERS AND EDUCATION: ARTIFICIAL INTELLIGENCE, 2023, 4
  • [8] Can large language models apply the law?
    Marcos, Henrique
    AI & SOCIETY, 2024,
  • [9] Can Large Language Models Help Healthcare?
    Miyamoto, Yoshihiro
    JOURNAL OF ATHEROSCLEROSIS AND THROMBOSIS, 2024,
  • [10] Can large language models understand molecules?
    Sadeghi, Shaghayegh
    Bui, Alan
    Forooghi, Ali
    Lu, Jianguo
    Ngom, Alioune
    BMC BIOINFORMATICS, 2024, 25 (01):