Risk communication and large language models

Cited by: 0
Authors
Sledge, Daniel [1 ]
Thomas, Herschel F. [2 ]
Affiliations
[1] Univ Oklahoma Hlth Sci, Hudson Coll Publ Hlth, 801 NE 13th St,Room 369,POB 26901, Oklahoma City, OK 73104 USA
[2] Univ Texas Austin, Lyndon B Johnson Sch Publ Affairs, Austin, TX USA
Keywords
disaster planning and preparedness; large language models; risk communication; SOCIAL MEDIA; INFORMATION;
DOI
10.1002/rhc3.12303
Chinese Library Classification (CLC)
C93 [Management Science]; D035 [National Public Administration]; D523 [Public Administration]; D63 [National Public Administration]
Subject classification codes
12; 1201; 1202; 120202; 1204; 120401
Abstract
The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM-based chat programs for risk communication. We examine ChatGPT-generated responses to 24 different hazard situations. We compare these responses to guidelines published for public consumption on the US Department of Homeland Security's Ready.gov website. We find that, although ChatGPT did not generate false or misleading responses, ChatGPT responses were typically less than optimal in their similarity to guidance from the federal government. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis that differed substantially from those on Ready.gov. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges represented by a potential shift in information flows away from public officials and experts and towards individuals.
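The abstract reports comparing ChatGPT-generated hazard responses with Ready.gov guidance "in terms of their similarity," but it does not state how similarity was measured. The sketch below is an illustration only, assuming a simple lexical measure (TF-IDF cosine similarity via scikit-learn); the guidance and response texts are hypothetical placeholders, not material from the study.

```python
# Illustrative sketch only: the article does not specify its comparison method.
# One plausible way to score lexical similarity between an LLM-generated hazard
# response and the corresponding Ready.gov guidance is TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical placeholder texts standing in for the real documents.
ready_gov_guidance = (
    "If a tornado warning is issued, go to a basement or an interior room "
    "on the lowest floor, away from windows, and cover your head and neck."
)
chatgpt_response = (
    "Seek shelter immediately in a basement or small interior room, stay "
    "away from windows, and protect your head until the tornado has passed."
)

# Fit a shared TF-IDF vocabulary over both documents and compare them.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([ready_gov_guidance, chatgpt_response])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

print(f"TF-IDF cosine similarity: {score:.2f}")
```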
Pages: 11