Risk communication and large language models

Cited: 0
Authors
Sledge, Daniel [1 ]
Thomas, Herschel F. [2 ]
Affiliations
[1] Univ Oklahoma Hlth Sci, Hudson Coll Publ Hlth, 801 NE 13th St,Room 369,POB 26901, Oklahoma City, OK 73104 USA
[2] Univ Texas Austin, Lyndon B Johnson Sch Publ Affairs, Austin, TX USA
Keywords
disaster planning and preparedness; large language models; risk communication; social media; information
DOI
10.1002/rhc3.12303
Chinese Library Classification (CLC)
C93 [Management]; D035 [National Administration]; D523 [Public Administration]; D63 [National Administration]
Subject Classification Codes
12; 1201; 1202; 120202; 1204; 120401
Abstract
The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM-based chat programs for risk communication. We examine ChatGPT-generated responses to 24 different hazard situations and compare these responses to guidelines published for public consumption on the US Department of Homeland Security's Ready.gov website. We find that, although ChatGPT did not generate false or misleading responses, its responses were typically less than optimal in their similarity to guidance from the federal government. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis that differed substantially from those on Ready.gov. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges posed by a potential shift in information flows away from public officials and experts and toward individuals.
Pages: 11