Can Large Language Models Assist in Hazard Analysis?

Cited by: 0
Authors
Diemert, Simon [1 ]
Weber, Jens H. [1 ]
Affiliations
[1] Univ Victoria, Victoria, BC, Canada
Keywords
Hazard Analysis; Artificial Intelligence; Large Language Models; Co-Hazard Analysis;
DOI
10.1007/978-3-031-40953-0_35
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large Language Models (LLMs), such as GPT-3, have demonstrated remarkable natural language processing and generation capabilities and have been applied to a variety of tasks, such as source code generation. This paper explores the potential of integrating LLMs into the hazard analysis of safety-critical systems, a process which we refer to as co-hazard analysis (CoHA). In CoHA, a human analyst interacts with an LLM via a context-aware chat session and uses the responses to support elicitation of possible hazard causes. In a preliminary experiment, we explore CoHA with three increasingly complex versions of a simple system, using OpenAI's ChatGPT service. The quality of ChatGPT's responses was systematically assessed to determine the feasibility of CoHA given the current state of LLM technology. The results suggest that LLMs may be useful for supporting human analysts performing hazard analysis.
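The abstract describes CoHA as an interactive, context-aware chat session in which the analyst supplies the system description and asks the LLM to propose candidate hazard causes. The paper itself used OpenAI's ChatGPT service; the sketch below only illustrates how a comparable exchange could be scripted with the OpenAI Python client. The model name, prompt wording, and the `elicit_hazard_causes` helper are illustrative assumptions, not the authors' protocol.

```python
# Illustrative sketch only: shows a context-aware chat exchange of the kind CoHA
# relies on, driven through the OpenAI Python client rather than the ChatGPT UI.
# Model name, prompt wording, and this helper are assumptions, not the paper's method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def elicit_hazard_causes(system_description: str, hazard: str) -> str:
    """Ask the LLM to suggest possible causes for one identified hazard."""
    messages = [
        # Context turn: the analyst supplies the system description once,
        # so the session stays context-aware across follow-up questions.
        {"role": "system",
         "content": "You are assisting a safety analyst with hazard analysis. "
                    "System under analysis:\n" + system_description},
        # Analyst turn: request candidate hazard causes for human review.
        {"role": "user",
         "content": f"List plausible causes of the hazard: '{hazard}'. "
                    "Give a brief rationale for each; the analyst will judge validity."},
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; substitute whatever is available
        messages=messages,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    description = "A simple traffic light controller for a pedestrian crossing."
    print(elicit_hazard_causes(
        description, "Vehicle and pedestrian signals both indicate 'go'"))
```

Consistent with the paper's framing, the LLM output is only a prompt for elicitation: the human analyst remains responsible for judging which suggested causes are credible and carrying them into the hazard analysis.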
Pages: 410-422
Page count: 13