Performance of Large Language Models ChatGPT and Gemini on Workplace Management Questions in Radiology

Citations: 0
Authors
Leutz-Schmidt, Patricia [1 ]
Palm, Viktoria [1 ]
Mathy, Rene Michael [1 ]
Groezinger, Martin [2 ]
Kauczor, Hans-Ulrich [1 ]
Jang, Hyungseok [3 ]
Sedaghat, Sam [1 ]
Affiliations
[1] Univ Hosp Heidelberg, Dept Diagnost & Intervent Radiol, D-69120 Heidelberg, Germany
[2] German Canc Res Ctr, D-69120 Heidelberg, Germany
[3] Univ Calif Davis, Dept Radiol, Davis, CA 95616 USA
Keywords
large language models; chatbot; ChatGPT; Gemini; radiology; management; leadership
DOI
10.3390/diagnostics15040497
Chinese Library Classification
R5 [Internal Medicine];
Subject Classification Code
1002; 100201;
Abstract
Background/Objectives: Despite the growing popularity of large language models (LLMs), there remains a notable lack of research examining their role in workplace management. This study aimed to address this gap by evaluating the performance of four prominent LLMs, ChatGPT-3.5, ChatGPT-4.0, Gemini, and Gemini Advanced, in responding to workplace management questions specific to radiology. Methods: ChatGPT-3.5 and ChatGPT-4.0 (both OpenAI, San Francisco, CA, USA) and Gemini and Gemini Advanced (both Google DeepMind, Mountain View, CA, USA) generated answers to 31 pre-selected questions on four different areas of workplace management in radiology: (1) patient management, (2) imaging and radiation management, (3) learning and personal development, and (4) administrative and department management. Two readers independently evaluated the answers provided by the LLM chatbots. Three 4-point scores were used to assess the quality of the responses: (1) overall quality score (OQS), (2) understandability score (US), and (3) implementability score (IS). The mean quality score (MQS) was calculated from these three scores. Results: The overall inter-rater reliability (IRR) was good for Gemini Advanced (IRR 79%), Gemini (IRR 78%), and ChatGPT-3.5 (IRR 65%), and moderate for ChatGPT-4.0 (IRR 54%). The overall MQS averaged 3.36 (SD: 0.64) for ChatGPT-3.5, 3.75 (SD: 0.43) for ChatGPT-4.0, 3.29 (SD: 0.64) for Gemini, and 3.51 (SD: 0.53) for Gemini Advanced. The highest OQS, US, IS, and MQS were achieved by ChatGPT-4.0 in all categories, followed by Gemini Advanced. ChatGPT-4.0 was the most consistently superior performer and outperformed all other chatbots (p < 0.001-0.002). Gemini Advanced performed significantly better than Gemini (p = 0.003) and showed a non-significant trend toward outperforming ChatGPT-3.5 (p = 0.056). ChatGPT-4.0 provided superior answers in most cases compared with the other LLM chatbots. None of the answers provided by the chatbots were rated "insufficient".
Conclusions: All four LLM chatbots performed well on workplace management questions in radiology. ChatGPT-4.0 outperformed ChatGPT-3.5, Gemini, and Gemini Advanced. Our study revealed that LLMs have the potential to improve workplace management in radiology by assisting with various tasks, making these processes more efficient without requiring specialized management skills.
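The abstract's summary statistics rest on two simple calculations: averaging the three 4-point ratings (OQS, US, IS) into the MQS, and quantifying reader agreement as an IRR percentage. A minimal sketch of that arithmetic, assuming a plain mean for the MQS and simple percent agreement for the IRR (the paper does not specify its exact aggregation or agreement formula):

```python
# Hedged sketch of the scoring arithmetic described in the study.
# Assumptions (not confirmed by the source): MQS is the plain mean of
# the three 4-point ratings, and IRR is percent agreement between the
# two readers' item-level ratings.

def mean_quality_score(oqs: int, us: int, is_score: int) -> float:
    """Average the three 4-point ratings (OQS, US, IS) for one answer."""
    return (oqs + us + is_score) / 3


def percent_agreement(reader1: list[int], reader2: list[int]) -> float:
    """Fraction of items on which two readers gave identical ratings."""
    if len(reader1) != len(reader2):
        raise ValueError("Readers must rate the same number of items")
    matches = sum(a == b for a, b in zip(reader1, reader2))
    return matches / len(reader1)


# Example: one chatbot answer rated 4/3/4 on the three scales.
mqs = mean_quality_score(4, 3, 4)  # 11 / 3, about 3.67

# Example: two readers rating five answers agree on four of them.
irr = percent_agreement([4, 3, 4, 2, 4], [4, 3, 3, 2, 4])  # 0.8
```

With hypothetical ratings like these, an MQS near 3.7 and an IRR of 80% would fall in the same range the study reports for its best-performing chatbots.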
Pages: 13