Performance of Large Language Models ChatGPT and Gemini on Workplace Management Questions in Radiology

Cited: 0
Authors
Leutz-Schmidt, Patricia [1 ]
Palm, Viktoria [1 ]
Mathy, Rene Michael [1 ]
Groezinger, Martin [2 ]
Kauczor, Hans-Ulrich [1 ]
Jang, Hyungseok [3 ]
Sedaghat, Sam [1 ]
Affiliations
[1] Univ Hosp Heidelberg, Dept Diagnost & Intervent Radiol, D-69120 Heidelberg, Germany
[2] German Canc Res Ctr, D-69120 Heidelberg, Germany
[3] Univ Calif Davis, Dept Radiol, Davis, CA 95616 USA
Keywords
large language models; chatbot; ChatGPT; Gemini; radiology; management; leadership
DOI
10.3390/diagnostics15040497
Chinese Library Classification
R5 [Internal Medicine];
Subject Classification Code
1002; 100201;
Abstract
Background/Objectives: Despite the growing popularity of large language models (LLMs), there remains a notable lack of research examining their role in workplace management. This study aimed to address this gap by evaluating the performance of ChatGPT-3.5, ChatGPT-4.0, Gemini, and Gemini Advanced, four widely used LLMs, in responding to workplace management questions specific to radiology. Methods: ChatGPT-3.5 and ChatGPT-4.0 (both OpenAI, San Francisco, CA, USA) and Gemini and Gemini Advanced (both Google DeepMind, Mountain View, CA, USA) generated answers to 31 pre-selected questions on four different areas of workplace management in radiology: (1) patient management, (2) imaging and radiation management, (3) learning and personal development, and (4) administrative and department management. Two readers independently evaluated the answers provided by the LLM chatbots. Three 4-point scores were used to assess the quality of the responses: (1) overall quality score (OQS), (2) understandability score (US), and (3) implementability score (IS). The mean quality score (MQS) was calculated from these three scores. Results: The overall inter-rater reliability (IRR) was good for Gemini Advanced (IRR 79%), Gemini (IRR 78%), and ChatGPT-3.5 (IRR 65%), and moderate for ChatGPT-4.0 (IRR 54%). The overall MQS averaged 3.36 (SD: 0.64) for ChatGPT-3.5, 3.75 (SD: 0.43) for ChatGPT-4.0, 3.29 (SD: 0.64) for Gemini, and 3.51 (SD: 0.53) for Gemini Advanced. ChatGPT-4.0 achieved the highest OQS, US, IS, and MQS in all categories, followed by Gemini Advanced. ChatGPT-4.0 was the most consistently superior performer and outperformed all other chatbots (p < 0.001-0.002). Gemini Advanced performed significantly better than Gemini (p = 0.003) and showed a non-significant trend toward outperforming ChatGPT-3.5 (p = 0.056). ChatGPT-4.0 provided superior answers in most cases compared with the other LLM chatbots. None of the answers provided by the chatbots were rated "insufficient".
Conclusions: All four LLM chatbots performed well on workplace management questions in radiology. ChatGPT-4.0 outperformed ChatGPT-3.5, Gemini, and Gemini Advanced. Our study revealed that LLMs have the potential to improve workplace management in radiology by assisting with various tasks, making these processes more efficient without requiring specialized management skills.
Pages: 13
Related Papers
50 total
  • [31] Reply to 'Comment on: Benchmarking the performance of large language models in uveitis: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, Google Gemini, and Anthropic Claude3'
    Zhao, Fang-Fang
    He, Han-Jie
    Liang, Jia-Jian
    Cen, Ling-Ping
    EYE, 2025, : 1433 - 1433
  • [32] Large Language Models in Medical Education: Comparing ChatGPT- to Human-Generated Exam Questions
    Laupichler, Matthias Carl
    Rother, Johanna Flora
    Kadow, Ilona C. Grunwald
    Ahmadi, Seifollah
    Raupach, Tobias
    ACADEMIC MEDICINE, 2024, 99 (05) : 508 - 512
  • [33] Evidence-Based Potential of Generative Artificial Intelligence Large Language Models on Dental Avulsion: ChatGPT Versus Gemini
    Kaplan, Taibe Tokgoz
    Cankar, Muhammet
    DENTAL TRAUMATOLOGY, 2025, 41 (02) : 178 - 186
  • [34] Quo Vadis ChatGPT? From large language models to Large Knowledge Models
    Venkatasubramanian, Venkat
    Chakraborty, Arijit
    COMPUTERS & CHEMICAL ENGINEERING, 2025, 192
  • [36] Large language models (LLMs) in radiology exams for medical students: Performance and consequences
    Gotta, Jennifer
    Hong, Quang Anh Le
    Koch, Vitali
    Gruenewald, Leon D.
    Geyer, Tobias
    Martin, Simon S.
    Scholtz, Jan-Erik
    Booz, Christian
    Dos Santos, Daniel Pinto
    Mahmoudi, Scherwin
    Eichler, Katrin
    Gruber-Rouh, Tatjana
    Hammerstingl, Renate
    Biciusca, Teodora
    Juergens, Lisa Joy
    Hoehne, Elena
    Mader, Christoph
    Vogl, Thomas J.
    Reschke, Philipp
    ROFO-FORTSCHRITTE AUF DEM GEBIET DER RONTGENSTRAHLEN UND DER BILDGEBENDEN VERFAHREN, 2024,
  • [37] Large language models in radiology: Fluctuating performance and decreasing discordance over time
    Gupta, Mitul
    Virostko, John
    Kaufmann, Christopher
    EUROPEAN JOURNAL OF RADIOLOGY, 2025, 182
  • [38] Performance of ChatGPT on basic healthcare leadership and management questions
    Leutz-Schmidt, Patricia
    Groezinger, Martin
    Kauczor, Hans-Ulrich
    Jang, Hyungseok
    Sedaghat, Sam
    HEALTH AND TECHNOLOGY, 2024, 14 (06) : 1161 - 1166
  • [39] Revolutionizing Radiology: The Role of Large Language Models
    Alex, Ajay
    Kesavadas, C.
    INDIAN JOURNAL OF RADIOLOGY AND IMAGING, 2025, 35 (01): : 1 - 1
  • [40] ChatGPT and large language models in academia: opportunities and challenges
    Jesse G. Meyer
    Ryan J. Urbanowicz
    Patrick C. N. Martin
    Karen O’Connor
    Ruowang Li
    Pei-Chen Peng
    Tiffani J. Bright
    Nicholas Tatonetti
    Kyoung Jae Won
    Graciela Gonzalez-Hernandez
    Jason H. Moore
    BioData Mining, 16