Performance of Large Language Models ChatGPT and Gemini on Workplace Management Questions in Radiology

Cited by: 0
Authors
Leutz-Schmidt, Patricia [1 ]
Palm, Viktoria [1 ]
Mathy, Rene Michael [1 ]
Groezinger, Martin [2 ]
Kauczor, Hans-Ulrich [1 ]
Jang, Hyungseok [3 ]
Sedaghat, Sam [1 ]
Affiliations
[1] Univ Hosp Heidelberg, Dept Diagnost & Intervent Radiol, D-69120 Heidelberg, Germany
[2] German Canc Res Ctr, D-69120 Heidelberg, Germany
[3] Univ Calif Davis, Dept Radiol, Davis, CA 95616 USA
Keywords
large language models; chatbot; ChatGPT; Gemini; radiology; management; leadership
DOI
10.3390/diagnostics15040497
Chinese Library Classification: R5 [Internal Medicine]
Discipline Classification Codes: 1002; 100201
Abstract
Background/Objectives: Despite the growing popularity of large language models (LLMs), there remains a notable lack of research examining their role in workplace management. This study aimed to address this gap by evaluating the performance of four widely used LLMs, ChatGPT-3.5, ChatGPT-4.0, Gemini, and Gemini Advanced, in responding to workplace management questions specific to radiology. Methods: ChatGPT-3.5 and ChatGPT-4.0 (both OpenAI, San Francisco, CA, USA) and Gemini and Gemini Advanced (both Google DeepMind, Mountain View, CA, USA) generated answers to 31 pre-selected questions covering four areas of workplace management in radiology: (1) patient management, (2) imaging and radiation management, (3) learning and personal development, and (4) administrative and department management. Two readers independently evaluated the answers provided by the LLM chatbots. Three 4-point scores were used to assess the quality of the responses: (1) overall quality score (OQS), (2) understandability score (US), and (3) implementability score (IS). The mean quality score (MQS) was calculated from these three scores. Results: The overall inter-rater reliability (IRR) was good for Gemini Advanced (IRR 79%), Gemini (IRR 78%), and ChatGPT-3.5 (IRR 65%), and moderate for ChatGPT-4.0 (IRR 54%). The overall MQS averaged 3.36 (SD: 0.64) for ChatGPT-3.5, 3.75 (SD: 0.43) for ChatGPT-4.0, 3.29 (SD: 0.64) for Gemini, and 3.51 (SD: 0.53) for Gemini Advanced. ChatGPT-4.0 achieved the highest OQS, US, IS, and MQS in all categories, followed by Gemini Advanced. ChatGPT-4.0 was the most consistently superior performer and outperformed all other chatbots (p < 0.001 to p = 0.002). Gemini Advanced performed significantly better than Gemini (p = 0.003) and showed a non-significant trend toward outperforming ChatGPT-3.5 (p = 0.056). In most cases, ChatGPT-4.0 provided superior answers compared with the other LLM chatbots. None of the answers provided by the chatbots were rated "insufficient".
Conclusions: All four LLM chatbots performed well on workplace management questions in radiology. ChatGPT-4.0 outperformed ChatGPT-3.5, Gemini, and Gemini Advanced. Our study revealed that LLMs have the potential to improve workplace management in radiology by assisting with various tasks, making these processes more efficient without requiring specialized management skills.
Pages: 13