Evaluating the interactions of Medical Doctors with chatbots based on large language models: Insights from a nationwide study in the Greek healthcare sector using ChatGPT

Cited by: 2
Authors
Triantafyllopoulos, Loukas [1 ]
Feretzakis, Georgios [1 ]
Tzelves, Lazaros [2 ]
Sakagianni, Aikaterini [3 ]
Verykios, Vassilios S. [1 ]
Kalles, Dimitris [1 ]
Affiliations
[1] Hellenic Open Univ, Sch Sci & Technol, 8 Aristotelous St, Patras 26335, Greece
[2] Natl & Kapodistrian Univ Athens, Sismanogleio Gen Hosp, Dept Urol 2, Athens, Greece
[3] Sismanogleio Gen Hosp, Intens Care Unit, Maroussi, Greece
Keywords
Artificial intelligence; ChatGPT; Doctor-chatbot interaction; Satisfaction; Large language models; ARTIFICIAL-INTELLIGENCE; CRISIS; GPT-4;
DOI
10.1016/j.chb.2024.108404
Chinese Library Classification
B84 [Psychology]
Discipline Classification Code
04; 0402
Abstract
In this AI-focused era, researchers are exploring AI applications in healthcare, with ChatGPT a primary focus. This Greek study involved 182 doctors from various regions, who used a custom web application connected to ChatGPT 4.0. Doctors from diverse departments and experience levels engaged with ChatGPT, which provided tailored responses. Over one month, data were collected via a form with a 1-to-5 rating scale. The results showed varying satisfaction levels across four criteria: clarity, response time, accuracy, and overall satisfaction. ChatGPT's response speed received high ratings (3.85/5.0), whereas clarity of information was rated moderately (3.43/5.0). A notable observation was the correlation between a doctor's experience and their satisfaction with ChatGPT: more experienced doctors (over 21 years) reported lower satisfaction (2.80-3.74/5.0) than their less experienced counterparts (3.43-4.20/5.0). At the level of medical fields, Internal Medicine showed higher satisfaction across the evaluation criteria (3.56-3.88) than other fields, while Psychiatry scored highest overall, with ratings from 3.63 to 5.00. The study also compared two departments, Urology and Internal Medicine, with the latter reporting higher satisfaction with accuracy, clarity of the provided information, response time, and overall satisfaction. These findings illuminate the specific needs of the health sector and highlight both the potential and the areas for improvement in ChatGPT's provision of specialized medical information. Despite current limitations, ChatGPT in its present version offers a valuable resource to the medical community, signaling further advancements and potential integration into healthcare practice.
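The experience-satisfaction comparison reported in the abstract amounts to grouping the 1-to-5 ratings by experience bracket and averaging per criterion. A minimal sketch of that aggregation, using made-up ratings (the study's raw survey data are not reproduced here, and the bracket helper is a hypothetical name):

```python
from statistics import mean
from collections import defaultdict

# Hypothetical survey rows: (years of experience, criterion, rating on 1-5 scale).
# These values are illustrative only, not the study's data.
responses = [
    (25, "accuracy", 3), (25, "clarity", 3), (25, "overall", 3),
    (8,  "accuracy", 4), (8,  "clarity", 4), (8,  "overall", 4),
    (30, "response_time", 3), (5, "response_time", 5),
]

def bracket(years):
    """Map a doctor's years of experience to the brackets the study compares."""
    return "over 21 years" if years > 21 else "21 years or less"

# Collect ratings per (experience bracket, criterion), then average.
grouped = defaultdict(list)
for years, criterion, rating in responses:
    grouped[(bracket(years), criterion)].append(rating)

means = {key: round(mean(vals), 2) for key, vals in grouped.items()}
```

With real data, `means` would reproduce per-bracket figures of the kind the abstract reports (e.g. lower averages for the over-21-years group).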
Pages: 11