Assessing the applicability and appropriateness of ChatGPT in answering clinical pharmacy questions

Cited by: 10
Authors
Fournier, A. [1 ]
Fallet, C. [1 ]
Sadeghipour, F. [1 ,2 ,3 ,4 ]
Perrottet, N. [1 ,2 ]
Affiliations
[1] Ctr Hosp Univ Vaudois CHUV, Serv Pharm, Lausanne, Switzerland
[2] Univ Geneva, Univ Lausanne, Sch Pharmaceut Sci, Geneva, Switzerland
[3] Lausanne Univ Hosp, Ctr Res & Innovat Clin Pharmaceut Sci, Lausanne, Switzerland
[4] Univ Lausanne, Lausanne, Switzerland
Source
ANNALES PHARMACEUTIQUES FRANCAISES | 2024, Vol. 82, No. 3
Keywords
Artificial intelligence; Large language models; ChatGPT; Clinical pharmacy; Healthcare professionals' issues; Risks
DOI
10.1016/j.pharma.2023.11.001
CLC classification
R9 [Pharmacy];
Subject classification code
1007;
Abstract
Objectives. - Clinical pharmacists rely on different scientific references to ensure appropriate, safe, and cost-effective drug use. Tools based on artificial intelligence (AI) such as ChatGPT (Generative Pre-trained Transformer) could offer valuable support. The objective of this study was to assess ChatGPT's capacity to respond correctly to clinical pharmacy questions asked by healthcare professionals in our university hospital.
Material and methods. - ChatGPT's capacity to respond correctly to the last 100 consecutive questions recorded in our clinical pharmacy database was assessed. Questions were copied from our FileMaker Pro database and pasted into the ChatGPT online platform (March 14 version). The generated answers were then copied verbatim into an Excel file. Two blinded clinical pharmacists reviewed all the questions and the answers given by the software. In case of disagreement, a third blinded pharmacist decided.
Results. - Documentation-related issues (n = 36) and questions on drug administration mode (n = 30) predominated. Among the 69 applicable questions, the rate of correct answers varied from 30% to 57.1% depending on question type, with an overall rate of 44.9%. Of the inappropriate answers (n = 38), 20 were incorrect, 18 gave no answer, and 8 were incomplete, with 8 answers belonging to 2 different categories. In no case did ChatGPT give a better answer than the pharmacists.
Conclusions. - ChatGPT showed mixed performance in answering clinical pharmacy questions. Given the high rate of inappropriate answers, it should not replace human expertise. Future studies should focus on optimizing ChatGPT for specific clinical pharmacy questions and explore the potential benefits and limitations of integrating this technology into clinical practice. (c) 2023 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
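The figures in the Results paragraph reconcile once the double-categorized answers are accounted for. Below is a minimal sanity-check sketch in Python, assuming the 8 answers counted in two categories are the only overlap between the incorrect, no-answer, and incomplete groups (variable names are illustrative, not from the paper):

    # Reconcile the abstract's counts: 20 + 18 + 8 category labels
    # describe only 38 answers, because 8 answers carry 2 labels each.
    applicable = 69
    incorrect, no_answer, incomplete = 20, 18, 8
    double_labeled = 8  # answers belonging to 2 different categories
    inappropriate = incorrect + no_answer + incomplete - double_labeled
    assert inappropriate == 38
    correct = applicable - inappropriate            # 31 correct answers
    print(round(100 * correct / applicable, 1))     # 44.9 (% correct overall)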
Pages: 507-513
Number of pages: 7
Related Articles
50 records in total
  • [41] Assessing the Accuracy of ChatGPT on Core Questions in Glomerular Disease
    Miao, Jing
    Thongprayoon, Charat
    Cheungpasitporn, Wisit
    KIDNEY INTERNATIONAL REPORTS, 2023, 8(8): 1657-1659
  • [42] Assessing ChatGPT's Responses to Otolaryngology Patient Questions
    Carnino, Jonathan M.
    Pellegrini, William R.
    Willis, Megan
    Cohen, Michael B.
    Paz-Lansberg, Marianella
    Davis, Elizabeth M.
    Grillone, Gregory A.
    Levi, Jessica R.
    ANNALS OF OTOLOGY RHINOLOGY AND LARYNGOLOGY, 2024, 133(7): 658-664
  • [43] Evaluating ChatGPT's performance in answering common patient questions on cervical cancer
    Do, Anthony
    Li, Andrew
    Smith, Haller
    Chambers, Laura
    Esselen, Kate
    Liang, Margaret
    GYNECOLOGIC ONCOLOGY, 2024, 190: S376
  • [44] Comparative performance analysis of ChatGPT 3.5, ChatGPT 4.0 and Bard in answering common patient questions on melanoma
    Deliyannis, Eduardo Panaiotis
    Paul, Navreet
    Patel, Priya U.
    Papanikolaou, Marieta
    CLINICAL AND EXPERIMENTAL DERMATOLOGY, 2024, 49(7): 743-746
  • [45] Assessing GPT-4's accuracy in answering clinical pharmacological questions on pain therapy
    Stroop, Anna
    Stroop, Tabea
    Alsofy, Samer Zawy
    Wegner, Moritz
    Nakamura, Makoto
    Stroop, Ralf
    BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, 2025.
  • [46] A Comparative Analysis of ChatGPT-4, Microsoft's Bing and Google's Bard at Answering Rheumatology Clinical Questions
    Yingchoncharoen, Pitchaporn
    Chaisrimaneepan, Nattanicha
    Pangkanon, Watsachon
    Thongpiya, Jerapas
    ARTHRITIS & RHEUMATOLOGY, 2024, 76: 2654-2655
  • [47] Evaluating ChatGPT's Accuracy in Answering the American Academy of Dermatology's Clinical Guideline Questions for Cutaneous Melanoma (2019)
    Alani, O.
    Fayed, A.
    Patel, D.
    Wahood, S.
    Alasadi, H.
    Chan, W.
    JOURNAL OF INVESTIGATIVE DERMATOLOGY, 2024, 144(8): S180
  • [48] Evaluating the Performance of ChatGPT in answering questions related to benign prostate hyperplasia and prostate cancer
    Caglar, Ufuk
    Yildiz, Oguzhan
    Meric, Arda
    Ayranci, Ali
    Yusuf, Resit
    Sarilar, Omer
    Ozgor, Faruk
    MINERVA UROLOGY AND NEPHROLOGY, 2023, 75(6): 729-733
  • [49] Evaluating ChatGPT's Performance in Answering Questions About Allergic Rhinitis and Chronic Rhinosinusitis
    Ye, Fan
    Zhang, He
    Luo, Xin
    Wu, Tong
    Yang, Qintai
    Shi, Zhaohui
    OTOLARYNGOLOGY-HEAD AND NECK SURGERY, 2024, 171(2): 571-577
  • [50] ChatGPT Versus Consultants: Blinded Evaluation on Answering Otorhinolaryngology Case-Based Questions
    Buhr, Christoph Raphael
    Smith, Harry
    Huppertz, Tilman
    Bahr-Hamm, Katharina
    Matthias, Christoph
    Blaikie, Andrew
    Kelsey, Tom
    Kuhn, Sebastian
    Eckrich, Jonas
    JMIR MEDICAL EDUCATION, 2023, 9