Assessing the applicability and appropriateness of ChatGPT in answering clinical pharmacy questions

Cited by: 10
Authors
Fournier, A. [1 ]
Fallet, C. [1 ]
Sadeghipour, F. [1 ,2 ,3 ,4 ]
Perrottet, N. [1 ,2 ]
Affiliations
[1] Ctr Hosp Univ Vaudois CHUV, Serv Pharm, Lausanne, Switzerland
[2] Univ Geneva, Univ Lausanne, Sch Pharmaceut Sci, Geneva, Switzerland
[3] Lausanne Univ Hosp, Ctr Res & Innovat Clin Pharmaceut Sci, Lausanne, Switzerland
[4] Univ Lausanne, Lausanne, Switzerland
Source
ANNALES PHARMACEUTIQUES FRANCAISES | 2024, Vol. 82, Issue 3
Keywords
Artificial intelligence; Large language models; ChatGPT; Clinical pharmacy; Healthcare professionals' issues; Risks
DOI
10.1016/j.pharma.2023.11.001
Chinese Library Classification: R9 [Pharmacy]
Discipline code: 1007
Abstract
Objectives. - Clinical pharmacists rely on different scientific references to ensure appropriate, safe, and cost-effective drug use. Tools based on artificial intelligence (AI) such as ChatGPT (Generative Pre-trained Transformer) could offer valuable support. The objective of this study was to assess ChatGPT's capacity to respond correctly to clinical pharmacy questions asked by healthcare professionals in our university hospital.
Material and methods. - ChatGPT's capacity to respond correctly to the last 100 consecutive questions recorded in our clinical pharmacy database was assessed. Questions were copied from our FileMaker Pro database and pasted into the ChatGPT March 14 version online platform. The generated answers were then copied verbatim into an Excel file. Two blinded clinical pharmacists reviewed all the questions and the answers given by the software; in case of disagreement, a third blinded pharmacist decided.
Results. - Documentation-related issues (n = 36) and questions on drug administration mode (n = 30) predominated. Among the 69 applicable questions, the rate of correct answers varied from 30% to 57.1% depending on question type, with a global rate of 44.9%. Of the 38 inappropriate answers, 20 were incorrect, 18 gave no answer, and 8 were incomplete, with 8 answers belonging to 2 different categories. In no case did ChatGPT answer better than the pharmacists.
Conclusions. - ChatGPT showed mixed performance in answering clinical pharmacy questions. Given the high rate of inappropriate answers, it should not replace human expertise. Future studies should focus on optimizing ChatGPT for specific clinical pharmacy questions and explore the potential benefits and limitations of integrating this technology into clinical practice.
(c) 2023 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
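The abstract's counts are internally consistent, which a quick arithmetic sketch makes explicit (all values are taken from the abstract itself; the rounding step is an assumption about how the 44.9% rate relates to the raw count):

```python
# Sanity check of the counts reported in the abstract.
applicable = 69
correct_rate = 0.449  # global rate of correct answers (44.9%)

# 44.9% of 69 rounds to 31 correct answers, leaving 38 inappropriate ones.
correct = round(applicable * correct_rate)
inappropriate = applicable - correct

# Breakdown of inappropriate answers: 20 incorrect, 18 with no answer,
# 8 incomplete; 8 answers were counted in two categories, so the raw
# category total exceeds the number of answers by exactly 8.
incorrect, no_answer, incomplete, double_counted = 20, 18, 8, 8
assert incorrect + no_answer + incomplete - double_counted == inappropriate

print(correct, inappropriate)  # 31 38
```

This also explains why the three category counts (20 + 18 + 8 = 46) exceed the stated 38 inappropriate answers: 8 answers fall into two categories at once.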
Pages: 507-513
Number of pages: 7