Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care

Cited by: 4

Authors
Hanci, Volkan [1]
Ergun, Bisar [2]
Gul, Sanser [3]
Uzun, Ozcan [4]
Erdemir, Ismail [5]
Hanci, Ferid Baran [6]
Affiliations
[1] Sincan Educ & Res Hosp, Clin Anesthesiol & Crit Care, TR-06930 Ankara, Turkiye
[2] Dr Ismail Fehmi Cumalioglu City Hosp, Clin Internal Med & Crit Care, Tekirdag, Turkiye
[3] Ankara Ataturk Sanatory Educ & Res Hosp, Clin Neurosurg, Ankara, Turkiye
[4] Yalova City Hosp, Clin Internal Med & Nephrol, Yalova, Turkiye
[5] Dokuz Eylul Univ, Fac Med, Dept Anesthesiol & Crit Care, Izmir, Turkiye
[6] Ostim Tech Univ, Fac Engn, Artificial Intelligence Engn Dept, Ankara, Turkiye
Keywords
artificial intelligence; Bard®; ChatGPT®; Copilot®; Gemini®; online medical information; palliative care; Perplexity®; readability; HEALTH LITERACY; EDUCATION; INFORMATION
DOI
10.1097/MD.0000000000039305
Chinese Library Classification
R5 [Internal Medicine]
Discipline Classification Codes
1002; 100201
Abstract
No previous study has comprehensively evaluated the readability and quality of "palliative care" information provided by the artificial intelligence (AI) chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity®. Our study is an observational, cross-sectional original research study. Each of the 5 AI chatbots (ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity®) was asked to answer the 100 questions most frequently asked by patients about palliative care, and the responses of each chatbot were analyzed separately. This study did not involve any human participants. The results revealed significant differences among the readability assessments of the responses of all 5 AI chatbots (P < .05). When the different readability indexes were evaluated holistically, the readability of the chatbot responses, from easiest to most difficult, was Bard®, Copilot®, Perplexity®, ChatGPT®, Gemini® (P < .05). The median readability indexes of the responses of each of the 5 AI chatbots were compared with the "recommended" 6th-grade reading level; statistically significant differences were observed for all formulas (P < .001), and the responses of all 5 chatbots were found to be at an educational level well above the 6th grade. The modified DISCERN and Journal of the American Medical Association (JAMA) scores were highest for Perplexity® (P < .001), whereas Gemini® responses had the highest Global Quality Scale score (P < .001). It is emphasized that patient education materials should be written at a 6th-grade reading level. The current answers of the 5 evaluated AI chatbots (Bard®, Copilot®, Perplexity®, ChatGPT®, Gemini®) were well above the recommended readability levels, and their text content quality assessment scores were also low. Both the quality and the readability of these texts should be brought within the recommended limits.
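The abstract does not name the specific readability formulas the authors applied. As a minimal illustrative sketch, assuming the widely used Flesch-Kincaid Grade Level (FKGL) was among the indexes, the following Python snippet shows how a single chatbot response could be scored against the recommended 6th-grade threshold; the syllable counter is a rough heuristic, not the validated tooling a published study would use.

```python
# Illustrative sketch only: the paper does not publish its formulas or code.
# FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
import re

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic; real studies use validated tools."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:  # discount a silent final 'e'
        n -= 1
    return max(n, 1)

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level of a passage of English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Hypothetical chatbot response used only to exercise the function.
response = ("Palliative care is specialized medical care for people living "
            "with a serious illness. It focuses on relief from symptoms and "
            "stress to improve quality of life for patients and families.")
grade = fkgl(response)
print(f"FKGL: {grade:.1f} (recommended for patient materials: <= 6.0)")
```

In the study itself, the median index value across each chatbot's 100 responses was then compared with the 6th-grade level, and all 5 chatbots exceeded it.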
Pages: 9