Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care

Cited by: 5
Authors
Hanci, Volkan [1 ]
Ergun, Bisar [2 ]
Gul, Sanser [3 ]
Uzun, Ozcan [4 ]
Erdemir, Ismail [5 ]
Hanci, Ferid Baran [6 ]
Affiliations
[1] Sincan Educ & Res Hosp, Clin Anesthesiol & Crit Care, TR-06930 Ankara, Turkiye
[2] Dr Ismail Fehmi Cumalioglu City Hosp, Clin Internal Med & Crit Care, Tekirdag, Turkiye
[3] Ankara Ataturk Sanatory Educ & Res Hosp, Clin Neurosurg, Ankara, Turkiye
[4] Yalova City Hosp, Clin Internal Med & Nephrol, Yalova, Turkiye
[5] Dokuz Eylul Univ, Fac Med, Dept Anesthesiol & Crit Care, Izmir, Turkiye
[6] Ostim Tech Univ, Fac Engn, Artificial Intelligence Engn Dept, Ankara, Turkiye
Keywords
artificial intelligence; Bard®; ChatGPT®; Copilot®; Gemini®; online medical information; palliative care; Perplexity®; readability; HEALTH LITERACY; EDUCATION; INFORMATION
DOI
10.1097/MD.0000000000039305
Chinese Library Classification
R5 [Internal Medicine]
Subject Classification Codes
1002; 100201
Abstract
No study has comprehensively evaluated the readability and quality of "palliative care" information provided by the artificial intelligence (AI) chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity®. Our study is an observational, cross-sectional original research study. Each of the 5 AI chatbots was asked to answer the 100 questions most frequently asked by patients about palliative care, and the responses of each chatbot were analyzed separately. This study did not involve any human participants. The results revealed significant differences among the readability assessments of the responses of all 5 AI chatbots (P < .05). When the different readability indexes were evaluated holistically, the responses ranked from easiest to most difficult to read as Bard®, Copilot®, Perplexity®, ChatGPT®, Gemini® (P < .05). The median readability indexes of the responses of each of the 5 AI chatbots were compared with the "recommended" 6th-grade reading level; for every formula, the responses of all 5 chatbots differed significantly from the 6th-grade level (P < .001), and the answers of all 5 chatbots were written at an educational level well above 6th grade. The modified DISCERN and Journal of the American Medical Association scores were highest for Perplexity® (P < .001), whereas Gemini® responses had the highest Global Quality Scale score (P < .001). It is emphasized that patient education materials should be written at a 6th-grade reading level.
The current answers of the 5 AI chatbots evaluated, Bard®, Copilot®, Perplexity®, ChatGPT®, and Gemini®, were well above the recommended levels in terms of the readability of their text content, and their text-content quality assessment scores were also low. Both the quality and the readability of these texts should be brought within the recommended limits.
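The abstract compares chatbot responses against the recommended 6th-grade reading level using several readability formulas. As an illustration only (the study does not publish its code, and its exact formulas and tooling are not stated here), a minimal Python sketch of one widely used formula, the Flesch-Kincaid grade level, with a deliberately naive sentence splitter and syllable counter, could look like:

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count runs of vowels, subtract a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    """Estimate the Flesch-Kincaid grade level of an English text.

    FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    A result above 6 indicates text harder than the recommended 6th-grade level.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```

Production readability tools use more careful tokenization and dictionary-based syllabification, so this sketch will diverge from published index values; it only shows the shape of the comparison against a grade-level threshold.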
Pages: 9
Related Papers
50 records
  • [31] Evaluation of the quality and readability of ChatGPT responses to frequently asked questions about myopia in traditional Chinese language
    Chang, Li-Chun
    Sun, Chi-Chin
    Chen, Ting-Han
    Tsai, Der-Chong
    Lin, Hui-Ling
    Liao, Li-Ling
    DIGITAL HEALTH, 2024, 10
  • [32] Examination of the reliability and readability of Chatbot Generative Pretrained Transformer's (ChatGPT) responses to questions about orthodontics and the evolution of these responses in an updated version
    Kilinc, Delal Dara
    Mansiz, Duygu
    AMERICAN JOURNAL OF ORTHODONTICS AND DENTOFACIAL ORTHOPEDICS, 2024, 165 (05) : 546 - 555
  • [33] ChatGPT as a patient education tool in colorectal cancer-An in-depth assessment of efficacy, quality and readability
    Siu, Adrian H. Y.
    Gibson, Damien P.
    Chiu, Chris
    Kwok, Allan
    Irwin, Matt
    Christie, Adam
    Koh, Cherry E.
    Keshava, Anil
    Reece, Mifanwy
    Suen, Michael
    Rickard, Matthew J. F. X.
    COLORECTAL DISEASE, 2025, 27 (01)
  • [34] Longitudinal application of a standardized palliative care assessment (PCA) for quality assurance on a palliative care ward
    Storek, B.
    Heimer, J.
    Markwordt, J.
    Nickel, S.
    Thuss-Patience, P.
    Sturm, I
    ONKOLOGIE, 2012, 35 : 198 - 198
  • [35] ASSESSMENT OF THE READABILITY, RELIABILITY AND QUALITY OF ONLINE JUVENILE IDIOPATHIC ARTHRITIS PATIENT EDUCATION MATERIALS
    Spiking, J.
    Ignotus, V.
    Moran, S. P.
    RHEUMATOLOGY, 2017, 56 : 8 - 9
  • [36] Quality of life in palliative care: An analysis of quality-of-life assessment
    Locker, Lena Stephanie
    Luebbe, Andreas Stephan
    PROGRESS IN PALLIATIVE CARE, 2015, 23 (04) : 208 - 219
  • [37] Development and validation of the quality care questionnaire -palliative care (QCQ-PC): patient-reported assessment of quality of palliative care
    Yun, Young Ho
    Kang, Eun Kyo
    Lee, Jihye
    Choo, Jiyeon
    Ryu, Hyewon
    Yun, Hye-min
    Kang, Jung Hun
    Kim, Tae You
    Sim, Jin-Ah
    Kim, Yaeji
    BMC PALLIATIVE CARE, 2018, 17
  • [39] Validity and reliability of the palliative nursing care quality scale in Türkiye
    Kilic, Sevcan Toptas
    Oz, Fatma
    JOURNAL OF PSYCHIATRIC NURSING, 2024, 15 (02): : 149 - 156
  • [40] Assessment of the Quality and Readability of Information Provided by ChatGPT in Relation to the Use of Platelet-Rich Plasma Therapy for Osteoarthritis
    Fahy, Stephen
    Niemann, Marcel
    Boehm, Peter
    Winkler, Tobias
    Oehme, Stephan
    JOURNAL OF PERSONALIZED MEDICINE, 2024, 14 (05):