THE ABILITY OF ARTIFICIAL INTELLIGENCE CHATBOTS ChatGPT AND GOOGLE BARD TO ACCURATELY CONVEY PREOPERATIVE INFORMATION FOR PATIENTS UNDERGOING OPHTHALMIC SURGERIES

Cited by: 2
Authors
Patil, Nikhil S. [1 ]
Huang, Ryan [2 ]
Mihalache, Andrew [2 ]
Kisilevsky, Eli [3 ,4 ]
Kwok, Jason [3 ]
Popovic, Marko M. [3 ]
Nassrallah, Georges [3 ,5 ]
Chan, Clara [3 ]
Mallipatna, Ashwin [3 ,5 ]
Kertes, Peter J. [3 ,6 ]
Muni, Rajeev H. [3 ,7 ]
Affiliations
[1] McMaster Univ, Michael G DeGroote Sch Med, Hamilton, ON, Canada
[2] Univ Toronto, Temerty Fac Med, Toronto, ON, Canada
[3] Univ Toronto, Dept Ophthalmol & Vis Sci, Toronto, ON, Canada
[4] Univ Toronto, St Josephs Hlth Ctr, Unity Hlth, Toronto, ON, Canada
[5] Univ Toronto, Hosp Sick Children, Dept Ophthalmol, Toronto, ON, Canada
[6] Sunnybrook Hlth Sci Ctr, John & Liz Tory Eye Ctr, Toronto, ON, Canada
[7] St Michaels Hosp, Dept Ophthalmol, Unity Hlth Toronto, 30 Bond St,Donnelly Wing,8th Floor, Toronto, ON M5B 1W8, Canada
Keywords
Google Bard; Bard; artificial intelligence; chatbot; ChatGPT; ophthalmology; informed consent
DOI
10.1097/IAE.0000000000004044
Chinese Library Classification: R77 [Ophthalmology]
Discipline Code: 100212
Abstract
Introduction: To determine whether two popular artificial intelligence chatbots, ChatGPT and Google Bard, can provide high-quality information concerning the procedure description, risks, benefits, and alternatives of various ophthalmic surgeries.

Methods: ChatGPT and Bard were prompted with questions pertaining to the description, potential risks, benefits, alternatives, and implications of not proceeding with various surgeries across different subspecialties of ophthalmology. Six common ophthalmic procedures were included in the authors' analysis. Two comprehensive ophthalmologists and one subspecialist independently graded each response using a 5-point Likert scale.

Results: Likert grading for accuracy was significantly higher for ChatGPT than for Bard (4.5 +/- 0.6 vs. 3.8 +/- 0.8, P < 0.0001). ChatGPT generally outperformed Bard even when questions were stratified by type of ophthalmic surgery. There was no significant difference between ChatGPT and Bard in response length (2,104.7 +/- 271.4 characters vs. 2,441.0 +/- 633.9 characters, P = 0.12). ChatGPT responded significantly more slowly than Bard (46.0 +/- 3.0 vs. 6.6 +/- 1.2 seconds, P < 0.0001).

Conclusion: Both ChatGPT and Bard may offer accessible and high-quality information relevant to the informed consent process for various ophthalmic procedures. Nonetheless, both artificial intelligence chatbots overlooked the probability of adverse events, limiting their potential and exposing patients to information that may be difficult to interpret.
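The Results paragraph compares mean Likert accuracy scores (4.5 +/- 0.6 for ChatGPT vs. 3.8 +/- 0.8 for Bard, P < 0.0001) without naming the statistical test. A minimal sketch of how such summary statistics feed a Welch two-sample t-test, assuming a hypothetical sample size of 18 graded responses per chatbot (the abstract does not report n, and the test choice is an assumption, not stated by the paper):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's two-sample t statistic and degrees of freedom,
    computed from summary statistics (mean, SD, n) of each group."""
    v1, v2 = s1**2 / n1, s2**2 / n2          # per-group variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)       # t statistic
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Reported Likert accuracy: ChatGPT 4.5 +/- 0.6, Bard 3.8 +/- 0.8.
# n = 18 per group is a hypothetical placeholder, not from the paper.
t, df = welch_t(4.5, 0.6, 18, 3.8, 0.8, 18)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting statistic would then be compared against the t distribution with `df` degrees of freedom to obtain a P value (e.g., via `scipy.stats.t.sf`); larger group sizes than the placeholder used here would be needed to reproduce the reported P < 0.0001.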
Pages: 950-953
Page count: 4
Related Papers
  • [1] Performance of artificial intelligence chatbots in sleep medicine certification board exams: ChatGPT versus Google Bard
    Cheong, Ryan Chin Taw
    Pang, Kenny Peter
    Unadkat, Samit
    Mcneillis, Venkata
    Williamson, Andrew
    Joseph, Jonathan
    Randhawa, Premjit
    Andrews, Peter
    Paleri, Vinidh
    EUROPEAN ARCHIVES OF OTO-RHINO-LARYNGOLOGY, 2024, 281 (04) : 2137 - 2143
  • [2] Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard
    Cheong, Ryan Chin Taw
    Unadkat, Samit
    Mcneillis, Venkata
    Williamson, Andrew
    Joseph, Jonathan
    Randhawa, Premjit
    Andrews, Peter
    Paleri, Vinidh
    EUROPEAN ARCHIVES OF OTO-RHINO-LARYNGOLOGY, 2024, 281 (02) : 985 - 993
  • [3] An evaluation of orthodontic information quality regarding artificial intelligence (AI) chatbot technologies: A comparison of ChatGPT and Google BARD
    Arslan, Can
    Kahya, Kaan
    Cesur, Emre
    Cakan, Derya Germec
    AUSTRALASIAN ORTHODONTIC JOURNAL, 2024, 40 (01) : 149 - 157
  • [4] Reliability and accuracy of artificial intelligence ChatGPT in providing information on ophthalmic diseases and management to patients
    Cappellani, Francesco
    Card, Kevin R.
    Shields, Carol L.
    Pulido, Jose S.
    Haller, Julia A.
    EYE, 2024, 38 (07) : 1368 - 1373
  • [5] Evaluation of the Current Status of Artificial Intelligence for Endourology Patient Education: A Blind Comparison of ChatGPT and Google Bard Against Traditional Information Resources
    Connors, Christopher
    Gupta, Kavita
    Khusid, Johnathan A.
    Khargi, Raymond
    Yaghoubian, Alan J.
    Levy, Micah
    Gallante, Blair
    Atallah, William
    Gupta, Mantu
    JOURNAL OF ENDOUROLOGY, 2024,
  • [6] Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift
    Hopkins, Ashley M.
    Logan, Jessica M.
    Kichenadasse, Ganessan
    Sorich, Michael J.
    JNCI CANCER SPECTRUM, 2023, 7 (02)
  • [7] Using Artificial Intelligence Chatbots as a Radiologic Decision-Making Tool for Liver Imaging: Do ChatGPT and Bard Communicate Information Consistent With the ACR Appropriateness Criteria?
    Patil, Nikhil S.
    Huang, Ryan S.
    van der Pol, Christian B.
    Larocque, Natasha
    JOURNAL OF THE AMERICAN COLLEGE OF RADIOLOGY, 2023, 20 (10) : 1010 - 1013