A Pilot Study Assessing the Accuracy of AI ChatGPT Responses for AL Amyloidosis

Cited by: 0
Authors
Saba, Ludovic [1]
Comenzo, Raymond [2]
Khouri, Jack [3]
Anwer, Faiz [3]
Landau, Heather [4]
Chaulagain, Chakra [1]
Affiliations
[1] Cleveland Clin Florida, Dept Hematol & Med Oncol, Weston, FL 33331 USA
[2] Tufts Med Ctr, John C Davis Myeloma & Amyloid Program, Div Hematol Oncol, Boston, MA USA
[3] Cleveland Clin, Dept Hematol & Med Oncol, Main Campus, Cleveland, OH USA
[4] Mem Sloan Kettering Canc Ctr, Dept Med, Adult Bone Marrow Transplant Serv, New York, NY USA
Keywords
AL amyloidosis; artificial intelligence; ChatGPT; hematology; oncology;
DOI
10.1111/ejh.14347
Chinese Library Classification (CLC)
R5 [Internal Medicine];
Subject Classification Code
1002; 100201;
Abstract
AL amyloidosis is a rare, complex, and often challenging disease for both patients and healthcare providers. The availability of accurate medical information is crucial for effective diagnosis and management. In recent years, artificial intelligence (AI) has emerged as a potential tool for providing medical information. This study aims to assess the accuracy of AI ChatGPT responses to common AL amyloidosis-related questions and to compare them with expert opinion. A scoring system was developed for five participating expert physicians to evaluate the responses. AI ChatGPT demonstrated an overall accuracy rate of 82% in answering AL amyloidosis-related questions. Responses on prognosis and patient support received the highest scores (100%), while questions related to treatment options showed lower accuracy (30%-60%). The results indicate that although AI ChatGPT demonstrates good overall accuracy, there are areas for improvement and potential discrepancies compared with expert opinion. These findings highlight the importance of ongoing refinement and validation of AI-powered medical tools, which cannot yet replace the advice of experts in the disease.
Pages: 495-499
Number of pages: 5