Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy

Cited by: 2
Authors
Bharatha, Ambadasu [1 ,4 ]
Ojeh, Nkemcho [1 ]
Rabbi, Ahbab Mohammad Fazle [2 ]
Campbell, Michael H. [1 ]
Krishnamurthy, Kandamaran [1 ]
Layne-Yarde, Rhaheem N. A. [1 ]
Kumar, Alok [1 ]
Springer, Dale C. R. [1 ]
Connell, Kenneth L. [1 ]
Majumder, Md Anwarul Azim [1 ,3 ]
Affiliations
[1] Univ West Indies, Fac Med Sci, Bridgetown, Barbados
[2] Univ Dhaka, Dept Populat Sci, Dhaka, Bangladesh
[3] Univ West Indies, Fac Med Sci, Med Educ, Cave Hill Campus, Bridgetown, Barbados
[4] Univ West Indies, Fac Med Sci, Pharmacol, Cave Hill Campus, Bridgetown, Barbados
Keywords
artificial intelligence; ChatGPT-4; medical students; knowledge; interpretation abilities; multiple choice questions; education;
DOI
10.2147/AMEP.S457408
Chinese Library Classification
G40 [Education];
Discipline codes
040101; 120403;
Abstract
Introduction: This research investigated the capabilities of ChatGPT-4 compared to medical students in answering MCQs, using the revised Bloom's Taxonomy as a benchmark. Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing. Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) compared to students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy levels did not. A detailed check of the association between program levels and Bloom's taxonomy levels for ChatGPT-4's correct answers showed a highly significant association (p<0.001), reflecting a concentration of "remember-level" questions in preclinical courses and "evaluate-level" questions in clinical courses. Discussion: The study highlights ChatGPT-4's proficiency in standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies with course content. Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address its limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
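The association check reported in the Results (program level vs. Bloom's taxonomy level for ChatGPT-4's correct answers) is a standard chi-square test of independence on a contingency table. A minimal sketch follows; the counts are hypothetical placeholders chosen only to mirror the reported pattern (remember-level questions concentrated in preclinical courses, evaluate-level questions in clinical courses), not the study's data:

```python
# Chi-square test of independence, computed from scratch with the standard
# formula: chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * col_total / grand_total.

# Rows: program level (preclinical, clinical).
# Columns: Bloom's level of correctly answered questions
# (remember, understand/apply, evaluate). HYPOTHETICAL counts.
observed = [
    [60, 30, 5],   # preclinical: mostly "remember"-level questions
    [10, 35, 45],  # clinical: mostly "evaluate"-level questions
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
dof = (len(observed) - 1) * (len(observed[0]) - 1)

# For dof = 2, the critical value at alpha = 0.001 is 13.816; a chi2
# statistic far above it corresponds to p < 0.001, as the paper reports.
print(f"chi2 = {chi2:.1f}, dof = {dof}")
```

With these placeholder counts the statistic greatly exceeds the 0.001 critical value, illustrating how a strong concentration of question types by program level produces the highly significant association described in the abstract.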
Pages: 393-400 (8 pages)
Related articles
38 in total
  • [21] Comparing ChatGPT-3.5 and ChatGPT-4's alignments with the German evidence-based S3 guideline for adult soft tissue sarcoma
    Li, Cheng-Peng
    Jakob, Jens
    Menge, Franka
    Reissfelder, Christoph
    Hohenberger, Peter
    Yang, Cui
    ISCIENCE, 2024, 27 (12)
  • [22] Evaluating ChatGPT-4's Performance in Identifying Radiological Anatomy in FRCR Part 1 Examination Questions
    Sarangi, Pradosh Kumar
    Datta, Suvrankar
    Panda, Braja Behari
    Panda, Swaha
    Mondal, Himel
    INDIAN JOURNAL OF RADIOLOGY AND IMAGING, 2024,
  • [23] The Performance of ChatGPT-4 and Gemini Ultra 1.0 for Quality Assurance Review in Emergency Medical Services Chest Pain Calls
    Brant-Zawadzki, Graham
    Klapthor, Brent
    Ryba, Chris
    Youngquist, Drew C.
    Burton, Brooke
    Palatinus, Helen
    Youngquist, Scott T.
    PREHOSPITAL EMERGENCY CARE, 2024,
  • [24] Optimizing ChatGPT-4's radiology performance with scale-invariant feature transform and advanced prompt engineering
    Alam, Sultan
    Rahman, Abdul
    Sohail, Shahab Saquib
    CLINICAL IMAGING, 2025, 118
  • [25] Encouragement vs. liability: How prompt engineering influences ChatGPT-4's radiology exam performance
    Nguyen, Daniel
    MacKenzie, Allison
    Kim, Young H.
    CLINICAL IMAGING, 2024, 115
  • [26] ChatGPT-4 Performance on German Continuing Medical Education-Friend or Foe (Trick or Treat)? Protocol for a Randomized Controlled Trial
    Burisch, Christian
    Bellary, Abhav
    Breuckmann, Frank
    Ehlers, Jan
    Thal, Serge C.
    Sellmann, Timur
    Godde, Daniel
    JMIR RESEARCH PROTOCOLS, 2025, 14
  • [27] Evaluation of ChatGPT's performance in Medical Education: A Comparative Analysis with Students in a Pulmonology Examination
    Cherif, Hela
    Moussa, Chirine
    Ben Rjab, Sarra
    Mokaddem, Salma
    Dhahri, Besma
    EUROPEAN RESPIRATORY JOURNAL, 2024, 64
  • [28] ChatGPT-4 Performance on USMLE Step 1 Style Questions and Its Implications for Medical Education: A Comparative Study Across Systems and Disciplines
    Garabet, Razmig
    Mackey, Brendan P.
    Cross, James
    Weingarten, Michael
    MEDICAL SCIENCE EDUCATOR, 2024, 34 (01) : 145 - 152
  • [30] Extensive Reading and Its Effects on Reading Comprehension Performance of Iranian EFL Students According to Bloom's Taxonomy
    Saeedi, Ghafour
    MODERN JOURNAL OF LANGUAGE TEACHING METHODS, 2015, 5 (04): 638 - 643