Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy

Cited: 2
Authors
Bharatha, Ambadasu [1 ,4 ]
Ojeh, Nkemcho [1 ]
Rabbi, Ahbab Mohammad Fazle [2 ]
Campbell, Michael H. [1 ]
Krishnamurthy, Kandamaran [1 ]
Layne-Yarde, Rhaheem N. A. [1 ]
Kumar, Alok [1 ]
Springer, Dale C. R. [1 ]
Connell, Kenneth L. [1 ]
Majumder, Md Anwarul Azim [1 ,3 ]
Affiliations
[1] Univ West Indies, Fac Med Sci, Bridgetown, Barbados
[2] Univ Dhaka, Dept Populat Sci, Dhaka, Bangladesh
[3] Univ West Indies, Fac Med Sci, Med Educ, Cave Hill Campus, Bridgetown, Barbados
[4] Univ West Indies, Fac Med Sci, Pharmacol, Cave Hill Campus, Bridgetown, Barbados
Keywords
artificial intelligence; ChatGPT-4; medical students; knowledge; interpretation abilities; multiple choice questions; education
DOI: 10.2147/AMEP.S457408
Chinese Library Classification
G40 [Education]
Discipline Codes
040101; 120403
Abstract
Introduction: This research investigated the capabilities of ChatGPT-4 compared to medical students in answering MCQs using the revised Bloom's Taxonomy as a benchmark. Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing. Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) compared to students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy levels did not. A detailed association check between program levels and Bloom's taxonomy levels for correct answers by ChatGPT-4 showed a highly significant correlation (p<0.001), reflecting a concentration of "remember-level" questions in preclinical and "evaluate-level" questions in clinical courses. Discussion: The study highlights ChatGPT-4's proficiency in standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies based on course content. Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
Pages: 393-400 (8 pages)