Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy

Cited by: 2
Authors
Bharatha, Ambadasu [1 ,4 ]
Ojeh, Nkemcho [1 ]
Rabbi, Ahbab Mohammad Fazle [2 ]
Campbell, Michael H. [1 ]
Krishnamurthy, Kandamaran [1 ]
Layne-Yarde, Rhaheem N. A. [1 ]
Kumar, Alok [1 ]
Springer, Dale C. R. [1 ]
Connell, Kenneth L. [1 ]
Majumder, Md Anwarul Azim [1 ,3 ]
Affiliations
[1] Univ West Indies, Fac Med Sci, Bridgetown, Barbados
[2] Univ Dhaka, Dept Populat Sci, Dhaka, Bangladesh
[3] Univ West Indies, Fac Med Sci, Med Educ, Cave Hill Campus, Bridgetown, Barbados
[4] Univ West Indies, Fac Med Sci, Pharmacol, Cave Hill Campus, Bridgetown, Barbados
Keywords
artificial intelligence; ChatGPT-4; medical students; knowledge; interpretation abilities; multiple choice questions; EDUCATION
DOI
10.2147/AMEP.S457408
Chinese Library Classification
G40 [Education]
Discipline Codes
040101; 120403
Abstract
Introduction: This research investigated the capabilities of ChatGPT-4 compared to medical students in answering MCQs, using the revised Bloom's Taxonomy as a benchmark.

Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing.

Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than the students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy level did not. A test of association between program level and Bloom's Taxonomy level for ChatGPT-4's correct answers showed a highly significant correlation (p<0.001), reflecting a concentration of "remember"-level questions in preclinical courses and "evaluate"-level questions in clinical courses.

Discussion: The study highlights ChatGPT-4's proficiency on standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies with course content.

Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address its limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
Pages: 393-400 (8 pages)