A Comparative Analysis of ChatGPT and Medical Faculty Graduates in Medical Specialization Exams: Uncovering the Potential of Artificial Intelligence in Medical Education

Cited by: 0
Authors
Gencer, Gulcan [1 ]
Gencer, Kerem [2 ]
Affiliations
[1] Afyonkarahisar Hlth Sci Univ, Fac Med, Dept Biostat & Med Informat, Afyonkarahisar, Turkiye
[2] Afyon Kocatepe Univ, Fac Engn, Dept Comp Engn, Afyonkarahisar, Turkiye
Keywords
chatgpt; innovation; lifelong learning; learning opportunities; qualified teachers;
DOI
not available
CLC classification
R5 [Internal Medicine];
Subject classification
1002; 100201
Abstract
Background: This study evaluates the performance of ChatGPT on the medical specialization exam (MSE) that medical graduates take when choosing their postgraduate specialization, and considers how artificial intelligence-supported education can raise the quality of medical education and academic success. The research explores the potential applications and advantages of artificial intelligence in medical education and examines how this technology can contribute to student learning and exam preparation.
Methodology: A total of 240 MSE questions were posed to ChatGPT: 120 basic medical sciences questions and 120 clinical medical sciences questions. A total of 18,481 people took the exam. The performance of medical school graduates was compared with that of ChatGPT-3.5 in terms of answering these questions correctly. The average score for ChatGPT-3.5 was calculated as the midpoint of its minimum and maximum scores. Calculations were performed in R 4.0.2.
Results: Graduates' overall scores ranged from a minimum of 7.51 to a maximum of 81.46 in basic sciences, and from a minimum of 12.51 to a maximum of 80.78 in clinical sciences. ChatGPT's scores ranged from 60.00 to 72.00 in basic sciences and from 66.25 to 77.00 in clinical sciences. In basic medical sciences, the correct-answer rate was 43.03% for graduates versus 60.00% for ChatGPT; in clinical medical sciences, it was 53.29% for graduates versus 64.16% for ChatGPT. ChatGPT performed best in Obstetrics and Gynecology (91.66% correct) and Medical Microbiology (86.36% correct). Its weakest area was Anatomy, a subfield of basic medical sciences, with a 28.00% correct-answer rate; graduates outperformed ChatGPT in the Anatomy and Physiology subfields. Significant differences were found in all comparisons between ChatGPT and graduates.
Conclusions: This study shows that artificial intelligence models such as ChatGPT can offer significant advantages, as ChatGPT scored higher than medical school graduates. Recommended applications include interactive support, tutoring, learning-material production, personalized learning plans, self-assessment, motivation boosting, and 24/7 access. Artificial intelligence-supported education can therefore play an important role in improving the quality of medical education and increasing student success.
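The midpoint-averaging described in the Methodology, and the correct-rate gap it is compared against, can be sketched as follows. This is a minimal illustration in Python rather than the R 4.0.2 the authors used; all numbers are taken directly from the abstract, and the function name is ours:

```python
def midpoint_average(min_score, max_score):
    """Estimate an average score as the midpoint of the minimum and maximum,
    as the abstract describes for ChatGPT-3.5."""
    return (min_score + max_score) / 2

# ChatGPT-3.5 score ranges reported in the abstract
basic_avg = midpoint_average(60.00, 72.00)     # basic medical sciences
clinical_avg = midpoint_average(66.25, 77.00)  # clinical medical sciences

# Correct-answer rates (%) reported for graduates vs. ChatGPT
rates = {
    "basic":    {"graduates": 43.03, "chatgpt": 60.00},
    "clinical": {"graduates": 53.29, "chatgpt": 64.16},
}

for field, r in rates.items():
    gap = r["chatgpt"] - r["graduates"]
    print(f"{field}: ChatGPT leads graduates by {gap:.2f} percentage points")

print(f"ChatGPT midpoint averages: basic={basic_avg}, clinical={clinical_avg}")
```

Note that a midpoint of the minimum and maximum is only a rough proxy for a mean; the abstract does not report ChatGPT's full score distribution.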
Pages: 9