Aye, AI! ChatGPT passes multiple-choice family medicine exam

Cited by: 33
Authors
Morreel, Stefan [1]
Mathysen, Danny [1]
Verhoeven, Veronique [1]
Affiliations
[1] Univ Antwerp, Fac Med & Hlth Sci, Antwerp, Belgium
DOI
10.1080/0142159X.2023.2187684
Chinese Library Classification
G40 [Education]
Subject classification codes
040101; 120403
Pages: 665-666 (2 pages)
Related papers (50 total)
  • [31] ChatGPT Generated Otorhinolaryngology Multiple-Choice Questions: Quality, Psychometric Properties, and Suitability for Assessments
    Lotto, Cecilia
    Sheppard, Sean C.
    Anschuetz, Wilma
    Stricker, Daniel
    Molinari, Giulia
    Huwendiek, Soeren
    Anschuetz, Lukas
    OTO OPEN, 2024, 8 (03)
  • [32] Nursing students collaborating to develop multiple-choice exam revision questions: A student engagement study
    Craft, Judy A.
    Christensen, Martin
    Shaw, Natasha
    Bakon, Shannon
    NURSE EDUCATION TODAY, 2017, 59 : 6 - 11
  • [33] Lack of predictive validity of multiple-choice examination in clinical medicine
    Viniegra, L.
    Montes, J.
    Gabbai, F.
    REVISTA DE INVESTIGACION CLINICA-CLINICAL AND TRANSLATIONAL INVESTIGATION, 1981, 33 (04) : 413 - 417
  • [35] Post-exam feedback with question rationales improves re-test performance of medical students on a multiple-choice exam
    Levant, Beth
    Zuckert, Wolfram
    Paolo, Anthony
    ADVANCES IN HEALTH SCIENCES EDUCATION, 2018, 23 (05) : 995 - 1003
  • [36] Delayed, But Not Immediate, Feedback After Multiple-Choice Questions Increases Performance on a Subsequent Short-Answer, But Not Multiple-Choice, Exam: Evidence for the Dual-Process Theory of Memory
    Sinha, Neha
    Glass, Arnold Lewis
    JOURNAL OF GENERAL PSYCHOLOGY, 2015, 142 (02) : 118 - 134
  • [37] ChatGPT-4 Omni's superiority in answering multiple-choice oral radiology questions
    Tassoker, Melek
    BMC ORAL HEALTH, 2025, 25 (01)
  • [38] ChatGPT's ability or prompt quality: what determines the success of generating multiple-choice questions
    Kiyak, Yavuz Selim
    ACADEMIC PATHOLOGY, 2024, 11 (02)
  • [39] ChatGPT prompts for generating multiple-choice questions in medical education and evidence on their validity: a literature review
    Kiyak, Yavuz Selim
    Emekli, Emre
    POSTGRADUATE MEDICAL JOURNAL, 2024, 100 (1189) : 858 - 865
  • [40] ChatGPT to generate clinical vignettes for teaching and multiple-choice questions for assessment: A randomized controlled experiment
    Coskun, Oezlem
    Kiyak, Yavuz Selim
    Budakoglu, Isil Irem
    MEDICAL TEACHER, 2025, 47 (02) : 268 - 274