Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study

Citations: 190
Author
Huh, Sun [1 ,2 ]
Affiliations
[1] Hallym Univ, Coll Med, Dept Parasitol, Chunchon, South Korea
[2] Hallym Univ, Inst Med Educ, Coll Med, Chunchon, South Korea
Keywords
Artificial intelligence; Educational measurement; Knowledge; Medical students; Republic of Korea
DOI
10.3352/jeehp.2023.20.1
Chinese Library Classification
G40 [Education];
Discipline Classification Codes
040101; 120403;
Abstract
This study aimed to compare the knowledge and interpretation ability of ChatGPT, a language model of artificial general intelligence, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT's overall performance score, its correct answer rate by the items' knowledge level, and the acceptability of its explanations of the items. ChatGPT's performance was lower than that of the medical students, and ChatGPT's correct answer rate was not related to the items' knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT's knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.
Pages: 5
Related Articles
5 items in total
  • [1] Is ChatGPT's Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
    Ghosh, Abhra
    Jindal, Nandita Maini
    Gupta, Vikram K.
    Bansal, Ekta
    Bajwa, Navjot Kaur
    Sett, Abhishek
    CUREUS JOURNAL OF MEDICAL SCIENCE, 2023, 15 (10)
  • [2] Are Different Versions of ChatGPT's Ability Comparable to the Clinical Diagnosis Presented in Case Reports? A Descriptive Study
    Chen, Jingfang
    Liu, Linlin
    Ruan, Shujin
    Li, Mengjun
    Yin, Chengliang
    JOURNAL OF MULTIDISCIPLINARY HEALTHCARE, 2023, 16 : 3825 - 3831
  • [3] Use of Eye-Tracking Technology by Medical Students Taking the Objective Structured Clinical Examination: Descriptive Study
    Grima-Murcia, M. D.
    Sanchez-Ferrer, Francisco
    Ramos-Rincon, Jose Manuel
    Fernandez, Eduardo
    JOURNAL OF MEDICAL INTERNET RESEARCH, 2020, 22 (08)
  • [4] Evaluation of ChatGPT's Ability in Basic Dermatology: A Comparative Study with Final-Year Medical Students
    Mohta, Alpana
    ACTA DERMATO-VENEREOLOGICA, 2023, 103 : 38 - 38
  • [5] Comparing ChatGPT's ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
    Lin, Chao-Cheng
    Akuhata-Huntington, Zaine
    Hsu, Che-Wei
    JOURNAL OF EDUCATIONAL EVALUATION FOR HEALTH PROFESSIONS, 2023, 20