The performance of AI in medical examinations: an exploration of ChatGPT in ultrasound medical education

Cited by: 0
Authors
Hong, Dao-Rong [1 ]
Huang, Chun-Yan [2 ]
Affiliations
[1] Fujian Med Univ, Affiliated Hosp 2, Dept Ultrasonog, Quanzhou, Fujian, Peoples R China
[2] Fujian Med Univ, Affiliated Hosp 2, Dept Gen Practice, Quanzhou, Fujian, Peoples R China
Keywords
ChatGPT; ultrasound medicine; medical education; artificial intelligence (AI); examination;
DOI
10.3389/fmed.2024.1472006
Chinese Library Classification (CLC) number
R5 [Internal Medicine];
Discipline classification code
1002; 100201;
Abstract
Objective: This study aims to evaluate the accuracy of ChatGPT on China's Intermediate Professional Technical Qualification Examination for Ultrasound Medicine and to explore its potential role in ultrasound medical education.
Methods: A total of 100 questions, comprising 70 single-choice and 30 multiple-choice questions, were selected from the examination's question bank. The questions were categorized into four groups: basic knowledge, relevant clinical knowledge, professional knowledge, and professional practice. ChatGPT versions 3.5 and 4.0 were tested, and accuracy was measured as the proportion of correct answers for each version.
Results: ChatGPT 3.5 achieved an accuracy of 35.7% on single-choice and 30.0% on multiple-choice questions, while version 4.0 improved to 61.4% and 50.0%, respectively. Both versions performed better on basic knowledge questions but showed limitations on questions related to professional practice. Version 4.0 improved significantly over version 3.5 across all categories, yet it still underperformed resident doctors in certain areas.
Conclusion: Although ChatGPT did not meet the passing criteria for the Intermediate Professional Technical Qualification Examination in Ultrasound Medicine, its strong performance on basic medical knowledge suggests potential as a supplementary tool in medical education. However, its limitations on professional practice tasks must be addressed.
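The accuracy metric is a simple proportion of correct answers. As an illustrative reconstruction (the specific correct counts are not stated in the record, only the percentages and the 70 single-choice and 30 multiple-choice totals), the reported accuracies are consistent with the following whole-number counts:
\[
\text{accuracy} = \frac{\text{correct answers}}{\text{questions attempted}}, \qquad
\frac{25}{70} \approx 35.7\%, \quad \frac{43}{70} \approx 61.4\%, \quad \frac{9}{30} = 30.0\%, \quad \frac{15}{30} = 50.0\%.
\]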
Pages: 5
Related papers
50 records in total
  • [21] LABORATORIES AND EXAMINATIONS IN MEDICAL EDUCATION
    MUENCH, KH
    BIOESSAYS, 1984, 1 (04) : 180 - 181
  • [22] ORAL EXAMINATIONS IN MEDICAL EDUCATION
    JAYAWICKRAMARAJAH, PT
    MEDICAL EDUCATION, 1985, 19 (04) : 290 - 293
  • [23] Performance of ChatGPT and Bard on the medical licensing examinations varies across different cultures: a comparison study
    Chen, Yikai
    Huang, Xiujie
    Yang, Fangjie
    Lin, Haiming
    Lin, Haoyu
    Zheng, Zhuoqun
    Liang, Qifeng
    Zhang, Jinhai
    Li, Xinxin
    BMC MEDICAL EDUCATION, 2024, 24 (01)
  • [24] ChatGPT's performance in German OB/GYN exams - paving the way for AI-enhanced medical education and clinical practice
    Riedel, Maximilian
    Kaefinger, Katharina
    Stuehrenberg, Antonia
    Ritter, Viktoria
    Amann, Niklas
    Graf, Anna
    Recker, Florian
    Klein, Evelyn
    Kiechle, Marion
    Riedel, Fabian
    Meyer, Bastian
    FRONTIERS IN MEDICINE, 2023, 10
  • [25] ChatGPT in Medical Education: A Precursor for Automation Bias?
    Nguyen, Tina
    JMIR MEDICAL EDUCATION, 2024, 10
  • [26] The Intersection of ChatGPT, Clinical Medicine, and Medical Education
    Wong, Rebecca Shin-Yee
    Ming, Long Chiau
    Ali, Raja Affendi Raja
    JMIR MEDICAL EDUCATION, 2023, 9
  • [27] The application of ChatGPT in medical education: prospects and challenges
    Wu, Zhou
    Li, Sheng
    Zhao, Xiaofen
    INTERNATIONAL JOURNAL OF SURGERY, 2025, 111 (01) : 1652 - 1653
  • [28] Adapting ChatGPT for Color Blindness in Medical Education
    Wang, Jinge
    Yu, Thomas C.
    Kolodney, Michael S.
    Perrotta, Peter L.
    Hu, Gangqing
    ANNALS OF BIOMEDICAL ENGINEERING, 2025, 53 (01) : 5 - 8
  • [29] ChatGPT as an innovative heutagogical tool in medical education
    Saleem, Nudrat
    Mufti, Tabish
    Sohail, Shahab Saquib
    Madsen, Dag Oivind
    COGENT EDUCATION, 2024, 11 (01)
  • [30] Practical Applications of ChatGPT in Undergraduate Medical Education
    Tsang, Ricky
    JOURNAL OF MEDICAL EDUCATION AND CURRICULAR DEVELOPMENT, 2023, 10