Artificial intelligence in dental education: ChatGPT's performance on the periodontic in-service examination

Cited by: 10
Authors
Danesh, Arman [1 ]
Pazouki, Hirad [2 ]
Danesh, Farzad [3 ]
Danesh, Arsalan [4 ,5 ]
Vardar-Sengul, Saynur [4 ]
Affiliations
[1] Western Univ, Schulich Sch Med & Dent, London, ON, Canada
[2] Western Univ, Fac Sci, London, ON, Canada
[3] Elgin Mills Endodont Specialists, Richmond Hill, ON, Canada
[4] Nova Southeastern Univ, Coll Dent Med, Dept Periodontol, Davie, FL, USA
[5] Nova Southeastern Univ, Coll Dent Med, Dept Periodontol, 3050 S Univ Dr, Davie, FL 33314, USA
Keywords
artificial intelligence; continuing dental education; dentistry; periodontics; GPT-4;
DOI
10.1002/JPER.23-0514
Chinese Library Classification (CLC)
R78 [Stomatology]
Discipline code
1003
Abstract
Background: ChatGPT's (Chat Generative Pre-trained Transformer) remarkable capacity to generate human-like output makes it an appealing learning tool for healthcare students worldwide. Nevertheless, the chatbot's responses may contain inaccuracies, posing a serious risk of misinformation. ChatGPT's capabilities should be examined in every corner of healthcare education, including dentistry and its specialties, to understand the potential for misinformation associated with the chatbot's use as a learning tool. Our investigation aims to explore ChatGPT's foundation of knowledge in the field of periodontology by evaluating the chatbot's performance on questions obtained from an in-service examination administered by the American Academy of Periodontology (AAP).
Methods: ChatGPT-3.5 and ChatGPT-4 were evaluated on 311 multiple-choice questions obtained from the 2023 in-service examination administered by the AAP. The dataset of in-service examination questions was accessed through Nova Southeastern University's Department of Periodontology. Questions containing an image were excluded because the chatbot did not accept image inputs. Independent sample means were compared with a two-tailed t test, sample proportions were compared with a two-tailed chi-square test, and a p value below 0.05 was considered statistically significant.
Results: ChatGPT-3.5 and ChatGPT-4 answered 57.9% and 73.6% of questions on the 2023 Periodontics In-Service Written Examination correctly, respectively.
Conclusion: While ChatGPT-4 showed higher proficiency than ChatGPT-3.5, both chatbot models leave considerable room for misinformation in their responses relating to periodontology. These findings encourage residents to scrutinize the periodontic information generated by ChatGPT to account for the chatbot's current limitations.
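Illustrative note: the following is a minimal Python sketch of the proportion comparison described in the Methods, not the authors' analysis code. It assumes correct-answer counts of roughly 180/311 (57.9%) for ChatGPT-3.5 and 229/311 (73.6%) for ChatGPT-4, inferred from the reported percentages; the study's exact counts may differ.

# Two-tailed chi-square test comparing the proportion of in-service questions
# answered correctly by ChatGPT-3.5 and ChatGPT-4.
# Counts are approximations back-calculated from the reported percentages.
from scipy.stats import chi2_contingency

n_questions = 311
correct_gpt35 = round(0.579 * n_questions)  # ~180 correct answers (assumed)
correct_gpt4 = round(0.736 * n_questions)   # ~229 correct answers (assumed)

# 2x2 contingency table: rows = model, columns = correct / incorrect
table = [
    [correct_gpt35, n_questions - correct_gpt35],
    [correct_gpt4, n_questions - correct_gpt4],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # p < 0.05 would indicate a significant difference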
Pages: 682-687
Number of pages: 6