Artificial intelligence in dental education: ChatGPT's performance on the periodontic in-service examination

Cited by: 10
Authors
Danesh, Arman [1]
Pazouki, Hirad [2]
Danesh, Farzad [3]
Danesh, Arsalan [4,5]
Vardar-Sengul, Saynur [4]
Affiliations
[1] Western Univ, Schulich Sch Med & Dent, London, ON, Canada
[2] Western Univ, Fac Sci, London, ON, Canada
[3] Elgin Mills Endodont Specialists, Richmond Hill, ON, Canada
[4] Nova Southeastern Univ, Coll Dent Med, Dept Periodontol, Davie, FL USA
[5] Nova Southeastern Univ, Coll Dent Med, Dept Periodontol, 3050 S Univ Dr, Davie, FL 33314 USA
Keywords
artificial intelligence; continuing dental education; dentistry; periodontics; GPT-4
DOI
10.1002/JPER.23-0514
Chinese Library Classification (CLC)
R78 [Stomatology]
Discipline classification code
1003
Abstract
Background: ChatGPT's (Chat Generative Pre-Trained Transformer) remarkable capacity to generate human-like output makes it an appealing learning tool for healthcare students worldwide. Nevertheless, the chatbot's responses may contain inaccuracies, posing a serious risk of misinformation. ChatGPT's capabilities should be examined across all areas of healthcare education, including dentistry and its specialties, to understand the potential for misinformation associated with the chatbot's use as a learning tool. Our investigation aims to explore ChatGPT's foundation of knowledge in the field of periodontology by evaluating the chatbot's performance on questions obtained from an in-service examination administered by the American Academy of Periodontology (AAP).
Methods: ChatGPT3.5 and ChatGPT4 were evaluated on 311 multiple-choice questions obtained from the 2023 in-service examination administered by the AAP. The dataset of in-service examination questions was accessed through Nova Southeastern University's Department of Periodontology. Questions containing an image were excluded because ChatGPT does not accept image inputs.
Results: ChatGPT3.5 and ChatGPT4 answered 57.9% and 73.6% of questions correctly on the 2023 Periodontics In-Service Written Examination, respectively. Independent sample means were compared with a two-tailed t test, and sample proportions were compared with a two-tailed chi-squared test. A p value below the threshold of 0.05 was deemed statistically significant.
Conclusion: While ChatGPT4 showed higher proficiency than ChatGPT3.5, both chatbot models leave considerable room for misinformation in their responses relating to periodontology. The findings of the study encourage residents to scrutinize the periodontic information generated by ChatGPT to account for the chatbot's current limitations.
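The abstract states that the models' answer-correctness proportions were compared with a two-tailed chi-squared test at a 0.05 significance level. The snippet below is a minimal sketch of such a two-proportion comparison in Python with SciPy; the correct-answer counts are back-calculated from the reported percentages (57.9% and 73.6% of 311 questions) rather than taken from the authors' data, and the code is an illustrative assumption, not the study's actual analysis.

```python
# Sketch of a two-tailed chi-squared test comparing the proportion of
# correctly answered questions between ChatGPT3.5 and ChatGPT4.
from scipy.stats import chi2_contingency

TOTAL_QUESTIONS = 311

# Counts reconstructed from the reported percentages (approximate, not raw tallies).
correct_gpt35 = round(0.579 * TOTAL_QUESTIONS)  # ~180 correct answers
correct_gpt4 = round(0.736 * TOTAL_QUESTIONS)   # ~229 correct answers

# 2x2 contingency table: rows = model, columns = (correct, incorrect).
table = [
    [correct_gpt35, TOTAL_QUESTIONS - correct_gpt35],
    [correct_gpt4, TOTAL_QUESTIONS - correct_gpt4],
]

chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```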
Pages: 682-687 (6 pages)