The Performance of Artificial Intelligence Chatbot (GPT-4) on Image-Based Dermatology Certification Board Exam Questions

Cited by: 0
Authors
Samman, Luna [1 ]
Akuffo-Addo, Edgar [2 ]
Rao, Babar [3 ]
Affiliations
[1] Rowan Sch Osteopath Med, Dept Dermatol, 113 Laurel Rd, Glassboro, NJ 08028 USA
[2] Univ Toronto, Dept Med, Div Dermatol, Toronto, ON, Canada
[3] Rutgers Robert Wood Johnson Med Sch, Dept Dermatol, Somerset, NJ USA
Keywords
GPT-4; images; dermatology; American Board of Dermatology; safety;
DOI
10.1177/12034754241266166
Chinese Library Classification (CLC)
R75 [Dermatology and Venereology];
Subject Classification Code
100206;
Abstract
Pages: 507-508
Number of pages: 2
Related Papers (32 records in total)
  • [21] Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study
    Noda, Masao
    Ueno, Takayoshi
    Koshu, Ryota
    Takaso, Yuji
    Shimada, Mari Dias
    Saito, Chizu
    Sugimoto, Hisashi
    Fushiki, Hiroaki
    Ito, Makoto
    Nomura, Akihiro
    Yoshizaki, Tomokazu
    JMIR MEDICAL EDUCATION, 2024, 10
  • [22] Accuracy and quality of ChatGPT-4o and Google Gemini performance on image-based neurosurgery board questions
    Sau, Suyash
    George, Derek D.
    Singh, Rohin
    Kohli, Gurkirat S.
    Li, Adam
    Jalal, Muhammad I.
    Singh, Aman
    Furst, Taylor J.
    Rahmani, Redi
    Vates, G. Edward
    Stone, Jonathan
    NEUROSURGICAL REVIEW, 2025, 48 (01)
  • [23] Advancing Medical Education: Performance of Generative Artificial Intelligence Models on Otolaryngology Board Preparation Questions With Image Analysis Insights
    Terwilliger, Emma
    Bcharah, George
    Bcharah, Hend
    Bcharah, Estefana
    Richardson, Clare
    Scheffler, Patrick
    CUREUS JOURNAL OF MEDICAL SCIENCE, 2024, 16 (07)
  • [24] Performance evaluation of ChatGPT-4.0 and Gemini on image-based neurosurgery board practice questions: A comparative analysis
    Mcnulty, Alana M.
    Valluri, Harshitha
    Gajjar, Avi A.
    Custozzo, Amanda
    Field, Nicholas C.
    Paul, Alexandra R.
    JOURNAL OF CLINICAL NEUROSCIENCE, 2025, 134
  • [25] Checklist for Evaluation of Image-Based Artificial Intelligence Reports in Dermatology CLEAR Derm Consensus Guidelines From the International Skin Imaging Collaboration Artificial Intelligence Working Group
    Daneshjou, Roxana
    Barata, Catarina
    Betz-Stablein, Brigid
    Celebi, M. Emre
    Codella, Noel
    Combalia, Marc
    Guitera, Pascale
    Gutman, David
    Halpern, Allan
    Helba, Brian
    Kittler, Harald
    Kose, Kivanc
    Liopyris, Konstantinos
    Malvehy, Josep
    Seog, Han Seung
    Soyer, H. Peter
    Tkaczyk, Eric R.
    Tschandl, Philipp
    Rotemberg, Veronica
    JAMA DERMATOLOGY, 2022, 158 (01) : 90 - 96
  • [26] Evaluating Artificial Intelligence Competency in Education: Performance of ChatGPT-4 in the American Registry of Radiologic Technologists (ARRT) Radiography Certification Exam
    Al-Naser, Yousif
    Halka, Felobater
    Ng, Boris
    Mountford, Dwight
    Sharma, Sonali
    Niure, Ken
    Yong-Hing, Charlotte
    Khosa, Faisal
    van der Pol, Christian
    ACADEMIC RADIOLOGY, 2025, 32 (02) : 597 - 603
  • [27] Artificial intelligence performance in image-based ovarian cancer identification: A systematic review and meta-analysis
    Xu, He-Li
    Gong, Ting-Ting
    Liu, Fang-Hua
    Chen, Hong-Yu
    Xiao, Qian
    Hou, Yang
    Huang, Ying
    Sun, Hong-Zan
    Shi, Yu
    Gao, Song
    Lou, Yan
    Chang, Qing
    Zhao, Yu-Hong
    Gao, Qing-Lei
    Wu, Qi-Jun
    ECLINICALMEDICINE, 2022, 53
  • [28] Assessment of the Responses of the Artificial Intelligence-based Chatbot ChatGPT-4 to Frequently Asked Questions About Amblyopia and Childhood Myopia
    Nikdel, Mojgan
    Ghadimi, Hadi
    Tavakoli, Mehdi
    Suh, Donny W.
    JOURNAL OF PEDIATRIC OPHTHALMOLOGY & STRABISMUS, 2024, 61 (02) : 86 - 89
  • [29] Comparative performance of artificial intelligence models in rheumatology board-level questions: evaluating Google Gemini and ChatGPT-4o
    Is, Enes Efe
    Menekseoglu, Ahmet Kivanc
    CLINICAL RHEUMATOLOGY, 2024, 43 (11) : 3507 - 3513
  • [30] Comparative performance of artificial intelligence models in rheumatology board-level questions: evaluating Google Gemini and ChatGPT-4o: correspondence
    Daungsupawong, Hinpetch
    Wiwanitkit, Viroj
    CLINICAL RHEUMATOLOGY, 2024, 43 (12) : 4015 - 4016