MedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models

Cited by: 0
Authors:
Cai, Yan [1]
Wang, Linlin [1,2]
Wang, Ye [1]
de Melo, Gerard [3,4]
Zhang, Ya [2,5]
Wang, Yanfeng [2,5]
He, Liang [1]
Affiliations:
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
[3] Hasso Plattner Inst, Potsdam, Germany
[4] Univ Potsdam, Potsdam, Germany
[5] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Funding: National Natural Science Foundation of China
Keywords: none listed
DOI: not available
CLC number: TP18 [theory of artificial intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The emergence of various medical large language models (LLMs) has highlighted the need for unified evaluation standards, as manual evaluation of LLMs proves time-consuming and labor-intensive. To address this issue, we introduce MedBench, a comprehensive benchmark for the Chinese medical domain, comprising 40,041 questions sourced from authentic examination exercises and medical reports across diverse branches of medicine. In particular, the benchmark is composed of four key components: the Chinese Medical Licensing Examination, the Resident Standardization Training Examination, the Doctor In-Charge Qualification Examination, and real-world clinic cases encompassing examinations, diagnoses, and treatments. MedBench replicates the educational progression and clinical practice experiences of doctors in Mainland China, thereby establishing itself as a credible benchmark for assessing both the mastery of medical knowledge and the reasoning abilities of medical LLMs. We perform extensive experiments and conduct an in-depth analysis from diverse perspectives, which culminate in the following findings: (1) Chinese medical LLMs underperform on this benchmark, highlighting the need for significant advances in clinical knowledge and diagnostic precision. (2) Several general-domain LLMs surprisingly possess considerable medical knowledge. These findings elucidate both the capabilities and limitations of LLMs within the context of MedBench, with the ultimate goal of aiding the medical research community.
Pages: 17709-17717 (9 pages)