Assessing the Risk of Bias in Randomized Clinical Trials With Large Language Models

Cited: 12
Authors
Lai, Honghao [1 ,2 ]
Ge, Long [1 ,2 ,3 ]
Sun, Mingyao [4 ]
Pan, Bei [5 ]
Huang, Jiajie [6 ]
Hou, Liangying [5 ,7 ]
Yang, Qiuyu [1 ,2 ]
Liu, Jiayi [1 ,2 ]
Liu, Jianing [6 ]
Ye, Ziying [1 ,2 ]
Xia, Danni [1 ,2 ]
Zhao, Weilong [1 ,2 ]
Wang, Xiaoman [5 ]
Liu, Ming [5 ,7 ]
Talukdar, Jhalok Ronjan [7 ]
Tian, Jinhui [3 ,5 ]
Yang, Kehu [3 ,5 ]
Estill, Janne [5 ,8 ]
Affiliations
[1] Lanzhou Univ, Sch Publ Hlth, Dept Hlth Policy & Management, Lanzhou, Peoples R China
[2] Lanzhou Univ, Evidence Based Social Sci Res Ctr, Sch Publ Hlth, 199 Donggang West Rd, Lanzhou 730000, Peoples R China
[3] Key Lab Evidence Based Med & Knowledge Translat Gansu Prov, Lanzhou, Peoples R China
[4] Lanzhou Univ, Evidence Based Nursing Ctr, Sch Nursing, Lanzhou, Peoples R China
[5] Lanzhou Univ, Sch Basic Med Sci, Evidence Based Med Ctr, Lanzhou, Peoples R China
[6] Gansu Univ Chinese Med, Coll Nursing, Lanzhou, Peoples R China
[7] McMaster Univ, Dept Hlth Res Methods Evidence & Impact, Hamilton, ON, Canada
[8] Univ Geneva, Inst Global Hlth, Geneva, Switzerland
Keywords
DOUBLE-BLIND; PRIMARY INSOMNIA; INTERRATER RELIABILITY; REBOUND INSOMNIA; WEIGHT-LOSS; LONG-TERM; RED MEAT; EFFICACY; SAFETY; DIET;
DOI
10.1001/jamanetworkopen.2024.12687
Chinese Library Classification (CLC)
R5 [Internal Medicine];
Discipline Code
1002; 100201;
Abstract
Importance: Large language models (LLMs) may facilitate the labor-intensive process of systematic reviews. However, the exact methods and reliability remain uncertain.
Objective: To explore the feasibility and reliability of using LLMs to assess risk of bias (ROB) in randomized clinical trials (RCTs).
Design, Setting, and Participants: A survey study was conducted between August 10, 2023, and October 30, 2023. Thirty RCTs were selected from published systematic reviews.
Main Outcomes and Measures: A structured prompt was developed to guide ChatGPT (LLM 1) and Claude (LLM 2) in assessing the ROB in these RCTs using a modified version of the Cochrane ROB tool developed by the CLARITY group at McMaster University. Each RCT was assessed twice by both models, and the results were documented and compared with an assessment by 3 experts, which was considered a criterion standard. Correct assessment rates, sensitivity, specificity, and F1 scores were calculated to reflect accuracy, both overall and for each domain of the Cochrane ROB tool; consistent assessment rates and Cohen kappa were calculated to gauge consistency; and assessment time was calculated to measure efficiency. Performance between the 2 models was compared using risk differences.
Results: Both models demonstrated high correct assessment rates. LLM 1 reached a mean correct assessment rate of 84.5% (95% CI, 81.5%-87.3%), and LLM 2 reached a significantly higher rate of 89.5% (95% CI, 87.0%-91.8%). The risk difference between the 2 models was 0.05 (95% CI, 0.01-0.09). In most domains, domain-specific correct rates were around 80% to 90%; however, sensitivity below 0.80 was observed in domains 1 (random sequence generation), 2 (allocation concealment), and 6 (other concerns). Domains 4 (missing outcome data), 5 (selective outcome reporting), and 6 had F1 scores below 0.50. The consistent assessment rates between the 2 rounds were 84.0% for LLM 1 and 87.3% for LLM 2. LLM 1's kappa exceeded 0.80 in 7 domains and LLM 2's in 8 domains. The mean (SD) time needed for assessment was 77 (16) seconds for LLM 1 and 53 (12) seconds for LLM 2.
Conclusions: In this survey study of applying LLMs for ROB assessment, LLM 1 and LLM 2 demonstrated substantial accuracy and consistency in evaluating RCTs, suggesting their potential as supportive tools in systematic review processes.
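The accuracy and consistency metrics reported in the abstract (correct assessment rate, sensitivity, specificity, F1 score, Cohen kappa, and risk difference) follow standard definitions. The sketch below is a minimal illustration, not the authors' code: it reduces each ROB judgment to a hypothetical binary high-risk (1) vs. low-risk (0) call for one domain, and the example data, function names, and the Wald-style CI for the risk difference are assumptions (the paper does not state its exact CI method).

```python
# Minimal sketch of the abstract's metrics; all data and names are hypothetical.
import math
from collections import Counter

def correct_rate(pred, gold):
    """Proportion of LLM judgments matching the expert criterion standard."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def sensitivity_specificity_f1(pred, gold, positive=1):
    """Sensitivity, specificity, and F1 treating `positive` as the target class."""
    tp = sum(p == positive and g == positive for p, g in zip(pred, gold))
    fp = sum(p == positive and g != positive for p, g in zip(pred, gold))
    fn = sum(p != positive and g == positive for p, g in zip(pred, gold))
    tn = sum(p != positive and g != positive for p, g in zip(pred, gold))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    f1 = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else float("nan")
    return sens, spec, f1

def cohen_kappa(a, b):
    """Chance-corrected agreement between two assessment rounds."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

def risk_difference_ci(k1, n1, k2, n2, z=1.96):
    """Difference between two proportions with a Wald 95% CI (one common choice)."""
    p1, p2 = k1 / n1, k2 / n2
    rd = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Hypothetical judgments for one ROB domain across 10 trials.
expert = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
llm_r1 = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]   # first assessment round
llm_r2 = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]   # second assessment round

print("correct rate:", correct_rate(llm_r1, expert))
print("sens/spec/F1:", sensitivity_specificity_f1(llm_r1, expert))
print("round-to-round kappa:", cohen_kappa(llm_r1, llm_r2))
print("risk difference (model 2 vs 1):", risk_difference_ci(8, 10, 9, 10))
```

In the paper the units of analysis are far larger (30 RCTs, multiple domains, repeated rounds), but the same per-domain computations aggregate to the reported overall rates.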
Pages: 12
Related Papers
50 records in total
  • [41] Assessing Inference Time in Large Language Models
    Walkowiak, Bartosz
    Walkowiak, Tomasz
    SYSTEM DEPENDABILITY-THEORY AND APPLICATIONS, DEPCOS-RELCOMEX 2024, 2024, 1026 : 296 - 305
  • [42] Assessing the Strengths and Weaknesses of Large Language Models
    Lappin, Shalom
    JOURNAL OF LOGIC LANGUAGE AND INFORMATION, 2024, 33 (01) : 9 - 20
  • [44] Pipelines for Social Bias Testing of Large Language Models
    Nozza, Debora
    Bianchi, Federico
    Hovy, Dirk
    PROCEEDINGS OF WORKSHOP ON CHALLENGES & PERSPECTIVES IN CREATING LARGE LANGUAGE MODELS (BIGSCIENCE EPISODE #5), 2022, : 68 - 74
  • [45] A Causal View of Entity Bias in (Large) Language Models
    Wang, Fei
    Mo, Wenjie
    Wang, Yiwei
    Zhou, Wenxuan
    Chen, Muhao
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 15173 - 15184
  • [46] Cultural bias and cultural alignment of large language models
    Tao, Yan
    Viberg, Olga
    Baker, Ryan S.
    Kizilcec, Rene F.
    PNAS NEXUS, 2024, 3 (09):
  • [47] Locating and Mitigating Gender Bias in Large Language Models
    Cai, Yuchen
    Cao, Ding
    Guo, Rongxi
    Wen, Yaqin
    Liu, Guiquan
    Chen, Enhong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024, 2024, 14878 : 471 - 482
  • [48] Do Large Language Models Bias Human Evaluations?
    O'Leary, Daniel E.
    IEEE INTELLIGENT SYSTEMS, 2024, 39 (04) : 83 - 87
  • [49] From RAGs to riches: Utilizing large language models to write documents for clinical trials
    Markey, Nigel
    El-Mansouri, Ilyass
    Rensonnet, Gaetan
    van Langen, Casper
    Meier, Christoph
    CLINICAL TRIALS, 2025,
  • [50] RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials
    Marshall, Iain J.
    Kuiper, Joel
    Wallace, Byron C.
    JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2016, 23 (01) : 193 - 201