Large language models to identify social determinants of health in electronic health records
Cited by: 52
Authors:
Guevara, Marco [1,2]
Chen, Shan [1,2]
Thomas, Spencer [1,2,3]
Chaunzwa, Tafadzwa L. [1,2]
Franco, Idalid [2]
Kann, Benjamin H. [1,2]
Moningi, Shalini [2]
Qian, Jack M. [1,2]
Goldstein, Madeleine [4]
Harper, Susan [4]
Aerts, Hugo J. W. L. [1,2,5,6]
Catalano, Paul J. [7,8]
Savova, Guergana K. [3]
Mak, Raymond H. [1,2]
Bitterman, Danielle S. [1,2]
Affiliations:
[1] Harvard Med Sch, Artificial Intelligence Med AIM Program, Mass Gen Brigham, Boston, MA 02115 USA
[2] Brigham & Womens Hosp, Dana Farber Canc Inst, Dept Radiat Oncol, Boston, MA 02115 USA
[3] Harvard Med Sch, Boston Childrens Hosp, Computat Hlth Informat Program, Boston, MA USA
[4] Dana Farber Canc Inst, Adult Resource Off, Boston, MA USA
[5] Maastricht Univ, Radiol & Nucl Med, GROW, Maastricht, Netherlands
[6] Maastricht Univ, CARIM, Maastricht, Netherlands
[7] Dana Farber Canc Inst, Dept Data Sci, Boston, MA USA
[8] Harvard TH Chan Sch Publ Hlth, Dept Biostat, Boston, MA USA
Funding:
European Research Council;
Keywords:
ADVERSE CHILDHOOD EXPERIENCES;
UNITED-STATES;
SUPPORT;
MORTALITY;
SURVIVAL;
WOMEN;
DOI:
10.1038/s41746-023-00970-0
Chinese Library Classification:
R19 [health care organization and services (health services administration)];
Abstract:
Social determinants of health (SDoH) play a critical role in patient outcomes, yet their documentation is often missing or incomplete in the structured data of electronic health records (EHRs). Large language models (LLMs) could enable high-throughput extraction of SDoH from the EHR to support research and clinical care. However, class imbalance and data limitations present challenges for this sparsely documented yet critical information. Here, we investigated the optimal methods for using LLMs to extract six SDoH categories from narrative text in the EHR: employment, housing, transportation, parental status, relationship, and social support. The best-performing models were fine-tuned Flan-T5 XL for any SDoH mentions (macro-F1 0.71) and Flan-T5 XXL for adverse SDoH mentions (macro-F1 0.70). The benefit of adding LLM-generated synthetic data to training varied across models and architectures, but it improved the performance of smaller Flan-T5 models (ΔF1 +0.12 to +0.23). Our best fine-tuned models outperformed ChatGPT-family models in zero- and few-shot settings, except GPT-4 with 10-shot prompting for adverse SDoH. Fine-tuned models were less likely than ChatGPT to change their predictions when race/ethnicity and gender descriptors were added to the text, suggesting less algorithmic bias (p < 0.05). Our models identified 93.8% of patients with adverse SDoH, while ICD-10 codes captured 2.0%. These results demonstrate the potential of LLMs to improve real-world evidence on SDoH and to assist in identifying patients who could benefit from resource support.
Pages: 14