Comparison and interpretability of machine learning models to predict severity of chest injury

Cited by: 5
Authors
Kulshrestha, Sujay [1 ,2 ]
Dligach, Dmitriy [3 ,4 ,5 ]
Joyce, Cara [3 ,4 ]
Gonzalez, Richard [1 ,2 ]
O'Rourke, Ann P. [6 ]
Glazer, Joshua M. [7 ]
Stey, Anne [8 ]
Kruser, Jacqueline M. [9 ]
Churpek, Matthew M. [9 ]
Afshar, Majid [9 ]
Affiliations
[1] Loyola Univ Chicago, Burn & Shock Trauma Res Inst, Maywood, IL USA
[2] Loyola Univ Med Ctr, Dept Surg, 2160 South First Ave,Bldg 110,Room 3210, Maywood, IL 60153 USA
[3] Loyola Univ Chicago, Ctr Hlth Outcomes & Informat Res, Div Hlth Sci, Maywood, IL USA
[4] Loyola Univ Chicago, Stritch Sch Med, Dept Publ Hlth Sci, Maywood, IL USA
[5] Loyola Univ Chicago, Dept Comp Sci, Chicago, IL USA
[6] Univ Wisconsin, Dept Surg, Madison, WI USA
[7] Univ Wisconsin, Dept Emergency Med, Madison, WI USA
[8] Northwestern Univ, Dept Surg, Chicago, IL USA
[9] Univ Wisconsin, Dept Med, Madison, WI USA
Funding
US National Institutes of Health (NIH);
Keywords
trauma surgery; machine learning; interpretability; TEXT; INFORMATION; PROGRESS; CURVES;
DOI
10.1093/jamiaopen/ooab015
Chinese Library Classification
R19 [Health Organizations and Services (Health Services Administration)];
Abstract
Objective: Trauma quality improvement programs and registries improve care and outcomes for injured patients. Designated trauma centers calculate injury scores using dedicated trauma registrars; however, many injured patients present to nontrauma centers, leaving a substantial amount of data uncaptured. We propose automated methods to identify severe chest injury from the electronic health record (EHR) using machine learning (ML) and natural language processing (NLP) for quality reporting.
Materials and Methods: A level I trauma center was queried for patients presenting after injury between 2014 and 2018. Prediction models were built to classify severe chest injury against a reference dataset labeled by certified registrars. Clinical documents from trauma encounters were processed into concept unique identifiers as inputs to three ML models: logistic regression with elastic net (EN) regularization, extreme gradient boosted (XGB) machines, and convolutional neural networks (CNN). The optimal model was identified by examining predictive performance and face validity using global explanations.
Results: Of 8952 encounters, 542 (6.1%) had a severe chest injury. The CNN and EN models had the highest discrimination, each with an area under the receiver operating characteristic curve of 0.93 and calibration slopes between 0.88 and 0.97. The CNN performed better across risk thresholds, with fewer discordant cases. Global explanations showed that the CNN model also had better face validity, with top features including "contusion of lung" and "hemopneumothorax."
Discussion: The CNN model combined optimal discrimination, calibration, and clinically relevant selected features.
Conclusion: NLP and ML methods to populate trauma registries for quality analyses are feasible.
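The elastic-net pipeline the abstract describes (clinical concepts in, a regularized logistic classifier out, evaluated by AUROC) can be sketched in miniature. This is a minimal, self-contained illustration under stated assumptions: the CUI codes, the synthetic encounter labels, and the hyperparameters are all hypothetical stand-ins, not the paper's data or settings, and gradient descent here stands in for whatever solver the authors used.

```python
# Toy sketch: elastic-net logistic regression over a bag-of-concepts
# representation of clinical notes, scored with AUROC. All CUIs, labels,
# and hyperparameters below are illustrative assumptions.
import math
import random

random.seed(0)

# Hypothetical vocabulary of concept unique identifiers (CUIs).
VOCAB = ["C0032227",  # pleural effusion
         "C0032285",  # pneumonia
         "C0238449",  # hemopneumothorax
         "C0273359",  # contusion of lung
         "C0018944"]  # hematoma

def featurize(cuis):
    """Binary bag-of-concepts vector over VOCAB."""
    return [1.0 if c in cuis else 0.0 for c in VOCAB]

# Synthetic encounters: in this toy set, severe chest injury (label 1)
# co-occurs with "hemopneumothorax" and "contusion of lung".
data = []
for _ in range(200):
    severe = random.random() < 0.3
    cuis = set()
    for c in VOCAB:
        p = 0.7 if (severe and c in ("C0238449", "C0273359")) else 0.1
        if random.random() < p:
            cuis.add(c)
    data.append((featurize(cuis), 1 if severe else 0))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, l1=0.01, l2=0.01, lr=0.1, epochs=200):
    """Logistic regression with an elastic-net (L1 + L2) penalty,
    fit by full-batch gradient descent."""
    w, b, n = [0.0] * len(VOCAB), 0.0, len(data)
    for _ in range(epochs):
        gw, gb = [0.0] * len(VOCAB), 0.0
        for x, y in data:
            err = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x))) - y
            for i, xi in enumerate(x):
                gw[i] += err * xi
            gb += err
        for i in range(len(w)):
            sign = (w[i] > 0) - (w[i] < 0)
            w[i] -= lr * (gw[i] / n + l2 * w[i] + l1 * sign)
        b -= lr * gb / n
    return w, b

def auroc(scores, labels):
    """Rank-based AUROC: probability a positive outranks a negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

w, b = train(data)
scores = [sigmoid(b + sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data]
labels = [y for _, y in data]
print("Training AUROC: %.2f" % auroc(scores, labels))
print("Top weights:", {c: round(wi, 2) for c, wi in zip(VOCAB, w)})
```

After training, the largest positive weights land on the injury-defining concepts, which mirrors the global-explanation face-validity check the abstract describes (a full system would use SHAP-style attributions rather than raw coefficients).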
Pages: 8
Related Papers
50 records in total
  • [1] Comparison of traffic accident injury severity prediction models with explainable machine learning
    Cicek, Elif
    Akin, Murat
    Uysal, Furkan
    Topcu Aytas, Reyhan Merve
    [J]. TRANSPORTATION LETTERS-THE INTERNATIONAL JOURNAL OF TRANSPORTATION RESEARCH, 2023, 15 (09): : 1043 - 1054
  • [2] Utilizing Machine Learning Models to Predict the Car Crash Injury Severity among Elderly Drivers
    Al Mamlook, Rabia Emhamed
    Abdulhameed, Tiba Zaki
    Hasan, Raed
    Al-Shaikhli, Hasnaa Imad
    Mohammed, Ihab
    Tabatabai, Shadha
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ELECTRO INFORMATION TECHNOLOGY (EIT), 2020, : 105 - 111
  • [3] Comparison of Machine Learning Models to Predict Twitter Buzz
    Parikh, Yash
    Abdelfattah, Eman
    [J]. 2019 IEEE 10TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2019, : 69 - 73
  • [4] Interpretability and Explainability of Machine Learning Models: Achievements and Challenges
    Henriques, J.
    Rocha, T.
    de Carvalho, P.
    Silva, C.
    Paredes, S.
    [J]. INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS 2022, ICBHI 2022, 2024, 108 : 81 - 94
  • [5] Measuring Interpretability for Different Types of Machine Learning Models
    Zhou, Qing
    Liao, Fenglu
    Mou, Chao
    Wang, Ping
    [J]. TRENDS AND APPLICATIONS IN KNOWLEDGE DISCOVERY AND DATA MINING: PAKDD 2018 WORKSHOPS, 2018, 11154 : 295 - 308
  • [6] The Importance of Interpretability and Validations of Machine-Learning Models
    Yamasawa, Daisuke
    Ozawa, Hideki
    Goto, Shinichi
    [J]. CIRCULATION JOURNAL, 2024, 88 (01) : 157 - 158
  • [7] Advancing interpretability of machine-learning prediction models
    Trenary, Laurie
    DelSole, Timothy
    [J]. ENVIRONMENTAL DATA SCIENCE, 2022, 1
  • [8] Interpretability and causal discovery of the machine learning models to predict the production of CBM wells after hydraulic fracturing
    Min, Chao
    Wen, Guoquan
    Gou, Liangjie
    Li, Xiaogang
    Yang, Zhaozhong
    [J]. ENERGY, 2023, 285
  • [9] Accuracy, Fairness, and Interpretability of Machine Learning Criminal Recidivism Models
    Ingram, Eric
    Gursoy, Furkan
    Kakadiaris, Ioannis A.
    [J]. 2022 IEEE/ACM INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING, APPLICATIONS AND TECHNOLOGIES, BDCAT, 2022, : 233 - 241
  • [10] Applying Genetic Programming to Improve Interpretability in Machine Learning Models
    Ferreira, Leonardo Augusto
    Guimaraes, Frederico Gadelha
    Silva, Rodrigo
    [J]. 2020 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2020,