Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers

Cited by: 1
Authors
Roussinov, Dmitri [1 ]
Conkie, Andrew [2 ]
Patterson, Andrew [1 ]
Sainsbury, Christopher [3 ]
Affiliations
[1] Univ Strathclyde, Dept Comp & Informat Sci, Glasgow, Scotland
[2] Red Star Consulting, Glasgow, Scotland
[3] NHS Greater Glasgow & Clyde, Glasgow, Scotland
Source
FRONTIERS IN DIGITAL HEALTH, 2021
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
discharge summaries; BERT; clinical event prediction; pre-trained language models; transformers; deep learning; RISK;
DOI
10.3389/fdgth.2021.810260
Chinese Library Classification
R19 [Health care organization and services (health services management)];
Subject Classification Code
Abstract
Identifying which patients are at higher risk of dying or being re-admitted can save resources and lives, which makes it an important and challenging task for healthcare text analytics. While many successful approaches exist to predict such clinical events from categorical and numerical variables, a large share of health records exists as raw text, such as clinical notes or discharge summaries. The text-analytics models applied to the free-form natural language in those notes, however, lag behind the breakthroughs achieved in other domains and remain based primarily on older bag-of-words technologies. As a result, they rarely reach an accuracy level acceptable to clinicians. In spite of their success in other domains, the superiority of deep neural approaches over classical bag-of-words models for this task has not yet been convincingly demonstrated. Likewise, while some successful experiments have been reported, the most recent breakthroughs due to pre-trained language models have not yet made their way into the medical domain. Using a publicly available healthcare dataset, we explored several classification models to predict patient re-admission or death from discharge summaries and established that: 1) The performance of the neural models used in our experiments convincingly exceeds that of bag-of-words models by several percentage points on the standard metrics. 2) This allows us to reach the accuracy typically considered by clinicians to be of practical use (area under the ROC curve above 0.70) for the majority of our prediction targets. 3) While the pre-trained attention-based transformer performed only on par with the model that averages word embeddings when applied to full-length discharge summaries, the transformer handles shorter text segments substantially better, at times by a margin of 0.04 in the area under the ROC curve. Our findings thus extend the success of pre-trained language models reported in other domains to clinical event prediction, and likely to other text-classification tasks in healthcare analytics. 4) We suggest several models to overcome the transformers' major drawback, their input size limitation, and confirm that doing so is crucial to achieving their top performance. Our modifications are domain agnostic and can therefore be applied in other applications where the text inputs exceed 200 words. 5) We demonstrate how non-text attributes (such as patient age, demographics, and type of admission) can be combined with text to gain additional improvements for several prediction targets. We include extensive ablation studies showing the impact of the training-set size and highlighting the trade-offs between performance and the resources needed.
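To make the workflow described in the abstract concrete, the sketch below illustrates one plausible way (not necessarily the authors' exact method) to score a discharge summary that exceeds a transformer's input limit: split it into roughly 200-word segments, average the segment embeddings produced by a pre-trained encoder, concatenate simple non-text attributes such as age and admission type, and evaluate a downstream classifier with the area under the ROC curve. The encoder name, segment length, mean pooling, logistic-regression head, and feature choices are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: long-text clinical event prediction with a pre-trained
# transformer, segment averaging, and fusion with non-text attributes.
# All model/feature choices here are assumptions for illustration only.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

MODEL_NAME = "bert-base-uncased"  # assumed stand-in; a clinical BERT variant could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed_summary(text: str, words_per_segment: int = 200) -> np.ndarray:
    """Encode a long discharge summary as the mean of its segment embeddings."""
    words = text.split()
    segments = [" ".join(words[i:i + words_per_segment])
                for i in range(0, len(words), words_per_segment)] or [""]
    vectors = []
    with torch.no_grad():
        for segment in segments:
            inputs = tokenizer(segment, truncation=True, max_length=256,
                               return_tensors="pt")
            hidden = encoder(**inputs).last_hidden_state  # (1, tokens, dim)
            vectors.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.mean(vectors, axis=0)

def build_features(summaries, ages, emergency_flags):
    """Concatenate averaged text embeddings with simple non-text attributes."""
    text_vecs = np.stack([embed_summary(s) for s in summaries])
    tabular = np.column_stack([ages, emergency_flags])
    return np.hstack([text_vecs, tabular])

# Toy usage with two hypothetical records (label 1 = re-admitted).
X = build_features(
    summaries=["Patient admitted with chest pain and shortness of breath ...",
               "Elective procedure, uneventful recovery, discharged home ..."],
    ages=[74, 52],
    emergency_flags=[1, 0],
)
y = [1, 0]
clf = LogisticRegression(max_iter=1000).fit(X, y)
# AUC on the toy training data; illustrative only, not a real evaluation.
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```

In a real experiment the classifier (or a fine-tuned transformer head) would be trained and evaluated on held-out discharge summaries, with the AUC threshold of 0.70 mentioned in the abstract as a reference point for practical usefulness.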
Pages: 11
Related Papers
50 items in total
  • [1] Network-Based Bag-of-Words Model for Text Classification
    Yan, Dongyang
    Li, Keping
    Gu, Shuang
    Yang, Liu
    IEEE ACCESS, 2020, 8 : 82641 - 82652
  • [2] Visual Attention based Bag-of-Words Model for Image Classification
    Wang, Qiwei
    Wan, Shouhong
    Yue, Lihua
    Wang, Che
    6TH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2014), 2014, 9159
  • [3] From Bag-of-Words to Transformers: A Comparative Study for Text Classification in Healthcare Discussions in Social Media
    De Santis, Enrico
    Martino, Alessio
    Ronci, Francesca
    Rizzi, Antonello
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024,
  • [4] Embedding generation for text classification of Brazilian Portuguese user reviews: from bag-of-words to transformers
    Souza, Frederico Dias
    Filho, Joao Baptista de Oliveira e Souza
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (13): : 9393 - 9406
  • [5] Visual Cognitive Attention based Bag-of-words Image Representation for Object Discovery
    Ma, Zhong
    Wang, Zhuping
    PROCEEDINGS OF 2018 IEEE 17TH INTERNATIONAL CONFERENCE ON COGNITIVE INFORMATICS & COGNITIVE COMPUTING (ICCI*CC 2018), 2018, : 234 - 239
  • [6] A Deep Learning Model Based on Neural Bag-of-Words Attention for Sentiment Analysis
    Liao, Jing
    Yi, Zhixiang
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, 2021, 12815 : 467 - 478
  • [7] Graph-based bag-of-words for classification
    Silva, Fernanda B.
    Werneck, Rafael de O.
    Goldenstein, Siome
    Tabbone, Salvatore
    Torres, Ricardo da S.
    PATTERN RECOGNITION, 2018, 74 : 266 - 285
  • [8] Vehicle Logo Recognition Based on Bag-of-Words
    Yu, Shuyuan
    Zheng, Shibao
    Yang, Hua
    Liang, Longfei
    2013 10TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS 2013), 2013, : 353 - 358
  • [9] Score-based likelihood ratios for linguistic text evidence with a bag-of-words model
    Ishihara, Shunichi
    FORENSIC SCIENCE INTERNATIONAL, 2021, 327