A comparison of word embeddings for the biomedical natural language processing

Cited: 244
Authors
Wang, Yanshan [1 ]
Liu, Sijia [1 ]
Afzal, Naveed [1 ]
Rastegar-Mojarad, Majid [1 ]
Wang, Liwei [1 ]
Shen, Feichen [1 ]
Kingsbury, Paul [1 ]
Liu, Hongfang [1 ]
Affiliations
[1] Mayo Clin, Dept Hlth Sci Res, Rochester, MN 55905 USA
Keywords
Word embeddings; Natural language processing; Information extraction; Information retrieval; Machine learning; Relatedness
DOI
10.1016/j.jbi.2018.09.008
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Background: Word embeddings have been widely used in biomedical Natural Language Processing (NLP) applications because the vector representations can capture useful semantic properties and linguistic relationships between words. Different textual resources (e.g., Wikipedia and biomedical literature corpora) have been utilized in biomedical NLP to train word embeddings, and these word embeddings have been commonly leveraged as feature input to downstream machine learning models. However, there has been little work on evaluating the word embeddings trained from different textual resources.

Methods: In this study, we empirically evaluated word embeddings trained from four different corpora, namely clinical notes, biomedical publications, Wikipedia, and news. For the former two resources, we trained word embeddings using unstructured electronic health record (EHR) data available at Mayo Clinic and articles (MedLit) from PubMed Central, respectively. For the latter two resources, we used publicly available pre-trained word embeddings, GloVe and Google News. The evaluation was done both qualitatively and quantitatively. For the qualitative evaluation, we randomly selected medical terms from three categories (i.e., disorder, symptom, and drug) and manually inspected the five most similar words computed by each set of embeddings for each term. We also analyzed the word embeddings through a 2-dimensional visualization plot of 377 medical terms. For the quantitative evaluation, we conducted both intrinsic and extrinsic evaluations. For the intrinsic evaluation, we assessed the word embeddings' ability to capture medical semantics by measuring the semantic similarity between medical terms using four published datasets: Pedersen's dataset, Hliaoutakis's dataset, MayoSRS, and UMNSRS. For the extrinsic evaluation, we applied the word embeddings to multiple downstream biomedical NLP applications, including clinical information extraction (IE), biomedical information retrieval (IR), and relation extraction (RE), with data from shared tasks.

Results: The qualitative evaluation shows that the word embeddings trained from EHR and MedLit can find more similar medical terms than those trained from GloVe and Google News. The intrinsic quantitative evaluation verifies that the semantic similarity captured by the word embeddings trained from EHR is closer to human experts' judgments on all four tested datasets. The extrinsic quantitative evaluation shows that the word embeddings trained on EHR achieved the best F1 score of 0.900 for the clinical IE task; no word embeddings improved the performance for the biomedical IR task; and the word embeddings trained on Google News had the best overall F1 score of 0.790 for the RE task.

Conclusion: Based on the evaluation results, we can draw the following conclusions. First, the word embeddings trained from EHR and MedLit capture the semantics of medical terms better, and find semantically relevant medical terms closer to human experts' judgments, than those trained from GloVe and Google News. Second, there is no consistent global ranking of word embeddings across all downstream biomedical NLP applications; however, adding word embeddings as extra features improves results on most downstream tasks. Finally, the word embeddings trained from biomedical-domain corpora do not necessarily outperform those trained from general-domain corpora on every downstream biomedical NLP task.
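As a rough illustration of the qualitative and intrinsic evaluations described in the abstract, the sketch below uses gensim to (a) list the five most similar words for a medical term by cosine similarity and (b) correlate embedding similarities with human similarity judgments. This is a minimal sketch under stated assumptions, not the authors' code: the word2vec file format, the file name ehr_word2vec.bin, the example terms, the placeholder reference scores, and the use of Spearman's rank correlation are all illustrative and not taken from the paper.

```python
# Minimal sketch of a nearest-neighbor inspection and an intrinsic
# similarity evaluation for word embeddings.
# Assumptions (not from the paper): binary word2vec format, the file
# name "ehr_word2vec.bin", the example terms and reference scores,
# and Spearman's rank correlation as the agreement measure.
from gensim.models import KeyedVectors
from scipy.stats import spearmanr

# Load pre-trained word vectors (binary word2vec format assumed).
kv = KeyedVectors.load_word2vec_format("ehr_word2vec.bin", binary=True)

# Qualitative check: the five most similar words for each medical term,
# ranked by cosine similarity, mirroring the paper's manual inspection.
for term in ["diabetes", "aspirin", "headache"]:
    if term in kv:
        print(term, "->", kv.most_similar(term, topn=5))

# Intrinsic check: correlate embedding similarity with human judgments
# on term pairs, in the spirit of Pedersen's dataset, MayoSRS, and
# UMNSRS. The pairs and scores below are made-up placeholders.
pairs = [("heart", "myocardium", 3.3),
         ("stroke", "infarct", 3.0),
         ("diabetes", "hyperglycemia", 2.5)]
human, model = [], []
for w1, w2, score in pairs:
    if w1 in kv and w2 in kv:  # skip out-of-vocabulary terms
        human.append(score)
        model.append(float(kv.similarity(w1, w2)))

rho, _ = spearmanr(human, model)
print(f"Spearman correlation with human judgments: {rho:.3f}")
```

Note that real medical terms are often multi-word (e.g., "renal failure"), so an actual evaluation against these datasets would need phrase vectors or token averaging; the sketch sticks to single tokens for brevity.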
Pages: 12-20
Number of pages: 9
Related Papers
50 records in total
  • [1] Word embeddings for biomedical natural language processing: A survey
    Chiu, Billy
    Baker, Simon
    [J]. LANGUAGE AND LINGUISTICS COMPASS, 2020, 14 (12)
  • [2] Dissecting word embeddings and language models in natural language processing
    Verma, Vivek Kumar
    Pandey, Mrigank
    Jain, Tarun
    Tiwari, Pradeep Kumar
    [J]. JOURNAL OF DISCRETE MATHEMATICAL SCIENCES & CRYPTOGRAPHY, 2021, 24 (05): 1509-1515
  • [3] Word Embeddings for Latvian Natural Language Processing Tools
    Znotins, Arturs
    [J]. HUMAN LANGUAGE TECHNOLOGIES - THE BALTIC PERSPECTIVE, 2016, 289: 167-173
  • [4] Domain specific word embeddings for natural language processing in radiology
    Chen, Timothy L.
    Emerling, Max
    Chaudhari, Gunvant R.
    Chillakuru, Yeshwant R.
    Seo, Youngho
    Vu, Thienkhai H.
    Sohn, Jae Ho
    [J]. JOURNAL OF BIOMEDICAL INFORMATICS, 2021, 113
  • [5] Computationally Efficient Learning of Quality Controlled Word Embeddings for Natural Language Processing
    Alawad, Mohammed
    Tourassi, Georgia
    [J]. 2019 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2019), 2019: 134-139
  • [6] Biomedical Natural Language Processing
    Hamon, Thierry
    [J]. TRAITEMENT AUTOMATIQUE DES LANGUES, 2013, 54 (03): 77-79
  • [7] Biomedical Natural Language Processing
    Kim, Jin-Dong
    [J]. COMPUTATIONAL LINGUISTICS, 2017, 43 (01): 265-267
  • [8] Word Embeddings for Code-Mixed Language Processing
    Pratapa, Adithya
    Choudhury, Monojit
    Sitaram, Sunayana
    [J]. 2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018: 3067-3072
  • [9] Biomedical Word Sense Disambiguation with Word Embeddings
    Antunes, Rui
    Matos, Sergio
    [J]. 11TH INTERNATIONAL CONFERENCE ON PRACTICAL APPLICATIONS OF COMPUTATIONAL BIOLOGY & BIOINFORMATICS, 2017, 616: 273-279
  • [10] Continuous-Space Language Processing: Beyond Word Embeddings
    Ostendorf, Mari
    [J]. STATISTICAL LANGUAGE AND SPEECH PROCESSING, SLSP 2016, 2016, 9918: 3-15