Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language

Cited by: 35
Authors
Das, Arjun [1 ]
Ganguly, Debasis [2 ]
Garain, Utpal [3 ]
Affiliations
[1] Univ Calcutta, Dept Comp Sci & Engn, JD 2,Sect 3, Kolkata 700106, India
[2] Dublin City Univ, ADAPT Ctr, Sch Comp, Dublin, Ireland
[3] Indian Stat Inst, Comp Vis & Pattern Recognit Unit, 203 BT Rd, Kolkata 700108, India
Funding
Science Foundation Ireland
Keywords
Design; Algorithms; Performance; Word embedding; CRF-based NER; Wikipedia-based NER; unsupervised NER; language-independent NER; classifier;
DOI
10.1145/3015467
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this article, we propose a word embedding-based named entity recognition (NER) approach. NER is commonly approached as a sequence labeling task using methods such as conditional random fields (CRFs). However, for low-resource languages without sufficiently large training data, methods such as CRF do not perform well. In our work, we make use of the proximity of the vector embeddings of words to approach the NER problem. The hypothesis is that word vectors belonging to the same name category, such as a person's name, occur in close vicinity in the abstract vector space of the embedded words. Assuming that this clustering hypothesis holds, we apply a standard classification approach to the word vectors to learn a decision boundary between the NER classes. Our NER experiments are conducted on a morphologically rich and low-resource language, namely Bengali. Our approach significantly outperforms standard baseline CRF approaches that use cluster labels of word embeddings and gazetteers constructed from Wikipedia. Further, we propose an unsupervised approach that uses a named entity (NE) gazetteer automatically created from Wikipedia in the absence of training data. For a low-resource language, the word vectors obtained from Wikipedia alone are not sufficient to train a classifier. We therefore use the distance between the vector embeddings of words to expand the set of Wikipedia training examples with additional NEs extracted from a monolingual corpus, which yields a significant improvement in unsupervised NER performance. In fact, our expansion method performs better than the traditional CRF-based (supervised) approach (F-score of 65.4% vs. 64.2%). Finally, we compare our proposed approach to the official submissions for the IJCNLP-2008 Bengali NER shared task and achieve an overall F-score improvement of 11.26% with respect to the best official system.
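The two ideas described in the abstract, classifying word vectors into NE classes and expanding a Wikipedia-seeded gazetteer by embedding distance, can be illustrated with a minimal Python sketch. This is not the authors' implementation; the toy vocabulary, the random stand-in embeddings, the seed_gazetteer, the SVM classifier, and the similarity threshold are all hypothetical placeholders chosen only to make the sketch self-contained.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
dim = 50

# Toy stand-in for word vectors trained on a monolingual (e.g., Bengali) corpus.
vocab = ["kolkata", "dhaka", "rabindranath", "nazrul", "walks", "river"]
embeddings = {w: rng.normal(size=dim) for w in vocab}

# Supervised variant: classify word vectors into NE classes.
# Under the clustering hypothesis, vectors of the same name category lie close
# together, so a standard classifier can learn a decision boundary between them.
train = [("kolkata", "LOC"), ("dhaka", "LOC"),
         ("rabindranath", "PER"), ("walks", "O")]
X = np.array([embeddings[w] for w, _ in train])
y = [label for _, label in train]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(embeddings["nazrul"].reshape(1, -1)))  # class for an unseen word

# Unsupervised variant: expand a Wikipedia-seeded gazetteer.
# A word whose vector lies close enough (cosine similarity above a threshold)
# to a seed NE inherits that seed's class, growing the example set without labels.
seed_gazetteer = {"kolkata": "LOC", "rabindranath": "PER"}
threshold = 0.4  # hypothetical cut-off; would be tuned in practice
expanded = dict(seed_gazetteer)
for word, vec in embeddings.items():
    if word in expanded:
        continue
    for seed, label in seed_gazetteer.items():
        sim = cosine_similarity(vec.reshape(1, -1),
                                embeddings[seed].reshape(1, -1))[0, 0]
        if sim >= threshold:
            expanded[word] = label
            break
print(expanded)

In practice the expanded gazetteer would then feed back into training (or directly label a corpus), which is the mechanism the abstract credits for the unsupervised F-score of 65.4%.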
Pages: 19