Learning multi-prototype word embedding from single-prototype word embedding with integrated knowledge

Cited by: 9
Authors
Yang, Xuefeng [1 ]
Mao, Kezhi [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Nanyang Ave, Singapore 639798, Singapore
Keywords
Multi-prototype word embedding; Distributional semantic model; Fine tuning; Semantic similarity
DOI
10.1016/j.eswa.2016.03.013
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Distributional semantic models (DSMs), or word embeddings, are widely used to predict semantic similarity and relatedness. However, context-aware similarity and relatedness prediction remains challenging because most DSMs and word embeddings use one vector per word, ignoring polysemy and homonymy. In this paper, we propose a supervised fine-tuning framework that transforms existing single-prototype word embeddings into multi-prototype word embeddings based on lexical semantic resources. As a post-processing step, the proposed framework is compatible with any sense inventory and any word embedding. To test the proposed learning framework, both intrinsic and extrinsic evaluations are conducted. Experimental results on 3 tasks across 8 datasets show that the multi-prototype word representations learned by the proposed framework outperform single-prototype word representations. (C) 2016 Elsevier Ltd. All rights reserved.
Pages: 291-299
Page count: 9
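The abstract describes a post-processing framework that turns a single-prototype embedding into one vector per word sense using a sense inventory. Below is a minimal sketch of that general idea, assuming WordNet as the sense inventory and gloss/synonym averaging as a simplified way to build each sense prototype; the function sense_prototypes and the dict-based embedding are illustrative assumptions, not the authors' supervised fine-tuning procedure.

# A minimal sketch (NOT the paper's exact method): derive one prototype
# vector per WordNet sense from a single-prototype word embedding.
import numpy as np
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def sense_prototypes(word, embedding):
    """embedding: dict mapping words to 1-D numpy arrays, e.g. a pre-trained
    word2vec or GloVe model loaded into a plain dict (assumed available)."""
    prototypes = {}
    for synset in wn.synsets(word):
        # Context words for this sense: gloss tokens plus synonym lemmas.
        context = synset.definition().lower().split()
        context += [lemma.name().lower() for lemma in synset.lemmas()]
        vectors = [embedding[w] for w in context if w in embedding]
        if vectors:
            # One prototype per sense: the centroid of its context vectors.
            prototypes[synset.name()] = np.mean(vectors, axis=0)
    return prototypes

For example, sense_prototypes("bank", embedding) would yield distinct vectors keyed by WordNet synset names such as 'bank.n.01' (sloping land) and 'bank.n.02' (financial institution), whereas the original embedding collapses both senses into a single vector.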
Related Papers
50 records in total
  • [21] A new word embedding model integrated with medical knowledge for deep learning-based sentiment classification
    Khine, Aye Hninn
    Wettayaprasit, Wiphada
    Duangsuwan, Jarunee
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 148
  • [22] Learning distributed word representation with multi-contextual mixed embedding
    Li, Jianqiang
    Li, Jing
    Fu, Xianghua
    Masud, M. A.
    Huang, Joshua Zhexue
    KNOWLEDGE-BASED SYSTEMS, 2016, 106 : 220 - 230
  • [23] Hierarchical Multi-Task Word Embedding Learning for Synonym Prediction
    Fei, Hongliang
    Tan, Shulong
    Li, Ping
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 834 - 842
  • [24] Chinese Word Embedding Learning with Limited Data
    Chen, Shurui
    Chen, Yufu
    Lu, Yuyin
    Rao, Yanghui
    Xie, Haoran
    Li, Qing
    WEB AND BIG DATA, APWEB-WAIM 2021, PT I, 2021, 12858 : 211 - 226
  • [25] A Hybrid Learning Scheme for Chinese Word Embedding
    Chen, Wenfan
    Sheng, Weiguo
    REPRESENTATION LEARNING FOR NLP, 2018, : 84 - 90
  • [26] Sidecar: Augmenting Word Embedding Models with Expert Knowledge
    Lemay, Mathieu
    Shapiro, Daniel
    MacPherson, Mary Kate
    Yee, Kieran
    Qassoud, Hamza
    Bolic, Miodrag
    ADVANCES IN INFORMATION AND COMMUNICATION, VOL 2, 2020, 1130 : 525 - 540
  • [27] Learning from open-set noisy labels based on multi-prototype modeling
    Zhang, Yue
    Chen, Yiyi
    Fang, Chaowei
    Wang, Qian
    Wu, Jiayi
    Xin, Jingmin
    PATTERN RECOGNITION, 2025, 157
  • [28] Single Training Dimension Selection for Word Embedding with PCA
    Wang, Yu
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 3597 - 3602
  • [29] An Enhanced Neural Word Embedding Model for Transfer Learning
    Kowsher, Md
    Sobuj, Md Shohanur Islam
    Shahriar, Md Fahim
    Prottasha, Nusrat Jahan
    Arefin, Mohammad Shamsul
    Dhar, Pranab Kumar
    Koshiba, Takeshi
APPLIED SCIENCES-BASEL, 2022, 12 (06)
  • [30] Distilled Wasserstein Learning for Word Embedding and Topic Modeling
    Xu, Hongteng
    Wang, Wenlin
    Liu, Wei
    Carin, Lawrence
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31