Understanding the effects of negative (and positive) pointwise mutual information on word vectors

Cited by: 2
Authors
Salle, Alexandre [1]
Villavicencio, Aline [1,2]
Affiliations
[1] Univ Fed Rio Grande do Sul, Inst Informat, Porto Alegre, RS, Brazil
[2] Univ Sheffield, Dept Comp Sci, Sheffield, S Yorkshire, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
Word embedding; lexical semantics; pointwise mutual information; MATRIX; MODELS;
DOI
10.1080/0952813X.2022.2072004
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Despite the recent popularity of contextual word embeddings, static word embeddings still dominate lexical semantic tasks, making their study of continued relevance. A widely adopted family of such static word embeddings is derived by explicitly factorising the Pointwise Mutual Information (PMI) weighting of the co-occurrence matrix. As unobserved co-occurrences lead PMI to negative infinity, a common workaround is to clip negative PMI at 0. However, it is unclear what information is lost by collapsing negative PMI values to 0. To answer this question, we isolate and study the effects of negative (and positive) PMI on the semantics and geometry of models adopting factorisation of different PMI matrices. Word and sentence-level evaluations show that only accounting for positive PMI in the factorisation strongly captures both semantics and syntax, whereas using only negative PMI captures little of semantics but a surprising amount of syntactic information. Results also reveal that incorporating negative PMI induces stronger rank invariance of vector norms and directions, as well as improved rare word representations.
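The abstract refers to the PMI weighting of the co-occurrence matrix and to the common workaround of clipping negative PMI at 0 (PPMI). The following is a minimal NumPy sketch of that computation for illustration only; the toy counts and variable names are assumptions, and this is not the authors' implementation.

```python
import numpy as np

# Toy word-context co-occurrence counts (rows: words, columns: contexts);
# the values are made up purely for demonstration.
C = np.array([
    [4.0, 0.0, 1.0],
    [2.0, 3.0, 0.0],
    [0.0, 1.0, 5.0],
])

total = C.sum()
p_wc = C / total                        # joint probability P(w, c)
p_w = p_wc.sum(axis=1, keepdims=True)   # marginal P(w)
p_c = p_wc.sum(axis=0, keepdims=True)   # marginal P(c)

with np.errstate(divide="ignore"):      # log(0) -> -inf for unobserved pairs
    pmi = np.log(p_wc / (p_w * p_c))

# Positive PMI (PPMI): the clipping discussed in the abstract, which maps all
# negative values (including -inf from unobserved co-occurrences) to 0 before
# factorisation; the paper studies the discarded negative part separately.
ppmi = np.maximum(pmi, 0.0)

print(pmi)
print(ppmi)
```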
Pages: 1161-1199
Number of pages: 39