LEARNING LATENT VARIABLE GRAMMARS FROM COMPLEMENTARY PERSPECTIVES

Cited by: 0
Authors
Li, Dongchen [1 ]
Zhang, Xiantao [1 ]
Wu, Xihong [1 ]
Affiliations
[1] Peking Univ, Speech & Hearing Res Ctr, Key Lab Machine Percept & Intelligence, Beijing 100871, Peoples R China
Source
2014 IEEE CHINA SUMMIT & INTERNATIONAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (CHINASIP) | 2014
Keywords
Parsing; sparsity; PCFGLA
DOI
Not available
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Classification Code
0812
Abstract
The corpus used to train a parser consists of sentences with heterogeneous grammar usage. Previous work on parser domain adaptation has concentrated on adapting to shifts in vocabulary rather than in grammar usage. In this paper, we focus on exploiting the diversity of the training data separately and then accumulating the resulting advantages. We propose an approach in which each grammar is biased toward a relevant syntactic style, and the complementary grammar usages are combined for inference. Multiple grammars with partly complementary strengths are induced individually; they capture complementary representations of the data, and we accumulate their advantages in a joint model that assembles their complementary descriptive power. Although the method is compatible with many other techniques, our product model achieves an F1 score of 85.20% on the Penn Chinese Treebank, higher than previous systems.
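The abstract describes combining several independently induced latent-variable grammars in a product model at inference time. Below is a minimal, hypothetical sketch of such a combination, assuming each grammar object exposes a log_prob(tree) method and that candidate parses are available for reranking; the Grammar interface, the weights, and the reranking setup are illustrative assumptions, not the paper's actual formulation.

def product_model_score(grammars, tree, weights=None):
    """Score a parse tree by multiplying the probabilities (i.e., summing the
    log-probabilities) assigned by several complementary grammars.
    The log_prob(tree) interface is an assumption for illustration."""
    if weights is None:
        # Equal weighting of the component grammars; the paper may weight differently.
        weights = [1.0 / len(grammars)] * len(grammars)
    return sum(w * g.log_prob(tree) for g, w in zip(grammars, weights))

def select_parse(grammars, candidate_trees):
    """Pick the candidate parse preferred by the product of the grammars."""
    return max(candidate_trees, key=lambda t: product_model_score(grammars, t))

In practice the candidate trees might be drawn from the k-best lists of the individual parsers before reranking with the product score; that detail is also an assumption here.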
Pages: 124-128 (5 pages)