Reranking and Self-Training for Parser Adaptation

Cited by: 0
Authors
McClosky, David [1 ]
Charniak, Eugene [1 ]
Johnson, Mark [1 ]
Institutions
[1] Brown Univ, BLLIP, Providence, RI 02912 USA
Keywords
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Statistical parsers trained and tested on the Penn Wall Street Journal (WSJ) treebank have shown vast improvements over the last 10 years. Much of this improvement, however, is based upon an ever-increasing number of features trained on (typically) the WSJ treebank data. This has led to concern that such parsers may be too finely tuned to this corpus at the expense of portability to other genres. Such worries have merit. The standard "Charniak parser" checks in at a labeled precision-recall f-measure of 89.7% on the Penn WSJ test set, but only 82.9% on the test set from the Brown treebank corpus. This paper should allay these fears. In particular, we show that the reranking parser described in Charniak and Johnson (2005) improves performance of the parser on Brown to 85.2%. Furthermore, use of the self-training techniques described in McClosky et al. (2006) raises this to 87.8% (an error reduction of 28%), again without any use of labeled Brown data. This is remarkable since training the parser and reranker on labeled Brown data achieves only 88.4%.
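The self-training recipe the abstract refers to can be summarized as: train on labeled (WSJ) data, parse unlabeled target-domain text with the reranking parser, then retrain on the union of gold and automatically parsed data. A minimal sketch of that loop, with toy stand-ins for the parser (the real Charniak/Johnson parser and reranker are not shown; `toy_train` and `toy_parse` are hypothetical placeholders):

```python
def self_train(train_fn, parse_fn, labeled, unlabeled):
    """One round of self-training: train an initial model on labeled
    data, label the unlabeled sentences with it, and retrain on the
    combined (gold + automatically labeled) corpus."""
    model = train_fn(labeled)
    auto_labeled = [(sent, parse_fn(model, sent)) for sent in unlabeled]
    return train_fn(labeled + auto_labeled)

# Toy stand-ins: "training" memorizes sentence->analysis pairs,
# "parsing" looks a sentence up, with a fallback analysis "X".
def toy_train(pairs):
    return dict(pairs)

def toy_parse(model, sent):
    return model.get(sent, "X")

model = self_train(toy_train, toy_parse,
                   labeled=[("the dog barks", "S")],
                   unlabeled=["the dog barks", "the cat sleeps"])
```

In the actual experiments the unlabeled data is out-of-domain text, and no labeled Brown data enters the loop; the gains come entirely from the parser's own (reranked) output.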
Pages: 337-344 (8 pages)
Related Papers (50 total)
  • [22] Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation
    Marsden, Robert A.
    Bartler, Alexander
    Doebler, Mario
    Yang, Bin
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [23] Energy-based Self-Training and Normalization for Unsupervised Domain Adaptation
    Herath, Samitha
    Fernando, Basura
    Abbasnejad, Ehsan
    Hayat, Munawar
    Khadivi, Shahram
    Harandi, Mehrtash
    Rezatofighi, Hamid
    Haffari, Gholamreza
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11619 - 11628
  • [24] Machine Reading Comprehension Framework Based on Self-Training for Domain Adaptation
    Lee, Hyeon-Gu
    Jang, Youngjin
    Kim, Harksoo
    [J]. IEEE ACCESS, 2021, 9 : 21279 - 21285
  • [25] Automatic adaptation of object detectors to new domains using self-training
    RoyChowdhury, Aruni
    Chakrabarty, Prithvijit
    Singh, Ashish
    Jin, SouYoung
    Jiang, Huaizu
    Cao, Liangliang
    Learned-Miller, Erik
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 780 - 790
  • [26] Self-Training of Handwritten Word Recognition for Synthetic-to-Real Adaptation
    Wolf, Fabian
    Fink, Gernot A.
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 3885 - 3892
  • [27] Unsupervised Adaptation of Question Answering Systems via Generative Self-training
    Rennie, Steven J.
    Marcheret, Etienne
    Mallinar, Neil
    Nahamoo, David
    Goel, Vaibhava
    [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 1148 - 1157
  • [28] Test-time adaptation via self-training with future information
    Wen, Xin
    Shen, Hao
    Zhao, Zhongqiu
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (03)
  • [29] Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training
    Spadotto, Teo
    Toldo, Marco
    Michieli, Umberto
    Zanuttigh, Pietro
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 2845 - 2852
  • [30] Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation
    Niemeijer, Joshua
    Schaefer, Joerg P.
    [J]. 2021 IEEE INTELLIGENT VEHICLES SYMPOSIUM WORKSHOPS (IV WORKSHOPS), 2021, : 364 - 371