Deep learning can contrast the minimal pairs of syntactic data

Cited: 1
Authors
Park, Kwonsik [1 ]
Park, Myung-Kwan [2 ]
Song, Sanghoun [1 ]
Affiliations
[1] Korea Univ, Dept Linguist, 145 Anam Ro, Seoul 02841, South Korea
[2] Dongguk Univ, Dept English, 30,1 Gil, Seoul 04620, South Korea
Funding
National Research Foundation, Singapore;
Keywords
deep learning; BERT; syntactic judgment; minimal pair; contrast;
DOI
10.17250/khisli.38.2.202106.008
Chinese Library Classification
H [Language, Writing];
Subject Classification Code
05;
Abstract
The present work assesses the feasibility of using deep learning as a tool for investigating syntactic phenomena. To this end, the study addresses three research questions: (i) whether deep learning can detect syntactically inappropriate constructions, (ii) whether deep learning's acceptability judgments are accountable, and (iii) whether deep learning's acceptability judgments resemble human judgments. As a proxy for a deep learning language model, the study uses BERT. The test data comprise syntactically contrasted pairs of English sentences drawn from three existing test suites. The first consists of 196 grammatical-ungrammatical minimal pairs from DeKeyser (2000). The second consists of examples from four published syntax textbooks, excerpted from Warstadt et al. (2019). The last is extracted from Sprouse et al. (2013), which collects examples reported in the theoretical linguistics journal Linguistic Inquiry. Two BERT models, BERT-base and BERT-large, judge the acceptability of items in the test suites using surprisal as the evaluation metric; surprisal measures how 'surprised' a model is when it encounters a word in a sequence of words, i.e., a sentence. The results are analyzed within two frameworks: directionality and repulsion. The directionality results reveal that both versions of BERT are overall competent at distinguishing ungrammatical sentences from grammatical ones. The statistical results for both repulsion and directionality also show that the two variants of BERT do not differ significantly. Regarding repulsion, correct and incorrect judgments differ significantly. Additionally, repulsion on the first test suite, drawn from items designed to test learners' grammaticality judgments, is higher than on the other test suites, which are drawn from syntax textbooks and the published literature. The study also compares BERT's acceptability judgments with the magnitude estimation results reported in Sprouse et al. (2013) to examine whether deep learning's syntactic knowledge is akin to human knowledge. Error analyses of incorrectly judged items reveal that there are some syntactic constructions that the two BERTs have trouble learning, indicating that BERT's acceptability judgments are not randomly distributed.
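
The abstract describes scoring sentences with surprisal, i.e., how unexpected a word is given its context (roughly -log P(word | context)). The paper's own implementation is not reproduced here; the following is only a minimal sketch of masked-language-model surprisal scoring, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, with a made-up minimal pair for illustration. The authors' exact scoring procedure and their directionality and repulsion measures may differ in detail.

    # Sketch only: per-token surprisal from a masked language model,
    # summed over a sentence (pseudo-log-likelihood-style scoring).
    import torch
    from transformers import BertTokenizer, BertForMaskedLM

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")
    model.eval()

    def sentence_surprisal(sentence):
        # Tokenize; input_ids includes [CLS] ... [SEP].
        enc = tokenizer(sentence, return_tensors="pt")
        input_ids = enc["input_ids"][0]
        total = 0.0
        # Mask one token at a time and score it in context.
        for pos in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
            masked = input_ids.clone()
            true_id = masked[pos].item()
            masked[pos] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(input_ids=masked.unsqueeze(0)).logits
            log_probs = torch.log_softmax(logits[0, pos], dim=-1)
            total -= log_probs[true_id].item()  # surprisal = -log P(token | context)
        return total

    # For a minimal pair, the ungrammatical member should receive higher surprisal.
    good = sentence_surprisal("The key to the cabinets is on the table.")
    bad = sentence_surprisal("The key to the cabinets are on the table.")
    print(good < bad)

Comparing the summed surprisals of the two members of a minimal pair yields the kind of directional grammatical-versus-ungrammatical judgment analyzed in the abstract.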
Pages: 395-424
Number of pages: 30
Related Papers
50 records in total
  • [2] Learning phonemes without minimal pairs
    Maye, J
    Gerken, L
    PROCEEDINGS OF THE 24TH ANNUAL BOSTON UNIVERSITY CONFERENCE ON LANGUAGE DEVELOPMENT, VOLS 1 AND 2, 2000: 522-533
  • [3] Data convergence in syntactic theory and the role of sentence pairs
    Juzek, Tom S.
    Haeussler, Jana
    ZEITSCHRIFT FUR SPRACHWISSENSCHAFT, 2020, 39 (02): 109-147
  • [4] Syntactic Structure from Deep Learning
    Linzen, Tal
    Baroni, Marco
    ANNUAL REVIEW OF LINGUISTICS, 2021, 7: 195-212
  • [5] Cross-Situational Learning of Minimal Word Pairs
    Escudero, Paola
    Mulak, Karen E.
    Vlach, Haley A.
    COGNITIVE SCIENCE, 2016, 40 (02): 455-465
  • [6] A Comparative Study on Various Deep Learning Techniques for Thai NLP Lexical and Syntactic Tasks on Noisy Data
    Jettakul, Amarin
    Thamjarat, Chavisa
    Liaowongphuthorn, Kawin
    Udomcharoenchaikit, Can
    Vateekul, Peerapon
    Boonkwan, Prachya
    2018 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER SCIENCE AND SOFTWARE ENGINEERING (JCSSE), 2018: 199-204
  • [7] Hybrid Deep Reinforcement Learning for Pairs Trading
    Kim, Sang-Ho
    Park, Deog-Yeong
    Lee, Ki-Hoon
    APPLIED SCIENCES-BASEL, 2022, 12 (03)
  • [8] The perception of voicing contrast in assimilation contexts in minimal pairs: evidence from Hungarian
    Barkanyi, Zsuzsanna
    Kiss, Zoltan G.
    ACTA LINGUISTICA ACADEMICA, 2021, 68 (1-2): 207-229
  • [9] Deep Reinforcement Learning for Syntactic Error Repair in Student Programs
    Gupta, Rahul
    Kanade, Aditya
    Shevade, Shirish
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019: 930-937
  • [10] Perceptual learning in contrast discrimination and the (minimal) role of context
    Yu, C
    Klein, SA
    Levi, DM
    JOURNAL OF VISION, 2004, 4 (03): 169-182