Probing Linguistic Systematicity

Times Cited: 0
Authors
Goodwin, Emily [1 ,5 ]
Sinha, Koustuv [2 ,3 ,5 ]
O'Donnell, Timothy J. [1 ,4 ,5 ]
Affiliations
[1] McGill Univ, Dept Linguist, Montreal, PQ, Canada
[2] McGill Univ, Sch Comp Sci, Montreal, PQ, Canada
[3] Facebook AI Res FAIR, Montreal, PQ, Canada
[4] Mila, Canada CIFAR AI Chair, Montreal, PQ, Canada
[5] Quebec Artificial Intelligence Inst Mila, Montreal, PQ, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords: (none listed)
DOI: not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, there has been much interest in the question of whether deep natural language understanding models exhibit systematicity: generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear. There is accumulating evidence that neural models often generalize non-systematically. We examined the notion of systematicity from a linguistic perspective, defining a set of probes and a set of metrics to measure systematic behaviour. We also identified ways in which network architectures can generalize non-systematically, and discussed why such forms of generalization may be unsatisfying. As a case study, we performed a series of experiments in the setting of natural language inference (NLI), demonstrating that some NLU systems achieve high overall performance despite being non-systematic.
Pages: 1958-1969
Page count: 12