On the Learnability of Programming Language Semantics

Cited by: 0
Authors
Ghica, Dan R. [1 ]
Alyahya, Khulood [2 ,3 ]
Affiliations
[1] Univ Birmingham, Birmingham, W Midlands, England
[2] Univ Exeter, Exeter, Devon, England
[3] King Saud Univ, Riyadh, Saudi Arabia
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
RECURRENT NEURAL-NETWORKS; 3RD-ORDER IDEALIZED ALGOL; NOVELTY DETECTION; FULL ABSTRACTION; GAME SEMANTICS;
DOI
10.4204/EPTCS.261.7
CLC number
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
Game semantics is a powerful method of semantic analysis for programming languages. It gives mathematically accurate ("fully abstract") models for a wide variety of programming languages. Game-semantic models are combinatorial characterisations of all possible interactions between a term and its syntactic context. Because such interactions can be concretely represented as sets of sequences, it is natural to ask whether they can be learned from examples. Specifically, we use long short-term memory neural networks (LSTMs), a technique which has proved effective in learning natural languages for automatic translation and text synthesis, to learn game-semantic models of sequential and concurrent versions of Idealised Algol (IA), which are algorithmically complex yet can be concisely described. We measure how accurate the learned models are as a function of the degree of the term and the number of free variables involved. Finally, we show how the learned model can be used to perform latent semantic analysis between concurrent and sequential Idealised Algol.
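The abstract describes representing game-semantic interactions as sets of sequences and learning them with LSTMs. As a rough, hypothetical illustration of that kind of setup (the record above contains no code; the move vocabulary, model names, toy data, and hyperparameters below are assumptions for illustration, not taken from the paper), here is a minimal PyTorch sketch that trains a small LSTM to classify toy interaction sequences:

```python
# Hypothetical sketch: learning to classify game-semantic interaction
# sequences (plays) as plausible or implausible with a small LSTM.
# Vocabulary, data, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

# Toy vocabulary of game-semantic moves (assumed encoding).
VOCAB = ["<pad>", "q", "a", "run", "done", "read", "write"]
TOK = {m: i for i, m in enumerate(VOCAB)}

def encode(play, max_len=8):
    """Map a play (list of move names) to a fixed-length tensor of token ids."""
    ids = [TOK[m] for m in play][:max_len]
    ids += [TOK["<pad>"]] * (max_len - len(ids))
    return torch.tensor(ids)

class PlayClassifier(nn.Module):
    """Embed moves, run an LSTM over the play, classify from its final state."""
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=TOK["<pad>"])
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # plausible vs. implausible play

    def forward(self, x):
        emb = self.embed(x)          # (batch, seq, embed_dim)
        _, (h, _) = self.lstm(emb)   # h: (num_layers, batch, hidden_dim)
        return self.head(h[-1])      # logits over the two classes

# A handful of made-up labelled plays; in the paper's setting these would be
# traces drawn from game-semantic models of Idealised Algol terms.
plays = [(["q", "run", "done", "a"], 1),
         (["q", "read", "a"], 1),
         (["a", "q"], 0),
         (["done", "run"], 0)]

model = PlayClassifier(len(VOCAB))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

xs = torch.stack([encode(p) for p, _ in plays])
ys = torch.tensor([label for _, label in plays])

for epoch in range(200):  # tiny toy training loop
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```

This only shows the general shape of an LSTM sequence learner; the paper's actual training data, model sizes, and evaluation (accuracy as a function of term degree and number of free variables) are described in the full text.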
Pages: 57-75
Number of pages: 19