A generic LSTM neural network architecture to infer heterogeneous model transformations

Cited by: 13
Authors
Burgueño, Loli [1,2]
Cabot, Jordi [1,3]
Li, Shuai [2]
Gérard, Sébastien [2]
Affiliations
[1] Open Univ Catalonia, IN3, Barcelona, Spain
[2] Univ Paris Saclay, CEA, Inst LIST, Gif Sur Yvette, France
[3] ICREA, Barcelona, Spain
Source
SOFTWARE AND SYSTEMS MODELING | 2022, Vol. 21, No. 1
Keywords
Model manipulation; Code generation; Model transformation; Artificial intelligence; Machine learning; Neural networks
DOI
10.1007/s10270-021-00893-y
CLC classification
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Models capture relevant properties of systems. During the models' life-cycle, they are subjected to manipulations with different goals, such as managing software evolution, performing analysis, increasing developers' productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of these transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc.; (ii) model-to-text transformations for code generation; and (iii) text-to-model transformations for reverse engineering. These operations are usually implemented manually, using general-purpose languages such as Java or domain-specific languages (DSLs) such as ATL or Acceleo. Even when using such DSLs, transformations are still time-consuming and error-prone. We propose using advances in artificial intelligence techniques to learn these manipulation operations on models and automate the process, freeing the developer from building specific pieces of code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. Our architecture comprises an encoder-decoder long short-term memory (LSTM) network with an attention mechanism. It is fed with pairs of input-output examples and, once trained, automatically produces the expected output for a given input. We present the architecture and illustrate the feasibility and potential of our approach through its application to two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks are able to faithfully learn how to perform these tasks, as long as enough data are provided and no contradictory examples are given.
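For readers unfamiliar with the architecture the abstract names, the following is a minimal illustrative sketch of an encoder-decoder LSTM with dot-product attention over token-id sequences (which stand in for serialized input/output models). All names, dimensions, and weights here are hypothetical and untrained; this is not the authors' implementation, only the general mechanism.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b
    H = h.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H])  # input/forget/output gates
    g = np.tanh(z[3*H:])                                   # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def encode(tokens, E, W, U, b, H):
    """Encoder pass: embed each input token; keep all hidden states for attention."""
    h, c = np.zeros(H), np.zeros(H)
    states = []
    for t in tokens:
        h, c = lstm_step(E[t], h, c, W, U, b)
        states.append(h)
    return np.stack(states), h, c

def attend(query, enc_states):
    """Dot-product attention: softmax-weighted mix of encoder hidden states."""
    scores = enc_states @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ enc_states

def decode(enc_states, h, c, E, W, U, b, V, max_len=10, bos=0, eos=1):
    """Greedy decoder: feed back the previous token plus an attention context."""
    out, tok = [], bos
    for _ in range(max_len):
        ctx = attend(h, enc_states)
        h, c = lstm_step(np.concatenate([E[tok], ctx]), h, c, W, U, b)
        tok = int(np.argmax(V @ h))
        if tok == eos:
            break
        out.append(tok)
    return out

# Toy setup with random (untrained) weights and made-up dimensions.
rng = np.random.default_rng(0)
vocab, D, H = 10, 8, 16
E = rng.normal(size=(vocab, D))                 # shared token embeddings
enc_W, enc_U, enc_b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
dec_W, dec_U, dec_b = rng.normal(size=(4*H, D+H)), rng.normal(size=(4*H, H)), np.zeros(4*H)
V = rng.normal(size=(vocab, H))                 # projection from hidden state to vocab

states, h, c = encode([2, 5, 7, 3], E, enc_W, enc_U, enc_b, H)
pred = decode(states, h, c, E, dec_W, dec_U, dec_b, V)
print(pred)  # an arbitrary token sequence, since the weights are untrained
```

In the paper's setting, the weights would be trained on pairs of serialized input-output models so that, given a fresh input sequence, the decoder emits the transformed model or generated code.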
Pages: 139-156 (18 pages)