Comparing Semantic Graph Representations of Source Code: The Case of Automatic Feedback on Programming Assignments

Cited by: 1
Authors
Paiva, Jose Carlos [1 ,2 ]
Leal, Jose Paulo [1 ,2 ]
Figueira, Alvaro [1 ,2 ]
Affiliations
[1] CRACS INESC TEC, Porto, Portugal
[2] DCC FCUP Porto, Porto, Portugal
Keywords
semantic representation; source code; graph; source code analysis; automated assessment; programming; similarity
DOI
10.2298/CSIS230615004P
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Static source code analysis techniques are gaining relevance in the automated assessment of programming assignments, as they can provide less rigorous evaluation and more comprehensive, formative feedback. These techniques focus on aspects of the source code itself rather than requiring actual code execution. To this end, the syntactic and semantic information encoded in the textual data is typically represented internally as graphs, after parsing and other preprocessing stages. Static automated assessment techniques therefore draw inferences from intermediate representations to determine the correctness of a solution and derive feedback. Consequently, choosing the most effective semantic graph representation of source code for the specific task is critical, as it impacts the techniques' accuracy, outcome, and execution time. This paper provides a thorough comparison of the most widespread semantic graph representations for the automated assessment of programming assignments, including usage examples, facets, and costs for each representation. A benchmark has been conducted to assess their cost, using the Abstract Syntax Tree (AST) as a baseline. The results demonstrate that the Code Property Graph (CPG) is the most feature-rich representation, but also the largest and most space-consuming (about 33% larger than the AST).
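As a minimal illustration of the baseline representation discussed in the abstract, Python's standard `ast` module parses source text into an Abstract Syntax Tree; the node count gives a rough proxy for the space cost that the paper's benchmark compares across representations. The example program is illustrative and not taken from the paper:

```python
import ast

# Parse a small program into its Abstract Syntax Tree (AST),
# the baseline representation used in the paper's benchmark.
source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

# Walk the tree and count its nodes; the total is a rough
# proxy for the space a representation consumes.
node_count = sum(1 for _ in ast.walk(tree))
print(type(tree).__name__)  # Module
print(node_count)
```

Richer representations such as the CPG add control-flow and data-flow edges on top of this tree, which is why they cost more space than the plain AST.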
Pages: 117-142
Number of pages: 26
Related Papers
36 records in total
  • [31] Effect of flipped classroom and automatic source code evaluation in a CS1 programming course according to the Kirkpatrick evaluation model
    Mosquera, Jose Miguel Llanos
    Suarez, Carlos Giovanny Hidalgo
    Guerrero, Victor Andres Bucheli
    EDUCATION AND INFORMATION TECHNOLOGIES, 2023, 28 (10) : 13235 - 13252
  • [32] A New Method to Increase Feedback for Programming Tasks During Automatic Evaluation Test Case Annotations in ProgCont System
    Biro, Piroska
    Kadek, Tamas
    Kosa, Mark
    Panovics, Janos
    ACTA POLYTECHNICA HUNGARICA, 2022, 19 (09) : 103 - 116
  • [33] Maintainability of Automatic Acceptance Tests for Web Applications - A Case Study Comparing Two Approaches to Organizing Code of Test Cases
    Sadaj, Aleksander
    Ochodek, Miroslaw
    Kopczynska, Sylwia
    Nawrocki, Jerzy
    SOFSEM 2020: THEORY AND PRACTICE OF COMPUTER SCIENCE, 2020, 12011 : 454 - 466
  • [34] A Clustering-Based Computational Model to Group Students With Similar Programming Skills From Automatic Source Code Analysis Using Novel Features
    Silva, Davi Bernardo
    Carvalho, Deborah Ribeiro
    Silla Jr, Carlos N.
    IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, 2024, 17 : 428 - 444
  • [35] How effective is machine translation on low-resource code-switching? A case study comparing human and automatic metrics
    Li Nguyen
    Bryant, Christopher
    Mayeux, Oliver
    Yuan, Zheng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 14186 - 14195
  • [36] Automatic Documentation of [Mined] Feature Implementations from Source Code Elements and Use-Case Diagrams with the REVPLINE Approach
    Al-Msie'deen, R.
    Huchard, M.
    Seriai, A. -D.
    INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING, 2014, 24 (10) : 1413 - 1438