Automatically Assessing Code Understandability: How Far Are We?

Citations: 0
Authors
Scalabrino, Simone [1 ]
Bavota, Gabriele [2 ]
Vendome, Christopher [3 ]
Linares-Vasquez, Mario [4 ]
Poshyvanyk, Denys [3 ]
Oliveto, Rocco [1 ]
Affiliations
[1] Univ Molise, Campobasso, Italy
[2] USI, Lugano, Switzerland
[3] Coll William & Mary, Williamsburg, VA USA
[4] Univ Los Andes, Bogota, Colombia
Source
PROCEEDINGS OF THE 2017 32ND IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE'17), 2017
Keywords
Software metrics; Code understandability; Empirical study; Negative result
DOI
Not available
CLC Classification Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Program understanding plays a pivotal role in software maintenance and evolution: a deep understanding of code is the stepping stone for most software-related activities, such as bug fixing or testing. Being able to measure the understandability of a piece of code might help in estimating the effort required for a maintenance activity, in comparing the quality of alternative implementations, or even in predicting bugs. Unfortunately, there are no existing metrics specifically designed to assess the understandability of a given code snippet. In this paper, we take a first step in this direction by studying the extent to which several types of metrics computed on code, documentation, and developers correlate with code understandability. To perform this investigation, we ran a study with 46 participants who were asked to understand eight code snippets each. We collected a total of 324 evaluations aimed at assessing the perceived understandability, the actual level of understanding, and the time needed to understand a code snippet. Our results demonstrate that none of the (existing and new) metrics we considered is able to capture code understandability, not even those assumed to assess quality attributes strongly related to it, such as code readability and complexity.
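The core analysis behind this result is a correlation test between each candidate metric and the measured level of understanding. A minimal sketch of that kind of analysis follows, assuming hypothetical per-snippet data; the metric values, scores, and variable names are illustrative and not the authors' dataset:

from scipy.stats import kendalltau

# Hypothetical per-snippet data (illustrative only, not the paper's dataset):
# a candidate metric (e.g., cyclomatic complexity) paired with a measured
# understandability proxy (e.g., fraction of verification questions
# answered correctly by participants for that snippet).
complexity = [3, 7, 2, 11, 5, 9, 4, 6]
understanding = [0.9, 0.5, 1.0, 0.3, 0.7, 0.4, 0.8, 0.6]

# Kendall's tau estimates the monotonic association between the metric and
# the observed level of understanding; a tau near 0 with a high p-value
# would mirror the paper's negative result (the metric fails to capture
# understandability).
tau, p_value = kendalltau(complexity, understanding)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")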
Pages: 417-427
Page count: 11
Related Papers
50 records in total
  • [1] Automatically Assessing Code Understandability
    Scalabrino, Simone
    Bavota, Gabriele
    Vendome, Christopher
    Linares-Vasquez, Mario
    Poshyvanyk, Denys
    Oliveto, Rocco
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2021, 47 (03) : 595 - 613
  • [2] "Automatically Assessing Code Understandability" Reanalyzed: Combined Metrics Matter
    Trockman, Asher
    Cates, Keenen
    Mozina, Mark
Nguyen, Tuan
    Kastner, Christian
    Vasilescu, Bogdan
    2018 IEEE/ACM 15TH INTERNATIONAL CONFERENCE ON MINING SOFTWARE REPOSITORIES (MSR), 2018, : 314 - 318
  • [3] Natural Language to Code: How Far Are We?
    Wang, Shangwen
    Geng, Mingyang
    Lin, Bo
    Sun, Zhensu
    Wen, Ming
    Liu, Yepang
    Li, Li
    Bissyande, Tegawende F.
    Mao, Xiaoguang
    PROCEEDINGS OF THE 31ST ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2023, 2023, : 375 - 387
  • [4] Automatically Generating Descriptive Texts in Logging Statements: How Far Are We?
    Liu, Xiaotong
    Jia, Tong
    Li, Ying
    Yu, Hao
    Yue, Yang
    Hou, Chuanjia
    PROGRAMMING LANGUAGES AND SYSTEMS, APLAS 2020, 2020, 12470 : 251 - 269
  • [5] Towards Automatically Addressing Self-Admitted Technical Debt: How Far Are We?
    Mastropaolo, Antonio
    Di Penta, Massimiliano
    Bavota, Gabriele
    2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 585 - 597
  • [6] Generation-based Code Review Automation: How Far Are We?
    Zhou, Xin
    Kim, Kisub
    Xu, Bowen
    Han, DongGyun
    He, Junda
    Lo, David
    2023 IEEE/ACM 31ST INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION, ICPC, 2023, : 215 - 226
  • [7] Automatically Recommend Code Updates: Are We There Yet?
    Liu, Yue
    Tantithamthavorn, Chakkrit
    Liu, Yonghui
    Thongtanunam, Patanamon
    Li, Li
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (08)
  • [8] How far can we aspire to consistency when assessing learning?
    Davis, Andrew
    ETHICS AND EDUCATION, 2013, 8 (03) : 217 - 228
  • [9] How far we've come; How far we have to go
    Garrett, M
    MAKING AND UNMAKING THE PROSPECTS FOR RHETORIC: SELECTED PAPERS FROM THE 1996 RHETORIC SOCIETY OF AMERICA CONFERENCE, 1997, : 43 - 48
  • [10] Automatically Assessing and Extending Code Coverage for NPM Packages
    Sun, Haiyang
    Rosa, Andrea
    Bonetta, Daniele
    Binder, Walter
    2021 IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATION OF SOFTWARE TEST (AST 2021), 2021, : 40 - 49