Boosting Metamorphic Relation Prediction via Code Representation Learning: An Empirical Study

Cited: 0
Authors
Zheng, Xuedan [1 ]
Jiang, Mingyue [1 ]
Zhou, Zhi Quan [2]
Affiliations
[1] Zhejiang Sci Tech Univ, Sch Comp Sci & Technol, Hangzhou, Zhejiang, Peoples R China
[2] Univ Wollongong, Sch Comp & Informat Technol, Wollongong, NSW, Australia
Source
Keywords
deep learning; predicting metamorphic relation; source code representation;
DOI
10.1002/stvr.1889
CLC Classification Number
TP31 [Computer Software];
Subject Classification Code
081202; 0835;
Abstract
Metamorphic testing (MT) is an effective testing technique with a broad range of applications. A key task in MT is the identification of metamorphic relations (MRs), which are the fundamental mechanism of MT and are critical to its automation. Prior studies have proposed approaches for predicting MRs (PMR). The main idea behind these approaches is to represent program source code through manually designed code features and then apply machine-learning classifiers to automatically predict whether a specific MR can be applied to the target program. However, the manual procedure of selecting and extracting code features is costly, and it is difficult to obtain features that represent source code comprehensively. To overcome this limitation, this study explores and evaluates the effectiveness of code representation learning techniques for PMR. By applying neural code representation models that automatically map program source code to code vectors, the PMR procedure can be boosted with learned code representations. We construct 32 PMR instances by combining 8 code representation models with 4 typical classification models and conduct an extensive empirical study of their effectiveness for MR prediction. Our findings show that code representation learning can positively contribute to the prediction of MRs and provide insights into the practical use of code representation models in this context. These findings can help researchers and practitioners gain a deeper understanding of the strengths of code representation learning for PMR and thereby pave the way for future research on deriving or extracting MRs from program source code.
Pages: 21
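The pipeline described in the abstract (encode a program with a learned code representation model, then feed the resulting vector to a classifier that predicts whether a given MR applies) can be sketched as below. This is a minimal, illustrative sketch only: CodeBERT and an SVM are assumed stand-ins and are not necessarily among the 8 representation models and 4 classifiers evaluated in the paper, and the toy programs and labels are hypothetical.

import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVC

# Load a pretrained code representation model (illustrative choice: CodeBERT).
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(source_code: str):
    """Map a program's source code to a fixed-length code vector."""
    inputs = tokenizer(source_code, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    # Use the first ([CLS]) token's hidden state as the program-level representation.
    return outputs.last_hidden_state[:, 0, :].squeeze(0).numpy()

# Hypothetical training data: (program, label) pairs for ONE candidate MR,
# where 1 means the MR is expected to apply to the program and 0 means it is not.
train_programs = [
    "int add(int a, int b) { return a + b; }",
    "int sub(int a, int b) { return a - b; }",
]
train_labels = [1, 0]

# Train a binary classifier on the learned code vectors (illustrative choice: SVM).
X_train = [embed(p) for p in train_programs]
clf = SVC(kernel="rbf").fit(X_train, train_labels)

# Predict whether the MR applies to an unseen program.
print(clf.predict([embed("int mul(int a, int b) { return a * b; }")]))

In the setting the abstract describes, one such binary classifier would be trained per candidate MR, and the choice of representation model and classifier is the design space that the 32 PMR instances (8 representation models x 4 classifiers) explore.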