CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code

Cited: 0

Authors
Zhou, Shuyan [1 ]
Alon, Uri [1 ,2 ]
Agarwal, Sumit [1 ]
Neubig, Graham [1 ]
Affiliations
[1] Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213 USA
[2] Google DeepMind, London, England
Keywords

DOI
Not available

CLC Classification Number
TP18 [Theory of Artificial Intelligence]

Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Since the rise of neural natural-language-to-code models (NL -> Code) that can generate long expressions and statements rather than a single next token, one of the major problems has been reliably evaluating their generated output. In this paper, we propose CodeBERTScore: an evaluation metric for code generation, which builds on BERTScore (Zhang et al., 2020). Instead of encoding only the generated tokens as in BERTScore, CodeBERTScore also encodes the natural language input preceding the generated code, thus modeling the consistency between the generated code and its given natural language context as well. We perform an extensive evaluation of CodeBERTScore across four programming languages. We find that CodeBERTScore achieves a higher correlation with human preference and with functional correctness than all existing metrics. That is, generated code that receives a higher score from CodeBERTScore is more likely to be preferred by humans, as well as to function correctly when executed. We release five language-specific pretrained models to use with our publicly available code. Our language-specific models have been downloaded more than 1,000,000 times from the Hugging Face Hub.
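The scoring mechanism the abstract describes can be illustrated with a minimal sketch of BERTScore-style greedy soft-matching over token embeddings. This is a toy illustration, not the authors' implementation: it uses random NumPy vectors in place of embeddings from a pretrained code model, and the function name `bertscore_style` is hypothetical.

```python
import numpy as np

def bertscore_style(cand_emb: np.ndarray, ref_emb: np.ndarray):
    """Greedy soft-matching of contextual token embeddings, BERTScore-style.

    cand_emb: (m, d) embeddings of the candidate (generated) code tokens.
    ref_emb:  (n, d) embeddings of the reference code tokens.
    Returns (precision, recall, f1) based on pairwise cosine similarity.
    """
    def unit(x):
        # L2-normalize each row so dot products become cosine similarities
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    sim = unit(cand_emb) @ unit(ref_emb).T  # (m, n) cosine similarity matrix
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference match
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate match
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Per the abstract, CodeBERTScore's key difference is that the encoder is run over the natural-language context concatenated with the code, while only the code-token positions enter the similarity matrix; the context thus shapes the embeddings without being scored itself.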
Pages: 13921-13937 (17 pages)
Related Papers
Entries [41]-[50] of 50
  • [41] Code generation from declarative models of robotics solvers
    Frigerio, Marco
    Scioni, Enea
    Pazderski, Pawel Piotr
    Bruyninckx, Herman
    2019 THIRD IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING (IRC 2019), 2019, : 369 - 372
  • [42] Code Generation as a Dual Task of Code Summarization
    Wei, Bolin
    Li, Ge
    Xia, Xin
    Fu, Zhiyi
    Jin, Zhi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [43] Sound code generation from communicating hybrid models
    Hur, Y
    Kim, J
    Lee, I
    Choi, JY
    HYBRID SYSTEMS: COMPUTATION AND CONTROL, PROCEEDINGS, 2004, 2993 : 432 - 447
  • [44] Refactoring Sequence Diagrams for Code Generation in UML Models
    Chitra, M. T.
    Sherly, Elizabeth
    2014 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATIONS AND INFORMATICS (ICACCI), 2014, : 208 - 212
  • [45] Code Generation from Supervised Code Embeddings
    Hu, Han
    Chen, Qiuyuan
    Liu, Zhaoyi
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT IV, 2019, 1142 : 388 - 396
  • [46] Who Wrote this Code? Watermarking for Code Generation
    Lee, Taehyun
    Hong, Seokhee
    Ahn, Jaewoo
    Hong, Ilgee
    Lee, Hwaran
    Yun, Sangdoo
    Shin, Jamin
    Kim, Gunhee
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 4890 - 4911
  • [47] Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models
    Vaithilingam, Priyan
    Zhang, Tianyi
    Glassman, Elena L.
    EXTENDED ABSTRACTS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2022, 2022,
  • [48] L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models
    Ni, Ansong
    Yin, Pengcheng
    Zhao, Yilun
    Riddell, Martin
    Feng, Troy
    Shen, Rui
    Yin, Stephen
    Liu, Ye
    Yavuz, Semih
    Xiong, Caiming
    Joty, Shafiq
    Zhou, Yingbo
    Radev, Dragomir
    Cohan, Arman
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2024, 12 : 1311 - 1329
  • [49] Poisoned source code detection in code models
    Ghannoum, Ehab
    Ghafari, Mohammad
    JOURNAL OF SYSTEMS AND SOFTWARE, 2025, 226
  • [50] Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation
    Liu, Jiawei
    Xia, Chunqiu Steven
    Wang, Yuyao
    Zhang, Lingming
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,