CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models

被引:0
|
作者
Hajipour, Hossein [1 ]
Hassler, Keno [1 ]
Holz, Thorsten [1 ]
Schoenherr, Lea [1 ]
Fritz, Mario [1 ]
机构
[1] CISPA Helmholtz Ctr Informat Secur, Saarbrucken, Germany
关键词
STATIC ANALYSIS; GENERATION;
D O I
10.1109/SaTML59370.2024.00040
中图分类号
TP [自动化技术、计算机技术];
学科分类号
0812 ;
摘要
Large language models (LLMs) for automatic code generation have recently achieved breakthroughs in several programming tasks. Their advances in competition-level programming problems have made them an essential pillar of AI-assisted pair programming, and tools such as GitHub Copilot have emerged as part of the daily programming workflow used by millions of developers. Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities. This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure. While these models have been extensively evaluated for their ability to produce functionally correct programs, there remains a lack of comprehensive investigations and benchmarks addressing the security aspects of these models. In this work, we propose a method to systematically study the security issues of code language models to assess their susceptibility to generating vulnerable code. To this end, we introduce the first approach to automatically find generated code that contains vulnerabilities in black-box code generation models. This involves proposing a novel few-shot prompting approach. We evaluate the effectiveness of our approach by examining code language models in generating high-risk security weaknesses. Furthermore, we use our method to create a collection of diverse non-secure prompts for various vulnerability scenarios. This dataset serves as a benchmark to evaluate and compare the security weaknesses of code language models.
引用
收藏
页码:684 / 709
页数:26
相关论文
共 35 条
  • [31] B-AVIBench: Toward Evaluating the Robustness of Large Vision-Language Model on Black-Box Adversarial Visual-Instructions
    Zhang, Hao
    Shao, Wenqi
    Liu, Hong
    Ma, Yongqiang
    Luo, Ping
    Qiao, Yu
    Zheng, Nanning
    Zhang, Kaipeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 1434 - 1446
  • [32] ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling
    Zhang, Lingxi
    Yu, Yue
    Wang, Kuan
    Zhang, Chao
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 3708 - 3719
  • [33] PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering via Pluggable Reward-Driven Contextual Adapter
    Yang, Haoyan
    Li, Zhitao
    Zhang, Yong
    Wang, Jianzong
    Cheng, Ning
    Li, Ming
    Xiao, Jing
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 5364 - 5375
  • [34] Continuous Object State Recognition for Cooking Robots Using Pre-Trained Vision-Language Models and Black-Box Optimization
    Kawaharazuka, Kento
    Kanazawa, Naoaki
    Obinata, Yoshiki
    Okada, Kei
    Inaba, Masayuki
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (05) : 4059 - 4066
  • [35] SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
    Zhang, Jiaxin
    Lie, Zhuohang
    Das, Kamalika
    Malin, Bradley
    Kumar, Sricharan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 15445 - 15458