CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models

Cited by: 0
Authors
Hajipour, Hossein [1 ]
Hassler, Keno [1 ]
Holz, Thorsten [1 ]
Schoenherr, Lea [1 ]
Fritz, Mario [1 ]
Affiliations
[1] CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
Keywords
STATIC ANALYSIS; GENERATION
DOI
10.1109/SaTML59370.2024.00040
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Large language models (LLMs) for automatic code generation have recently achieved breakthroughs in several programming tasks. Their advances in competition-level programming problems have made them an essential pillar of AI-assisted pair programming, and tools such as GitHub Copilot have emerged as part of the daily programming workflow used by millions of developers. Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities. This unsanitized training data can cause the models to learn these vulnerabilities and propagate them during code generation. While these models have been extensively evaluated for their ability to produce functionally correct programs, there remains a lack of comprehensive investigations and benchmarks addressing their security aspects. In this work, we propose a method to systematically study the security issues of code language models and assess their susceptibility to generating vulnerable code. To this end, we introduce the first approach to automatically find vulnerable generated code for black-box code generation models, built around a novel few-shot prompting approach. We evaluate its effectiveness by examining how often code language models generate high-risk security weaknesses. Furthermore, we use our method to create a collection of diverse non-secure prompts covering various vulnerability scenarios. This dataset serves as a benchmark for evaluating and comparing the security weaknesses of code language models.
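To make the pipeline described in the abstract concrete, the following is a minimal sketch of the evaluation loop: sample completions of few-shot "non-secure prompts" from a black-box code model, then flag the targeted weakness with a static analyzer. All names here (NonSecurePrompt, complete_code, analyze, count_vulnerable) are hypothetical illustrations, not the paper's implementation; in practice the toy stand-ins would be replaced by a real model API and a real analyzer such as CodeQL.

from dataclasses import dataclass

@dataclass
class NonSecurePrompt:
    cwe: str          # targeted weakness, e.g. "CWE-089" (SQL injection)
    few_shot: str     # few-shot examples of vulnerable code
    code_prefix: str  # incomplete code the model is asked to finish

def complete_code(prompt: str, n_samples: int = 5) -> list[str]:
    """Hypothetical stand-in for sampling from a black-box code model's API."""
    return ['cursor.execute("SELECT * FROM users WHERE id = %s" % uid)'] * n_samples

def analyze(code: str, cwe: str) -> bool:
    """Hypothetical stand-in for a static analyzer (e.g., CodeQL queries)."""
    return cwe == "CWE-089" and "execute(" in code and "%" in code

def count_vulnerable(prompts: list[NonSecurePrompt]) -> dict[str, int]:
    """Count sampled completions per CWE that the analyzer flags as vulnerable."""
    hits: dict[str, int] = {}
    for p in prompts:
        for completion in complete_code(p.few_shot + p.code_prefix):
            if analyze(p.code_prefix + completion, p.cwe):
                hits[p.cwe] = hits.get(p.cwe, 0) + 1
    return hits

if __name__ == "__main__":
    prompt = NonSecurePrompt(
        cwe="CWE-089",
        few_shot="# examples of SQL queries built via string formatting\n",
        code_prefix="def get_user(cursor, uid):\n    return ",
    )
    print(count_vulnerable([prompt]))  # with the toy stand-ins: {'CWE-089': 5}

Counting analyzer hits per CWE over many sampled completions is what allows a benchmark of this kind to compare models on the rate of generated security weaknesses rather than on functional correctness alone.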
Pages: 684-709
Page count: 26
Related papers (35 total)
  • [1] 23 Security Risks in Black-Box Large Language Model Foundation Models
    McGraw, Gary
    Bonett, Richie
    Figueroa, Harold
    McMahon, Katie
    COMPUTER, 2024, 57 (04) : 160 - 164
  • [2] Evaluation of Black-Box Web Application Security Scanners in Detecting Injection Vulnerabilities
    Althunayyan, Muzun
    Saxena, Neetesh
    Li, Shancang
    Gope, Prosanta
    ELECTRONICS, 2022, 11 (13)
  • [3] SqliGPT: Evaluating and Utilizing Large Language Models for Automated SQL Injection Black-Box Detection
    Gui, Zhiwen
    Wang, Enze
    Deng, Binbin
    Zhang, Mingyuan
    Chen, Yitao
    Wei, Shengfei
    Xie, Wei
    Wang, Baosheng
    APPLIED SCIENCES-BASEL, 2024, 14 (16)
  • [4] Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities
    Rosenzweig, Julia
    Sicking, Joachim
    Houben, Sebastian
    Mock, Michael
    Akila, Maram
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 56 - 65
  • [5] SOFIA: An Automated Security Oracle for Black-Box Testing of SQL-Injection Vulnerabilities
    Ceccato, Mariano
    Nguyen, Cu D.
    Appelt, Dennis
    Briand, Lionel C.
    2016 31ST IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE), 2016, : 167 - 177
  • [6] Nonlinear system identification: finding structure in nonlinear black-box models
    Dreesen, Philippe
    Tiels, Koen
    Ishteva, Mariya
    Schoukens, Johan
    2017 IEEE 7TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP), 2017
  • [7] Open Sesame! Universal Black-Box Jailbreaking of Large Language Models
    Lapid, Raz
    Langberg, Ron
    Sipper, Moshe
    APPLIED SCIENCES-BASEL, 2024, 14 (16)
  • [8] TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models
    Xue, Jiaqi
    Zheng, Mengxin
    Hua, Ting
    Shen, Yilin
    Liu, Yepeng
    Boloni, Ladislau
    Lou, Qian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [9] A Black-Box Attack on Code Models via Representation Nearest Neighbor Search
    Zhang, Jie
    Ma, Wei
    Hui, Qiang
    Liu, Shangqing
    Xie, Xiaofei
    Le Traon, Yves
    Liu, Yang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9706 - 9716
  • [10] DIP: Dead code Insertion based Black-box Attack for Programming Language Model
    Na, CheolWon
    Choi, YunSeok
    Lee, Jee-Hyong
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 7777 - 7791