CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models

被引:0
|
作者
Hajipour, Hossein [1 ]
Hassler, Keno [1 ]
Holz, Thorsten [1 ]
Schoenherr, Lea [1 ]
Fritz, Mario [1 ]
机构
[1] CISPA Helmholtz Ctr Informat Secur, Saarbrucken, Germany
关键词
STATIC ANALYSIS; GENERATION;
D O I
10.1109/SaTML59370.2024.00040
中图分类号
TP [自动化技术、计算机技术];
学科分类号
0812 ;
摘要
Large language models (LLMs) for automatic code generation have recently achieved breakthroughs in several programming tasks. Their advances in competition-level programming problems have made them an essential pillar of AI-assisted pair programming, and tools such as GitHub Copilot have emerged as part of the daily programming workflow used by millions of developers. Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities. This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure. While these models have been extensively evaluated for their ability to produce functionally correct programs, there remains a lack of comprehensive investigations and benchmarks addressing the security aspects of these models. In this work, we propose a method to systematically study the security issues of code language models to assess their susceptibility to generating vulnerable code. To this end, we introduce the first approach to automatically find generated code that contains vulnerabilities in black-box code generation models. This involves proposing a novel few-shot prompting approach. We evaluate the effectiveness of our approach by examining code language models in generating high-risk security weaknesses. Furthermore, we use our method to create a collection of diverse non-secure prompts for various vulnerability scenarios. This dataset serves as a benchmark to evaluate and compare the security weaknesses of code language models.
引用
收藏
页码:684 / 709
页数:26
相关论文
共 35 条
  • [21] SELFCHECKGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
    Manakul, Potsawee
    Liusie, Adian
    Gales, Mark J. F.
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 9004 - 9017
  • [22] PLMmark: A Secure and Robust Black-Box Watermarking Framework for Pre-trained Language Models
    Li, Peixuan
    Cheng, Pengzhou
    Li, Fangqi
    Du, Wei
    Zhao, Haodong
    Liu, Gongshen
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 14991 - 14999
  • [23] Exploring the vulnerability of black-box adversarial attack on prompt-based learning in language models
    Zihao Tan
    Qingliang Chen
    Wenbin Zhu
    Yongjian Huang
    Chen Liang
    Neural Computing and Applications, 2025, 37 (3) : 1457 - 1473
  • [24] Black-Box Attack-Based Security Evaluation Framework for Credit Card Fraud Detection Models
    Xiao, Jin
    Tian, Yuhang
    Jia, Yanlin
    Jiang, Xiaoyi
    Yu, Lean
    Wang, Shouyang
    INFORMS JOURNAL ON COMPUTING, 2023, 35 (05) : 986 - 1001
  • [25] Exploiting Pre-Trained Language Models for Black-Box Attack against Knowledge Graph Embeddings
    Yang, Guangqian
    Zhang, Lei
    Liu, Yi
    Xie, Hongtao
    Mao, Zhendong
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 19 (01)
  • [26] LTM: Scalable and Black-Box Similarity-Based Test Suite Minimization Based on Language Models
    Pan, Rongqi
    Ghaleb, Taher A.
    Briand, Lionel C.
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2024, 50 (11) : 3053 - 3070
  • [27] DPDLLM: A Black-box Framework for Detecting Pre-training Data from Large Language Models
    Zhou, Baohang
    Wang, Zezhong
    Wang, Lingzhi
    Wang, Hongru
    Zhang, Ying
    Song, Kehui
    Su, Xuhui
    Wong, Kam-Fai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 644 - 653
  • [28] Robust and semantic-faithful post-hoc watermarking of text generated by black-box language models
    Hao, Jifei
    Qiang, Jipeng
    Zhu, Yi
    Li, Yun
    Yuan, Yunhao
    Hu, Xiaocheng
    Ouyang, Xiaoye
    FRONTIERS OF COMPUTER SCIENCE, 2025, 19 (09)
  • [29] Robotic environmental state recognition with pre-trained vision-language models and black-box optimization
    Kawaharazuka, Kento
    Obinata, Yoshiki
    Kanazawa, Naoaki
    Okada, Kei
    Inaba, Masayuki
    ADVANCED ROBOTICS, 2024, 38 (18) : 1255 - 1264
  • [30] Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs
    Markowitz, Elan
    Ramakrishnan, Anil
    Dhamal, Jwala
    Mehrabi, Ninareh
    Peris, Charith
    Gupta, Rahul
    Chang, Kai-Wei
    Galstyan, Aram
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 12302 - 12319