RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model

Cited by: 1
Authors
Lu, Yao [1 ]
Liu, Shang [1 ]
Zhang, Qijun [1 ]
Xie, Zhiyao [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/ASP-DAC58780.2024.10473904
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Inspired by the recent success of large language models (LLMs) such as ChatGPT, researchers have started to explore the adoption of LLMs for agile hardware design, for example generating design RTL from natural-language instructions. However, in existing works the target designs are all relatively simple, small in scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works focus only on design correctness, without evaluating the quality of the generated design RTL. In this work, we propose an open-source benchmark named RTLLM for generating design RTL from natural-language instructions. To systematically evaluate the auto-generated design RTL, we define three progressive goals: the syntax goal, the functionality goal, and the design quality goal. The benchmark automatically provides a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which significantly boosts the performance of GPT-3.5 on our benchmark.
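The self-planning technique described in the abstract is a two-step prompting flow: the model is first asked to plan the design, and its own plan is then fed back to guide RTL generation. Since the paper's exact prompt templates are not reproduced in this record, the sketch below uses hypothetical prompt wording and a caller-supplied `ask_llm` callable standing in for a real chat-completion API.

```python
# Hedged sketch of the self-planning prompt flow: step 1 asks the LLM
# to plan the design; step 2 feeds the plan back to guide RTL
# generation. The prompt wording and the `ask_llm` callable are
# illustrative assumptions, not the paper's actual templates.

def make_plan_prompt(spec: str) -> str:
    """Step 1: ask the model to outline the design before coding."""
    return (
        "You are a hardware designer. Read the specification below and "
        "write a step-by-step implementation plan (ports, FSM states, "
        "datapath) before writing any code.\n\nSpecification:\n" + spec
    )

def make_rtl_prompt(spec: str, plan: str) -> str:
    """Step 2: generate Verilog guided by the model's own plan."""
    return (
        "Following your plan, write synthesizable Verilog RTL for the "
        "specification.\n\nSpecification:\n" + spec + "\n\nPlan:\n" + plan
    )

def self_planning_generate(spec: str, ask_llm) -> str:
    """Two LLM calls: plan first, then plan-conditioned RTL generation."""
    plan = ask_llm(make_plan_prompt(spec))
    return ask_llm(make_rtl_prompt(spec, plan))
```

With a stub in place of `ask_llm`, the flow can be exercised offline; in practice the generated RTL would then be checked against the benchmark's three goals (e.g. a syntax check with a Verilog compiler, functional simulation against a testbench, and synthesis-based quality metrics).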
Pages: 722 - 727
Page count: 6