RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model

Cited: 1
Authors
Lu, Yao [1 ]
Liu, Shang [1 ]
Zhang, Qijun [1 ]
Xie, Zhiyao [1 ]
Institutions
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ASP-DAC58780.2024.10473904
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Inspired by the recent success of large language models (LLMs) such as ChatGPT, researchers have started to explore the adoption of LLMs for agile hardware design, for example, generating design RTL from natural-language instructions. However, in existing works, the target designs are all relatively simple, small in scale, and proposed by the authors themselves, making a fair comparison among different LLM solutions challenging. In addition, many prior works focus only on design correctness, without evaluating the quality of the generated design RTL. In this work, we propose an open-source benchmark named RTLLM for generating design RTL from natural-language instructions. To systematically evaluate the auto-generated design RTL, we summarize three progressive goals: the syntax goal, the functionality goal, and the design quality goal. The benchmark can automatically provide a quantitative evaluation of any given LLM-based solution. Furthermore, we propose an easy-to-use yet surprisingly effective prompt engineering technique named self-planning, which significantly boosts the performance of GPT-3.5 on our proposed benchmark.
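The abstract describes self-planning only at a high level. A minimal sketch of what such a two-stage prompting flow could look like is given below, assuming the technique means asking the model to first outline the design and then generate RTL conditioned on its own plan; all function names, prompt wording, and the `llm` callable are illustrative assumptions, not the paper's exact prompts.

```python
# Hedged sketch of a two-stage "self-planning" prompt flow for RTL generation.
# All names and prompt texts here are hypothetical, not taken from the paper.

def build_planning_prompt(spec: str) -> str:
    """Stage 1: ask the model to outline the design before writing any code."""
    return (
        "You are a hardware design assistant.\n"
        f"Design specification:\n{spec}\n\n"
        "Before writing any code, list the sub-modules, signals, and "
        "state transitions needed to implement this design."
    )

def build_generation_prompt(spec: str, plan: str) -> str:
    """Stage 2: feed the model its own plan back and request the RTL."""
    return (
        f"Design specification:\n{spec}\n\n"
        f"Your implementation plan:\n{plan}\n\n"
        "Following this plan, write complete, synthesizable Verilog."
    )

def self_planning(spec: str, llm) -> str:
    """Run both stages; `llm` is any callable mapping a prompt string to a reply."""
    plan = llm(build_planning_prompt(spec))       # model drafts its own plan
    return llm(build_generation_prompt(spec, plan))  # model follows the plan
```

The `llm` parameter is deliberately abstract (any prompt-to-string callable), so the same flow can wrap an API client or a local model without changing the prompting logic.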
Pages: 722 - 727
Page count: 6
Related Papers
50 total
  • [1] SCCL: An open-source SystemC to RTL translator
    Wu, Zhuanhao
    Gokhale, Maya
    Lloyd, Scott
    Patel, Hiren
    2023 IEEE 31ST ANNUAL INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES, FCCM, 2023, : 23 - 33
  • [2] Servicing open-source large language models for oncology
    Ray, Partha Pratim
    ONCOLOGIST, 2024,
  • [3] Challenges in Building An Open-source Flow from RTL to Bundled-Data Design
    Zhang, Yang
    Cheng, Huimei
    Chen, Dake
    Fu, Huayu
    Agarwal, Shikhanshu
    Lin, Mark
    Beerel, Peter A.
    2018 24TH IEEE INTERNATIONAL SYMPOSIUM ON ASYNCHRONOUS CIRCUITS AND SYSTEMS (ASYNC), 2018, : 26 - 27
  • [4] The design of an open-source carbonate reservoir model
    Gomes, Jorge Costa
    Geiger, Sebastian
    Arnold, Daniel
    PETROLEUM GEOSCIENCE, 2022, 28 (03)
  • [5] A tutorial on open-source large language models for behavioral science
    Hussain, Zak
    Binz, Marcel
    Mata, Rui
    Wulff, Dirk U.
    BEHAVIOR RESEARCH METHODS, 2024, : 8214 - 8237
  • [6] An open-source fine-tuned large language model for radiological impression generation: a multi-reader performance study
    Serapio, Adrian
    Chaudhari, Gunvant
    Savage, Cody
    Lee, Yoo Jin
    Vella, Maya
    Sridhar, Shravan
    Schroeder, Jamie Lee
    Liu, Jonathan
    Yala, Adam
    Sohn, Jae Ho
    BMC MEDICAL IMAGING, 24 (1)
  • [7] Open-Source Bitstream Generation
    Soni, Ritesh Kumar
    Steiner, Neil
    French, Matthew
    2013 IEEE 21ST ANNUAL INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES (FCCM), 2013, : 105 - 112
  • [8] Preliminary Systematic Review of Open-Source Large Language Models in Education
    Lin, Michael Pin-Chuan
    Chang, Daniel
    Hall, Sarah
    Jhajj, Gaganpreet
    GENERATIVE INTELLIGENCE AND INTELLIGENT TUTORING SYSTEMS, PT I, ITS 2024, 2024, 14798 : 68 - 77
  • [9] Design and Verification of an open-source SFU model for GPGPUs
    Rodriguez Condia, Josie E.
    Guerrero-Balaguera, Juan-David
    Moreno-Manrique, Cristhian-Fernando
    Sonza Reorda, Matteo
    2020 17TH BIENNIAL BALTIC ELECTRONICS CONFERENCE (BEC), 2020,
  • [10] μBench: An Open-Source Factory of Benchmark Microservice Applications
    Detti, Andrea
    Funari, Ludovico
    Petrucci, Luca
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (03) : 968 - 980