Structuring Natural Language Requirements with Large Language Models

Times Cited: 0
Authors
Norheim, Johannes J. [1 ]
Rebentisch, Eric [2 ]
Affiliations
[1] MIT, Dept Aeronaut & Astronaut, Cambridge, MA 02139 USA
[2] MIT, Sociotech Syst Res Ctr, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Keywords
Large Language Models; Natural Language Processing; Requirements Translation; Requirements Modeling
DOI
10.1109/REW61692.2024.00013
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Structured requirements have long been proposed to improve requirement quality while facilitating downstream applications such as modeling. In practice, however, adopting structures such as templates and semi-formal logic demands additional training and expertise, which has limited their wider adoption. In this paper, we investigate the application of natural language processing (NLP) to translate natural language requirements into structured representations. Existing methodologies have been reported only for specific formalisms or requirement types. In this research preview, we go beyond the state of the art by generalizing to generic templates and semi-formal logic. Specifically, we investigate the application of a state-of-the-art pre-trained large language model (LLM), GPT-4. We present preliminary evidence that this technology can translate natural language requirements into a target template or semi-formal language from as little as one translation example, provided the example captures the same structure as the requirement being translated. We observe this behavior across three formalisms: a requirements template structure (EARS), a custom minimalistic requirements modeling language for system performance requirements, and a semi-formal structure for linear temporal logic (LTL). Finally, we propose a rigorous approach for investigating how well these preliminary observations generalize.
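
As an illustrative sketch of the one-shot translation idea summarized above, the Python snippet below prompts GPT-4 through the OpenAI chat completions API with a single requirement-to-EARS translation example and asks it to restructure a new requirement in the same way. The prompt wording, the example requirements, and the helper name to_ears are assumptions made for illustration; they are not taken from the paper.

# Minimal sketch of one-shot requirement structuring with GPT-4.
# Prompt text, example requirement, and EARS phrasing are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ONE_SHOT_EXAMPLE = (
    "Requirement: The pump shuts down if the tank pressure exceeds 5 bar.\n"
    "EARS: When the tank pressure exceeds 5 bar, the pump shall shut down."
)

def to_ears(requirement: str) -> str:
    """Translate a free-form requirement into an EARS-style template sentence."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output is preferable for structuring tasks
        messages=[
            {"role": "system",
             "content": "Rewrite natural language requirements into the EARS template. "
                        "Follow the structure of the example exactly."},
            {"role": "user",
             "content": f"{ONE_SHOT_EXAMPLE}\n\nRequirement: {requirement}\nEARS:"},
        ],
    )
    return response.choices[0].message.content.strip()

print(to_ears("The autopilot disengages whenever the pilot moves the control column."))

Setting the temperature to 0 keeps the structuring output repeatable, which matters when the translated requirements feed downstream modeling tools; the same pattern would apply to the other two formalisms by swapping in a different one-shot example.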
Pages: 68-71
Page count: 4
Related Papers
50 records in total
  • [41] Vargas, Diego Collarana; Katsamanis, Nassos. Large Language Models. ERCIM NEWS, 2024, (136): 12-13.
  • [42] Cerf, Vinton G. Large Language Models. COMMUNICATIONS OF THE ACM, 2023, 66 (08): 7.
  • [43] Han, Hojae; Kim, Jaejin; Yoo, Jaeseok; Lee, Youngwon; Hwang, Seung-won. ARCHCODE: Incorporating Software Requirements in Code Generation with Large Language Models. PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024: 13520-13552.
  • [45] Ray, Archana Tikayat; Cole, Bjorn F.; Fischer, Olivia Pinon J.; Bhat, Anirudh Prabhakara; White, Ryan T.; Mavris, Dimitri N. Agile Methodology for the Standardization of Engineering Requirements Using Large Language Models. SYSTEMS, 2023, 11 (07).
  • [46] Couder, Juan Ortiz; Gomez, Dawson; Ochoa, Omar. Requirements Verification Through the Analysis of Source Code by Large Language Models. SOUTHEASTCON 2024, 2024: 75-80.
  • [47] Luitel, Dipeeka; Hassani, Shabnam; Sabetzadeh, Mehrdad. Improving requirements completeness: automated assistance through large language models. REQUIREMENTS ENGINEERING, 2024, 29 (01): 73-95.
  • [49] Youssef, Alaa; Stein, Samantha; Clapp, Justin; Magnus, David. The Importance of Understanding Language in Large Language Models. AMERICAN JOURNAL OF BIOETHICS, 2023, 23 (10): 6-7.
  • [50] Mahowald, Kyle; Ivanova, Anna A.; Blank, Idan A.; Kanwisher, Nancy; Tenenbaum, Joshua B.; Fedorenko, Evelina. Dissociating language and thought in large language models. TRENDS IN COGNITIVE SCIENCES, 2024, 28 (06): 517-540.