Mitigating Insecure Outputs in Large Language Models (LLMs): A Practical Educational Module

Cited by: 0
Authors
Barek, Md Abdul [1 ]
Rahman, Md Mostafizur [2 ]
Akter, Mst Shapna [1 ]
Riad, A. B. M. Kamrul Islam [1 ]
Rahman, Md Abdur [1 ]
Shahriar, Hossain [3 ]
Rahman, Akond [4 ]
Wu, Fan [5 ]
Affiliations
[1] Univ West Florida, Dept Intelligent Syst & Robot, Pensacola, FL 32514 USA
[2] Univ West Florida, Dept Cybersecur & Informat Technol, Pensacola, FL USA
[3] Univ West Florida, Ctr Cybersecur, Pensacola, FL USA
[4] Auburn Univ, Comp Sci & Software Engn, Auburn, AL USA
[5] Tuskegee Univ, Dept Comp Sci, Tuskegee, AL USA
Funding
U.S. National Science Foundation
Keywords
Large Language Models; Cybersecurity; Insecure Output; Sanitization; Authentic Learning;
DOI
10.1109/COMPSAC61105.2024.00389
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Large Language Models (LLMs) are capable of producing impressively useful output, and people increasingly rely on them because they are easy to access and deliver fast, high-quality results. However, using these results without appropriate scrutiny poses serious security risks, particularly when LLMs are integrated with other software, APIs, or plugins, because LLM outputs are highly dependent on the prompts they receive. It is therefore essential to carefully sanitize these outputs before passing them to other software environments. This paper is designed to teach students about the potential dangers of contaminated LLM output in the context of web development through pre-lab, hands-on, and post-lab experiences. The hands-on lab provides practical guidance on handling LLM vulnerabilities to keep applications safe, with real-world examples in Python. This approach aims to give students a deeper understanding of the precautions needed to secure software against the vulnerabilities introduced by LLM output.
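The paper's lab materials are not reproduced in this record; as a rough illustration of the kind of output sanitization the abstract describes, the following minimal Python sketch (the function name and example string are hypothetical, not taken from the paper) escapes HTML-special characters in an LLM response before it is rendered in a web page, so model-generated markup cannot execute as script.

```python
import html


def sanitize_llm_output(raw_output: str) -> str:
    """Escape HTML-special characters so untrusted LLM text cannot inject markup or scripts."""
    return html.escape(raw_output)


# Example: an LLM response containing a script tag is neutralized before rendering.
untrusted = '<script>alert("xss")</script>Here is your summary.'
print(sanitize_llm_output(untrusted))
# -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;Here is your summary.
```

In a real web application this escaping would typically be applied (or enforced by the templating engine) at the point where LLM output is inserted into a page, and complemented by validation before the output reaches APIs or plugins.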
Pages: 2424-2429
Page count: 6