Enhancing user prompt confidentiality in Large Language Models through advanced differential encryption

Cited by: 2
Authors
Gupta, Brij B. [1 ,2 ,3 ,4 ,5 ]
Gaurav, Akshat [6 ]
Arya, Varsha [7 ,8 ]
Alhalabi, Wadee [9 ]
Alsalman, Dheyaaldin [10 ]
Vijayakumar, P. [11 ]
Affiliations
[1] Asia Univ, Int Ctr AI & Cyber Secur Res & Innovat CCRI, Taichung, Taiwan
[2] Asia Univ, Dept Comp Sci & Informat Engn, Taichung, Taiwan
[3] Kyung Hee Univ, 26 Kyungheedae Ro, Seoul, South Korea
[4] Symbiosis Int Univ, Symbiosis Ctr Informat Technol SCIT, Pune, India
[5] Univ Petr & Energy Studies UPES, Ctr Interdisciplinary Res, Dehra Dun, India
[6] Ronin Inst, Montclair, NJ USA
[7] Asia Univ, Dept Business Adm, Taichung, Taiwan
[8] Lebanese Amer Univ, Dept Elect & Comp Engn, Beirut 1102, Lebanon
[9] King Abdulaziz Univ, Dept Comp Sci, Immers Virtual Real Res Grp, Jeddah, Saudi Arabia
[10] Dar Al Hekma Univ, Sch Engn Comp & Informat, Jeddah, Saudi Arabia
[11] Univ Coll Engn Tindivanam, Dept Comp Sci & Engn, Tindivanam 604001, Tamil Nadu, India
Keywords
Cryptographic privacy; Large Language Models; Data anonymization; Secure AI framework; Personal data protection; AUTHENTICATION PROTOCOL; DESIGN;
DOI
10.1016/j.compeleceng.2024.109215
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
In the era of artificial intelligence (AI) advancements heralded by Large Language Models (LLMs) like GPT-3, the capacity to parse and generate human-like text brings to light substantial privacy concerns. These arise notably from LLMs' reliance on vast datasets often laden with personal information, underscoring the potential for inadvertent memorization and disclosure of sensitive data. Addressing these pivotal privacy concerns, our research introduces a novel twofold approach aimed at bolstering the confidentiality and security of user data in LLM applications. Firstly, we deploy advanced cryptographic techniques, incorporating bespoke encryption and hashing protocols, to preprocess user data. This strategy effectively anonymizes personal identifiers prior to their processing by LLMs, directly tackling the challenges of sensitive information exposure. Concurrently, our methodology encompasses a secure mutual authentication protocol utilizing lightweight cryptographic measures. This ensures that system interactions are strictly reserved for authenticated users, thereby enhancing overall data security. Collectively, our approach not only preserves the utility of data for AI tasks but also fortifies the privacy framework surrounding LLMs, significantly reducing the likelihood of privacy breaches and steering AI development towards a more secure and ethically grounded future.
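The identifier-anonymization preprocessing described in the abstract can be illustrated with a minimal sketch. Note that the regex patterns, HMAC key handling, token format, and function names below are illustrative assumptions for exposition, not the paper's actual protocol: personal identifiers in a user prompt are replaced with keyed-hash tokens before the prompt reaches the LLM, and a server-side mapping allows the originals to be restored in the response.

```python
import hmac
import hashlib
import re

# Server-side secret for keyed hashing; in practice this would be a
# securely stored key, not a literal in source code.
SECRET_KEY = b"server-side-secret"

# Toy patterns for two common identifier types (email, US-style phone).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]?\d{3}[-.]?\d{4}\b"),
}

def pseudonymize(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace identifiers with keyed-hash tokens; return text + mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for match in set(pattern.findall(prompt)):
            digest = hmac.new(SECRET_KEY, match.encode(), hashlib.sha256)
            token = f"<{label}_{digest.hexdigest()[:8]}>"
            mapping[token] = match  # retained server-side for re-identification
            prompt = prompt.replace(match, token)
    return prompt, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert original identifiers into the LLM's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

Using HMAC rather than a plain hash means tokens are deterministic (the same email always maps to the same token, preserving referential consistency within a conversation) yet cannot be reversed or brute-forced by anyone without the server-side key.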
Pages: 13