Towards Safer Large Language Models (LLMs)

Cited by: 0
Authors
Lawrence, Carolin [1 ]
Bifulco, Roberto [1 ]
Gashteovski, Kiril [1 ]
Hung, Chia-Chien [1 ]
Ben Rim, Wiem [1 ]
Shaker, Ammar [1 ]
Oyamada, Masafumi [2 ]
Sadamasa, Kunihiko [2 ]
Enomoto, Masafumi [2 ]
Takeoka, Kunihiro [2 ]
Affiliations
[1] NEC Laboratories Europe, Germany
[2] Data Science Laboratories
Source
NEC Technical Journal | 2024 / Vol. 17 / No. 2
Keywords
Computational linguistics; Risk assessment
DOI
Not available
Abstract
Large Language Models (LLMs) are revolutionizing our world. They have impressive textual capabilities that will fundamentally change how human users can interact with intelligent systems. Nonetheless, they still have a series of limitations that are important to keep in mind when working with LLMs. We explore how these limitations can be addressed from two different angles. First, we look at options that are already available today, which include (1) assessing the risk of a use case, (2) prompting an LLM to deliver explanations and (3) encasing LLMs in a human-centred system design. Second, we look at technologies that we are currently developing, which will be able to (1) more accurately assess the quality of an LLM for a high-risk domain, (2) explain the generated LLM output by linking it to the input and (3) fact-check the generated LLM output against external trustworthy sources. © 2024 NEC Mediaproducts. All rights reserved.
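The abstract's third developing technology, fact-checking generated output against external trustworthy sources, can be illustrated with a minimal toy sketch. This is not the authors' system: the hand-picked `TRUSTED_FACTS` list stands in for a real retrieval backend, and plain Jaccard token overlap stands in for a proper entailment or verification model.

```python
# Toy fact-checking sketch: accept a generated claim only if it is
# sufficiently similar to some statement in a trusted knowledge base.
# Jaccard overlap is a crude stand-in for real claim verification.

TRUSTED_FACTS = [
    "Paris is the capital of France",
    "Water boils at 100 degrees Celsius at sea level",
]

def fact_check(claim: str, facts: list[str], threshold: float = 0.5) -> bool:
    """Return True if the claim's token overlap with any trusted fact
    meets the threshold (Jaccard similarity on lowercased tokens)."""
    claim_tokens = set(claim.lower().split())
    best = 0.0
    for fact in facts:
        fact_tokens = set(fact.lower().split())
        union = claim_tokens | fact_tokens
        if union:
            best = max(best, len(claim_tokens & fact_tokens) / len(union))
    return best >= threshold

# A supported claim passes; an unsupported one is flagged.
print(fact_check("Paris is the capital of France", TRUSTED_FACTS))  # True
print(fact_check("The moon is made of cheese", TRUSTED_FACTS))      # False
```

In a real deployment the trusted facts would come from retrieval over vetted sources, and the similarity test would be replaced by a model that judges whether the source actually supports the claim.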
Pages: 64-74