Regulating ChatGPT and other Large Generative AI Models

Cited by: 72
Authors
Hacker, Philipp [1 ]
Engel, Andreas [2 ]
Mauer, Marco [3 ]
Affiliations
[1] European University Viadrina, European New School of Digital Studies, Frankfurt (Oder), Germany
[2] Heidelberg University, Heidelberg, Germany
[3] Humboldt University of Berlin, Germany
Keywords
Artificial intelligence; Opportunities; Challenges; Market
DOI
10.1145/3593013.3594067
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. The paper argues for three layers of obligations concerning LGAIMs (minimum standards for all LGAIMs; high-risk obligations for high-risk use cases; collaborations along the AI value chain). In general, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA's content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers.
Pages: 1112-1123
Page count: 12
Related Papers
50 records in total (items [21]-[30] shown)
  • [21] The Challenges for Regulating Medical Use of ChatGPT and Other Large Language Models (vol 330, pg 315, 2023)
    Maher, Toby M.
    Ford, Paul
    Wijsenbeek, Marlies S.
    JAMA - Journal of the American Medical Association, 2023, 330 (10)
  • [22] ChatGPT and Generative AI Tools: Theft of Intellectual Labor?
    Strowel, Alain
    IIC - International Review of Intellectual Property and Competition Law, 2023, 54 (4): 491-494
  • [23] ChatGPT: tackle the growing carbon footprint of generative AI
    An, Jiafu
    Ding, Wenzhi
    Lin, Chen
    Nature, 2023, 615 (7953): 586
  • [24] ChatGPT Enigma: Navigating the Jumble of Surgical Generative AI
    Ray, Partha Pratim
    Indian Journal of Surgery, 2024: 229-230
  • [26] Strategies for integrating ChatGPT and generative AI into clinical studies
    Lee, Jeong-Moo
    Blood Research, 2024, 59 (1)
  • [27] Guest Editorial Education in the World of ChatGPT and Generative AI
    Tan, Seng Chee
    Wijekumar, Kay
    Hong, Huaqing
    Olmanson, Justin
    Twomey, Robert
    Sinha, Tanmay
    IEEE Transactions on Learning Technologies, 2024, 17: 2062-2064
  • [29] ChatGPT: The transformative influence of generative AI on science and healthcare
    Varghese, Julian
    Chapiro, Julius
    Journal of Hepatology, 2024, 80 (6): 977-980
  • [30] A Generative Artificial Intelligence Using Multilingual Large Language Models for ChatGPT Applications
    Tuan, Nguyen Trung
    Moore, Philip
    Thanh, Dat Ha Vu
    Pham, Hai Van
    Applied Sciences-Basel, 2024, 14 (7)