Regulating ChatGPT and other Large Generative AI Models

Cited by: 72
Authors
Hacker, Philipp [1 ]
Engel, Andreas [2 ]
Mauer, Marco [3 ]
Affiliations
[1] European Univ Viadrina, European New Sch Digital Studies, Frankfurt, Germany
[2] Heidelberg Univ, Heidelberg, Germany
[3] Humboldt Univ, Berlin, Germany
Keywords
Artificial Intelligence; Opportunities; Challenges; Market
DOI
10.1145/3593013.3594067
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. The paper argues for three layers of obligations concerning LGAIMs (minimum standards for all LGAIMs; high-risk obligations for high-risk use cases; collaborations along the AI value chain). In general, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA's content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers.
Pages: 1112-1123
Page count: 12
Related Papers
50 items total
  • [1] AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models
    Floridi, Luciano
    Philosophy & Technology, 2023, 36 (1)
  • [2] ChatGPT, Large Language Models, and Generative AI as Future Augments of Surgical Cancer Care
    Kothari, A. N.
    Annals of Surgical Oncology, 2023, 30 (06): 3174-3176
  • [3] The Challenges for Regulating Medical Use of ChatGPT and Other Large Language Models
    Minssen, Timo
    Vayena, Effy
    Cohen, I. Glenn
    JAMA: Journal of the American Medical Association, 2023, 330 (04): 315-316
  • [4] Using ChatGPT and Other Forms of Generative AI in Systematic Reviews: Comment
    Daungsupawong, Hinpetch
    Wiwanitkit, Viroj
    Journal of Medical Imaging and Radiation Sciences, 2024, 55 (02): 364-365
  • [5] Communicating the Cultural Other: Trust and Bias in Generative AI and Large Language Models
    Jenks, Christopher J.
    Applied Linguistics Review, 2024
  • [6] Generative AI and Simulation Modeling: How Should You (Not) Use Large Language Models Like ChatGPT
    Akhavan, Ali
    Jalali, Mohammad
    System Dynamics Review, 2024, 40 (03)
  • [7] Foundation Models, Generative AI, and Large Language Models
    Ross, Angela
    McGrow, Kathleen
    Zhi, Degui
    Rasmy, Laila
    CIN: Computers, Informatics, Nursing, 2024, 42 (05): 377-387
  • [8] Using ChatGPT and Other Forms of Generative AI in Systematic Reviews: Challenges and Opportunities
    Hossain, M. Mahbub
    Journal of Medical Imaging and Radiation Sciences, 2024, 55 (01): 11-12
  • [9] Generative Artificial Intelligence Through ChatGPT and Other Large Language Models in Ophthalmology: Clinical Applications and Challenges
    Tan, Ting Fang
    Thirunavukarasu, Arun James
    Campbell, J. Peter
    Keane, Pearse A.
    Pasquale, Louis R.
    Abramoff, Michael D.
    Kalpathy-Cramer, Jayashree
    Lum, Flora
    Kim, Judy E.
    Baxter, Sally L.
    Ting, Daniel Shu Wei
    Ophthalmology Science, 2023, 3 (04)