Between risk mitigation and labour rights enforcement: Assessing the transatlantic race to govern AI-driven decision-making through a comparative lens

Cited: 0
Authors
Aloisi, Antonio [1 ]
De Stefano, Valerio [2 ]
Affiliations
[1] IE Univ Law Sch, Madrid, Spain
[2] York Univ, Osgoode Hall Law Sch, Canada Res Innovat Law & Societ, Toronto, ON, Canada
Keywords
artificial intelligence; risk-based approach; algorithmic management; platform work; automated decision-making; data protection; impact assessment; comparative analysis; DATA PROTECTION; LAW;
DOI
10.1177/20319525231167982
CLC Classification
D9 [Law]; DF [Law]
Subject Classification Code
0301
Abstract
In this article, we provide an overview of efforts to regulate the various phases of the artificial intelligence (AI) life cycle. In doing so, we examine whether, and if so to what extent, highly fragmented legal frameworks are able to provide safeguards capable of preventing the dangers that stem from AI- and algorithm-driven organisational practices. We critically analyse related developments at the European Union (EU) level, namely the General Data Protection Regulation, the draft AI Regulation, and the proposal for a Directive on improving working conditions in platform work. We also consider bills and regulations proposed or adopted in the United States and Canada via a transatlantic comparative approach, underlining analogies and variations between EU and North American attitudes towards the risk assessment and management of AI systems. We aim to answer the following questions: Is the widely adopted risk-based approach fit for purpose? Is it consistent with the actual enforcement of fundamental rights at work, such as privacy, human dignity, equality and collective rights? To answer these questions, in section 2 we unpack the various, often ambiguous, facets of the notion(s) of 'risk', that is, the common denominator of the EU and North American legal instruments. Here, we determine that a scalable, decentralised framework is not appropriate for ensuring the enforcement of constitutional labour-related rights. In addition to presenting the key provisions of existing schemes in the EU and North America, in section 3 we disentangle the consistencies and tensions between the frameworks that regulate AI and those that constrain how it must be handled in specific contexts, such as work environments and platform-orchestrated arrangements. Paradoxically, the frenzied race to regulate AI-driven decision-making could exacerbate the current legal uncertainty and pave the way for regulatory arbitrage. Such a scenario would slow technological innovation and egregiously undermine labour rights. Thus, in section 4 we advocate for the adoption of a dedicated legal instrument at the supranational level to govern technologies that manage people in workplaces. Given the high stakes involved, we conclude by stressing the salience of a multi-stakeholder AI governance framework.
Pages: 283-307
Page count: 25