Proxy Discrimination in the Age of Artificial Intelligence and Big Data

Cited by: 0
Authors
Prince, Anya E. R. [1 ]
Schwarcz, Daniel [2 ]
Affiliations
[1] Univ Iowa, Coll Law, Iowa City, IA 52242 USA
[2] Univ Minnesota, Law Sch, Minneapolis, MN 55455 USA
Keywords
GENETIC DISCRIMINATION; DISPARATE IMPACT; PATIENT PROTECTION; HEALTH-INSURANCE; ANTIDISCRIMINATION; RISK; SEX; EQUALITY; FAIR; ACT;
DOI
Not available
Chinese Library Classification (CLC)
D9 [Law]; DF [Law]
Subject classification code
0301
Abstract
Big data and Artificial Intelligence ("AI") are revolutionizing the ways in which firms, governments, and employers classify individuals. Surprisingly, however, one of the most important threats to anti-discrimination regimes posed by this revolution is largely unexplored or misunderstood in the extant literature. This is the risk that modern algorithms will result in "proxy discrimination." Proxy discrimination is a particularly pernicious subset of disparate impact. Like all forms of disparate impact, it involves a facially neutral practice that disproportionately harms members of a protected class. But a practice producing a disparate impact only amounts to proxy discrimination when the usefulness to the discriminator of the facially neutral practice derives, at least in part, from the very fact that it produces a disparate impact. Historically, this occurred when a firm intentionally sought to discriminate against members of a protected class by relying on a proxy for class membership, such as zip code. However, proxy discrimination need not be intentional when membership in a protected class is predictive of a discriminator's facially neutral goal, making discrimination "rational." In these cases, firms may unwittingly proxy discriminate, knowing only that a facially neutral practice produces desirable outcomes. This Article argues that AI and big data are game changers when it comes to this risk of unintentional, but "rational," proxy discrimination. AIs armed with big data are inherently structured to engage in proxy discrimination whenever they are deprived of information about membership in a legally suspect class whose predictive power cannot be measured more directly by non-suspect data available to the AI. Simply denying AIs access to the most intuitive proxies for such predictive but suspect characteristics does little to thwart this process; instead it simply causes AIs to locate less intuitive proxies. For these reasons, as AIs become even smarter and big data becomes even bigger, proxy discrimination will represent an increasingly fundamental challenge to anti-discrimination regimes that seek to limit discrimination based on potentially predictive traits. Numerous anti-discrimination regimes do just that, limiting discrimination based on factors like preexisting conditions, genetics, disability, sex, and even race. This Article offers a menu of potential strategies for combatting this risk of proxy discrimination by AIs, including prohibiting the use of non-approved types of discrimination, mandating the collection and disclosure of data about impacted individuals' membership in legally protected classes, and requiring firms to employ statistical models that isolate only the predictive power of non-suspect variables.
Pages: 1257-1318
Page count: 62
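The abstract describes how an AI that is denied a protected attribute can still proxy discriminate through correlated, facially neutral features, and how stripping the most intuitive proxies merely shifts the model toward subtler ones. The sketch below is an editorial illustration of that dynamic on synthetic data, not code from the Article; it assumes numpy and scikit-learn, and all feature names and the data-generating process are hypothetical.

```python
# Editorial illustration (hypothetical data and feature names): a classifier
# that never sees a protected attribute can still proxy-discriminate when a
# facially neutral feature is useful mainly because it tracks that attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected class membership (e.g., a genetic trait); never given to the model.
protected = rng.binomial(1, 0.3, n)

# An "intuitive" proxy (think zip code), strongly correlated with the class.
intuitive_proxy = protected + rng.normal(0, 0.3, n)

# A "less intuitive" proxy (think a browsing or purchasing pattern).
subtle_proxy = protected + rng.normal(0, 0.8, n)

# A genuinely neutral feature, independent of the protected class.
neutral = rng.normal(0, 1.0, n)

# The facially neutral goal (e.g., expected claims cost) depends in part on the
# protected trait itself, which is what makes proxy discrimination "rational".
logit = 1.5 * protected + 1.0 * neutral - 1.0
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def positive_rate_gap(features, label):
    """Fit a model WITHOUT the protected attribute and report the difference
    in predicted-positive rates between the two protected groups."""
    preds = LogisticRegression(max_iter=1000).fit(features, outcome).predict(features)
    gap = preds[protected == 1].mean() - preds[protected == 0].mean()
    print(f"{label}: predicted-positive-rate gap = {gap:.2f}")

# 1) Both proxies available: the model effectively reconstructs the trait.
positive_rate_gap(np.column_stack([intuitive_proxy, subtle_proxy, neutral]),
                  "intuitive + subtle proxies")

# 2) Dropping the intuitive proxy: a disparity persists because the model
#    now leans on the less intuitive proxy instead.
positive_rate_gap(np.column_stack([subtle_proxy, neutral]),
                  "intuitive proxy removed")

# 3) Only when no correlated feature remains does the gap largely disappear.
positive_rate_gap(np.column_stack([neutral]),
                  "no correlated features")
```

In this synthetic setup the gap in case (2) should stay clearly above zero, mirroring the abstract's claim that withholding the most obvious proxies does little to prevent proxy discrimination so long as any correlated feature remains available to the model.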