Melting contestation: insurance fairness and machine learning

Cited: 0
Authors
Barry, Laurence [1 ]
Charpentier, Arthur [2 ]
Affiliations
[1] Fdn Inst Europlace Finance, Chaire PARI ENSAE Sci Po, Pl Bourse, F-75002 Paris, France
[2] Univ Quebec Montreal UQAM, 201, Ave President Kennedy, Montreal, PQ H2X 3Y7, Canada
Keywords
Insurance ethics; Actuarial fairness; Algorithmic fairness; Machine learning biases; Insurance discrimination
DOI
10.1007/s10676-023-09720-y
Chinese Library Classification (CLC)
B82 [Ethics (Moral Philosophy)]
Abstract
With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of the discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that they were organized around three types of bias: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against; these are thus the main sources of dispute. The lens of this typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that despite utopian claims, social stereotypes continue to plague data and thus threaten to unconsciously reproduce these discriminations in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which can only be made visible through public debate and contestation. Such debate is less likely if the right to explanation is realized through personalized algorithms, which could reinforce an individualized perception of the social that blocks rather than encourages collective mobilization.
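The "mathematical indicators of non-bias" mentioned in the abstract refer to group-fairness metrics from the algorithmic-fairness literature. As a minimal illustrative sketch (not taken from the paper itself), the Python snippet below computes one widely used indicator, demographic parity, i.e. the gap in positive-decision rates between a protected and an unprotected group; the variable names and toy data are hypothetical.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-decision rates between two groups.

    y_pred: binary decisions (0/1), e.g. priced as "high risk" or not
    group:  binary protected-group membership (0/1)
    A value of 0 means both groups receive positive decisions at the
    same rate; larger values indicate a disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Toy data: the model flags 2/5 of group 0 and 4/5 of group 1.
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.4

A gap of 0 would satisfy demographic parity. Note that computing such an indicator already presupposes that the relevant protected groups have been identified, which is precisely the assumption the paper argues can only emerge from public debate and contestation.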
Pages: 13
Related Papers
50 records in total
  • [21] Articulation Work and Tinkering for Fairness in Machine Learning
    Fahimi, Miriam
    Russo, Mayra
    Scott, Kristen M.
    Vidal, Maria-Esther
    Berendt, Bettina
    Kinder-Kurlanda, Katharina
    Proceedings of the ACM on Human-Computer Interaction, 2024, 8 (CSCW2)
  • [22] Verifying Individual Fairness in Machine Learning Models
    John, Philips George
    Vijaykeerthy, Deepak
    Saha, Diptikalyan
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 749 - 758
  • [23] Fairness of Machine Learning Algorithms for the Black Community
    Kiemde, Sountongnoma Martial Anicet
    Kora, Ahmed Dooguy
    PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL SYMPOSIUM ON TECHNOLOGY AND SOCIETY (ISTAS), 2021, : 373 - 377
  • [24] Impact of Imputation Strategies on Fairness in Machine Learning
    Caton, Simon
    Malisetty, Saiteja
    Haas, Christian
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2022, 74 : 1011 - 1035
  • [25] Normative Principles for Evaluating Fairness in Machine Learning
    Leben, Derek
    PROCEEDINGS OF THE 3RD AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY AIES 2020, 2020, : 86 - 92
  • [27] Automatic Fairness Testing of Machine Learning Models
    Sharma, Arnab
    Wehrheim, Heike
    TESTING SOFTWARE AND SYSTEMS, ICTSS 2020, 2020, 12543 : 255 - 271
  • [28] On The Impact of Machine Learning Randomness on Group Fairness
    Ganesh, Prakhar
    Chang, Hongyan
    Strobel, Martin
    Shokri, Reza
    PROCEEDINGS OF THE 6TH ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2023, 2023, : 1789 - 1800
  • [29] AI Fairness-From Machine Learning to Federated Learning
    Patnaik, Lalit Mohan
    Wang, Wenfeng
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2024, 139 (02): : 1203 - 1215
  • [30] Fairness and Machine Fairness
    Castro, Clinton
    O'Brien, David
    Schwan, Ben
    AIES '21: PROCEEDINGS OF THE 2021 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2021, : 446 - 446