Fairness in Survival Analysis with Distributionally Robust Optimization

Cited by: 0
Authors
Hu, Shu [1 ]
Chen, George H. [2 ]
Affiliations
[1] Purdue Univ, Dept Comp & Informat Technol, Indianapolis, IN 46202 USA
[2] Carnegie Mellon Univ, Heinz Coll Informat Syst & Publ Policy, Pittsburgh, PA 15213 USA
Keywords
survival analysis; fairness; distributionally robust optimization
DOI: not available
CLC number: TP [Automation & Computer Technology]
Discipline code: 0812
Abstract
We propose a general approach for encouraging fairness in survival analysis models that is based on minimizing a worst-case error across all subpopulations that are "large enough" (occurring with at least a user-specified probability threshold). This approach can be used to convert a wide variety of existing survival analysis models into ones that simultaneously encourage fairness, without requiring the user to specify which attributes or features to treat as sensitive in the training loss function. From a technical standpoint, our approach applies recent methodological developments in distributionally robust optimization (DRO) to survival analysis. The complication is that existing DRO theory uses a training loss function that decomposes across contributions of individual data points, i.e., any term that shows up in the loss function depends only on a single training point. This decomposition does not hold for commonly used survival loss functions, including those of the standard Cox proportional hazards model, its deep neural network variants, and many other recently developed survival analysis models whose loss functions involve ranking or similarity score calculations. We address this technical hurdle using a sample splitting strategy. We demonstrate our sample splitting DRO approach by using it to create fair versions of a diverse set of existing survival analysis models, including the classical Cox model (and its deep neural network variant DeepSurv), the discrete-time model DeepHit, and the neural ODE model SODEN. We also establish a finite-sample theoretical guarantee showing what our sample splitting DRO loss converges to. Specifically for the Cox model, we further derive an exact DRO approach that does not use sample splitting.
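To make the "worst-case error across all large-enough subpopulations" idea concrete: for a subpopulation threshold alpha, the worst-case average loss over any subgroup containing at least an alpha fraction of the data equals the average of the largest per-sample losses (the conditional value-at-risk of the loss at level alpha), since adding any lower-loss point to a subgroup can only decrease its average. The sketch below, which assumes per-sample losses are already available (which, as the abstract notes, requires a device such as sample splitting for Cox-style ranking losses), is an illustration of this standard CVaR formulation rather than the paper's exact objective:

```python
import numpy as np

def cvar_dro_loss(per_sample_losses, alpha):
    """Worst-case average loss over any subpopulation comprising at
    least an alpha fraction of the data.

    Equivalent to the conditional value-at-risk (CVaR) of the
    per-sample losses at level alpha: the mean of the ceil(alpha * n)
    largest losses, since the adversary's worst subgroup keeps only
    the highest-loss points it is allowed to keep.
    """
    losses = np.asarray(per_sample_losses, dtype=float)
    n = losses.size
    # Smallest subgroup size permitted by the threshold alpha.
    k = int(np.ceil(alpha * n))
    # Average of the k largest per-sample losses.
    worst = np.sort(losses)[::-1][:k]
    return float(worst.mean())
```

With alpha = 1 this recovers the ordinary empirical average loss; shrinking alpha focuses training on the worst-off subgroup, which is the mechanism by which the DRO objective encourages fairness without naming sensitive attributes.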
For all the survival models that we convert into DRO variants, we show that the DRO variants often score better on recently established fairness metrics (without incurring a significant drop in accuracy) compared to existing survival analysis fairness regularization techniques, including ones which directly use sensitive demographic information in their training loss functions.
Pages: 1-85 (85 pages)