Learning Invariant Representations with Missing Data

Cited by: 0
Authors
Goldstein, Mark [1]
Puli, Aahlad [1]
Ranganath, Rajesh [1]
Jacobsen, Jorn-Henrik [2]
Chau, Olina [2]
Saporta, Adriel [2]
Miller, Andrew C. [2]
Affiliations
[1] NYU, New York, NY 10003 USA
[2] Apple, Cupertino, CA USA
Keywords
invariant representations; missing data; doubly robust estimator; MMD; causal inference
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spurious correlations allow flexible models to predict well during training but poorly on related test populations. Recent work has shown that models that satisfy particular independencies involving correlation-inducing nuisance variables have guarantees on their test performance. Enforcing such independencies requires nuisances to be observed during training. However, nuisances, such as demographics or image background labels, are often missing. Enforcing independence on just the observed data does not imply independence on the entire population. Here we derive MMD estimators used for invariance objectives under missing nuisances. On simulations and clinical data, optimizing through these estimates achieves test performance similar to using estimators that make use of the full data.
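
For a rough sense of the objective the abstract describes, the following is a minimal PyTorch sketch, not the paper's own estimator: an RBF-kernel squared-MMD penalty between the representation distributions of two nuisance groups, with inverse-probability weighting so that only examples whose nuisance label is observed contribute. The binary nuisance, the function names, and the propensity input obs_prob are illustrative assumptions; the doubly robust construction referenced in the keywords additionally combines such a weighting term with an outcome model.

import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between rows of x and y.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2_ipw(reps, z, observed, obs_prob, bandwidth=1.0):
    # Inverse-probability-weighted estimate of squared MMD between the
    # representation distributions of the two nuisance groups (z = 0, 1),
    # computed only from rows whose nuisance label is observed.
    #   reps:     (n, d) representations from the encoder
    #   z:        (n,)   binary nuisance labels (any filler value where missing)
    #   observed: (n,)   boolean mask, True where z is observed
    #   obs_prob: (n,)   estimated P(z observed | x), e.g. from a logistic model
    w = observed.float() / obs_prob.clamp(min=1e-6)   # IPW weights; 0 if z missing
    w0 = w * (z == 0).float()
    w1 = w * (z == 1).float()
    k = rbf_kernel(reps, reps, bandwidth)

    def weighted_mean(wa, wb):
        # Self-normalized weighted average of kernel values across two groups.
        return (wa[:, None] * k * wb[None, :]).sum() / (wa.sum() * wb.sum())

    return weighted_mean(w0, w0) + weighted_mean(w1, w1) - 2.0 * weighted_mean(w0, w1)

In a training loop, a term such as lambda * mmd2_ipw(reps, z, observed, obs_prob) would be added to the prediction loss to penalize dependence between the representation and the nuisance; doubly robust estimators, in general, augment such propensity weighting with an outcome model and remain consistent when either model is correctly specified.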
Pages: 12
Related Papers
50 records in total
  • [1] Learning Disentangled Representations of Video with Missing Data
    Massague, Armand Comas
    Zhang, Chi
    Feric, Zlatan
    Camps, Octavia
    Yu, Rose
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [2] Learning representations of multivariate time series with missing data
    Bianchi, Filippo Maria
    Livi, Lorenzo
    Mikalsen, Karl Oyvind
    Kampffmeyer, Michael
    Jenssen, Robert
    [J]. PATTERN RECOGNITION, 2019, 96
  • [3] Learning domain invariant representations of heterogeneous image data
    Mihailo Obrenović
    Thomas Lampert
    Miloš Ivanović
    Pierre Gançarski
    [J]. Machine Learning, 2023, 112 : 3659 - 3684
  • [4] Learning domain invariant representations of heterogeneous image data
    Obrenovic, Mihailo
    Lampert, Thomas
    Ivanovic, Milos
    Gancarski, Pierre
    [J]. MACHINE LEARNING, 2023, 112 (10) : 3659 - 3684
  • [5] FedIR: Learning Invariant Representations from Heterogeneous Data in Federated Learning
    Zheng, Xi
    Xie, Hongcheng
    Guo, Yu
    Bie, Rongfang
    [J]. 2023 19TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN 2023, 2023, : 644 - 651
  • [6] Unsupervised learning of invariant representations
    Anselmi, Fabio
    Leibo, Joel Z.
    Rosasco, Lorenzo
    Mutch, Jim
    Tacchetti, Andrea
    Poggio, Tomaso
    [J]. THEORETICAL COMPUTER SCIENCE, 2016, 633 : 112 - 121
  • [7] Toward Learning Robust and Invariant Representations with Alignment Regularization and Data Augmentation
    Wang, Haohan
    Huang, Zeyi
    Wu, Xindi
    Xing, Eric
    [J]. PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 1846 - 1856
  • [8] On Learning Invariant Representations for Domain Adaptation
    Zhao, Han
    des Combes, Remi Tachet
    Zhang, Kun
    Gordon, Geoffrey J.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [9] Learning Invariant Representations with Kernel Warping
    Ma, Yingyi
    Ganapathiraman, Vignesh
    Zhang, Xinhua
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [10] Invariant Representations Learning with Future Dynamics
    Hu, Wenning
    He, Ming
    Chen, Xirui
    Wang, Nianbin
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 128