Invariant Representations without Adversarial Training

Citations: 0
Authors
Moyer, Daniel [1]
Gao, Shuyang [1]
Brekelmans, Rob [1]
Ver Steeg, Greg [1]
Galstyan, Aram [1]
Affiliations
[1] Univ Southern Calif, Informat Sci Inst, Los Angeles, CA 90089 USA
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Representations of data that are invariant to changes in specified factors are useful for a wide range of problems: removing potential biases in prediction problems, controlling the effects of covariates, and disentangling meaningful factors of variation. Unfortunately, learning representations that exhibit invariance to arbitrary nuisance factors yet remain useful for other tasks is challenging. Existing approaches cast the trade-off between task performance and invariance in an adversarial way, using an iterative minimax optimization. We show that adversarial training is unnecessary and sometimes counter-productive; we instead cast invariant representation learning as a single information-theoretic objective that can be directly optimized. We demonstrate that this approach matches or exceeds the performance of state-of-the-art adversarial approaches for learning fair representations and for generative modeling with controllable transformations.
Pages: 10
Related Papers
50 records in total
  • [1] Learning Signer-Invariant Representations with Adversarial Training
    Ferreira, Pedro M.
    Pernes, Diogo
    Rebelo, Ana
    Cardoso, Jaime S.
    TWELFTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2019), 2020, 11433
  • [2] Invariant Representations through Adversarial Forgetting
    Jaiswal, Ayush
    Moyer, Daniel
    Ver Steeg, Greg
    AbdAlmageed, Wael
    Natarajan, Premkumar
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 4272 - 4279
  • [3] Learning Invariant Representations From EEG via Adversarial Inference
    Ozdenizci, Ozan
    Wang, Ye
    Koike-Akino, Toshiaki
    Erdogmus, Deniz
    IEEE ACCESS, 2020, 8 : 27074 - 27085
  • [4] SPEAKER-INVARIANT TRAINING VIA ADVERSARIAL LEARNING
    Meng, Zhong
    Li, Jinyu
    Chen, Zhuo
    Zhao, Yong
    Mazalov, Vadim
    Gong, Yifan
    Juang, Biing-Hwang
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5969 - 5973
  • [5] ATTENTIVE ADVERSARIAL LEARNING FOR DOMAIN-INVARIANT TRAINING
    Meng, Zhong
    Li, Jinyu
    Gong, Yifan
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 6740 - 6744
  • [6] Efficient Adversarial Defense without Adversarial Training: A Batch Normalization Approach
    Zhu, Yao
    Wei, Xiao
    Zhu, Yue
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [7] Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective
    Xing, Yue
    Lin, Xiaofeng
    Song, Qifan
    Xu, Yi
    Zeng, Belinda
    Cheng, Guang
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [8] Adversarial Training Helps Transfer Learning via Better Representations
    Deng, Zhun
    Zhang, Linjun
    Vodrahalli, Kailas
    Kawaguchi, Kenji
    Zou, James
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] Disentangling factors of variation in deep representations using adversarial training
    Mathieu, Michael
    Zhao, Junbo
    Sprechmann, Pablo
    Ramesh, Aditya
    LeCun, Yann
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [10] Zero-Shot Dense Retrieval with Momentum Adversarial Domain Invariant Representations
    Xin, Ji
    Xiong, Chenyan
    Srinivasan, Ashwin
    Sharma, Ankita
    Jose, Damien
    Bennett, Paul N.
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 4008 - 4020