On Learning Invariant Representations for Domain Adaptation

Cited by: 0
Authors
Zhao, Han [1]
des Combes, Remi Tachet [2]
Zhang, Kun [1]
Gordon, Geoffrey J. [1,2]
Affiliations
[1] Carnegie Mellon Univ, Machine Learning Dept, Pittsburgh, PA 15213 USA
[2] Microsoft Res, Montreal, PQ, Canada
Funding
US National Institutes of Health; US National Science Foundation
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Due to the ability of deep neural nets to learn rich representations, recent advances in unsupervised domain adaptation have focused on learning domain-invariant features that achieve a small error on the source domain. The hope is that the learnt representation, together with the hypothesis learnt from the source domain, can generalize to the target domain. In this paper, we first construct a simple counterexample showing that, contrary to common belief, the above conditions are not sufficient to guarantee successful domain adaptation. In particular, the counterexample exhibits conditional shift: the class-conditional distributions of input features change between source and target domains. To give a sufficient condition for domain adaptation, we propose a natural and interpretable generalization upper bound that explicitly takes into account the aforementioned shift. Moreover, we shed new light on the problem by proving an information-theoretic lower bound on the joint error of any domain adaptation method that attempts to learn invariant representations. Our result characterizes a fundamental tradeoff between learning invariant representations and achieving small joint error on both domains when the marginal label distributions differ from source to target. Finally, we conduct experiments on real-world datasets that corroborate our theoretical findings. We believe these insights are helpful in guiding the future design of domain adaptation and representation learning algorithms.
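The lower bound described in the abstract can be illustrated numerically. The sketch below assumes the bound takes roughly the form ε_S + ε_T ≥ ½·(d_JS(label marginals) − d_JS(feature marginals))², where d_JS denotes the square root of the Jensen-Shannon divergence; the exact constants and conditions should be checked against the paper itself. With perfectly invariant features (feature-marginal distance 0) but shifted label marginals, the bound on the joint error is strictly positive:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2 logs) between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical marginal label distributions for a binary task.
p_y_source = [0.9, 0.1]   # source: 90% of examples are class 0
p_y_target = [0.1, 0.9]   # target: 10% of examples are class 0

# If the learned representation is perfectly domain-invariant, the
# feature-marginal distance is 0, so the lower bound on the joint error
# eps_S + eps_T reduces to 0.5 * d_JS(labels)^2 = 0.5 * JS(labels).
d_js_labels = np.sqrt(js_divergence(p_y_source, p_y_target))
lower_bound = 0.5 * d_js_labels ** 2

print(f"JS divergence between label marginals: {d_js_labels**2:.3f}")
print(f"Lower bound on joint error eps_S + eps_T: {lower_bound:.3f}")
```

Here the bound is about 0.27, i.e. no hypothesis on top of a fully invariant representation can be accurate on both domains, matching the tradeoff the paper characterizes.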
Pages: 10
Related Papers
50 in total
  • [1] Learning Domain Invariant Word Representations for Parsing Domain Adaptation
    Qiao, Xiuming
    Zhang, Yue
    Zhao, Tiejun
NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING (NLPCC 2019), PT I, 2019, 11838: 801-813
  • [2] Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation
    Li, Bo
    Wang, Yezhen
    Zhang, Shanghang
    Li, Dongsheng
    Keutzer, Kurt
    Darrell, Trevor
    Zhao, Han
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 1104-1113
  • [3] Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation
    Sahoo, Aadarsh
    Panda, Rameswar
    Feris, Rogerio
    Saenko, Kate
    Das, Abir
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023: 4199-4208
  • [4] Domain-Invariant Feature Learning for Domain Adaptation
    Tu, Ching-Ting
    Lin, Hsiau-Wen
    Lin, Hwei Jen
    Tokuyama, Yoshimasa
    Chu, Chia-Hung
INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2023, 37 (03)
  • [5] Learning an Invariant Hilbert Space for Domain Adaptation
    Herath, Samitha
    Harandi, Mehrtash
    Porikli, Fatih
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 3956-3965
  • [6] Learning Semantic Representations for Unsupervised Domain Adaptation
    Xie, Shaoan
    Zheng, Zibin
    Chen, Liang
    Chen, Chuan
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [7] Learning explicitly transferable representations for domain adaptation
    Jing, Mengmeng
    Li, Jingjing
    Lu, Ke
    Zhu, Lei
    Yang, Yang
NEURAL NETWORKS, 2020, 130: 39-48
  • [8] Learning Causal Representations for Robust Domain Adaptation
    Yang, Shuai
    Yu, Kui
    Cao, Fuyuan
    Liu, Lin
    Wang, Hao
    Li, Jiuyong
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (03): 2750-2764
  • [9] Learning Transferrable Representations for Unsupervised Domain Adaptation
    Sener, Ozan
    Song, Hyun Oh
    Saxena, Ashutosh
    Savarese, Silvio
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [10] Learning domain invariant representations of heterogeneous image data
    Mihailo Obrenović
    Thomas Lampert
    Miloš Ivanović
    Pierre Gançarski
Machine Learning, 2023, 112: 3659-3684