Improving Unsupervised Domain Adaptation with Variational Information Bottleneck

Cited by: 2
Authors
Song, Yuxuan [1 ]
Yu, Lantao [2 ]
Cao, Zhangjie [2 ]
Zhou, Zhiming [1 ]
Shen, Jian [1 ]
Shao, Shuo [1 ]
Zhang, Weinan [1 ]
Yu, Yong [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Stanford Univ, Stanford, CA 94305 USA
Funding
National Natural Science Foundation of China
DOI
10.3233/FAIA200257
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Domain adaptation aims to leverage the supervision signal of a labeled source domain to obtain an accurate model for a target domain where labels are unavailable. To transfer label information from the source domain, most existing methods employ a feature extractor and match the marginal distributions of the source and target domains in a shared feature space. In this paper, we show from an information-theoretic perspective that representation matching is an insufficient constraint on the feature space for obtaining a model that generalizes well to the target domain. We then propose variational bottleneck domain adaptation (VBDA), a new domain adaptation method that improves feature transferability by explicitly forcing the feature extractor to discard task-irrelevant factors and retain only the information essential to the task of interest in both the source and target domains. Extensive experiments demonstrate that VBDA significantly outperforms state-of-the-art methods on three domain adaptation benchmark datasets.
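The variational information bottleneck the abstract refers to is typically implemented by regularizing the feature distribution toward a standard Gaussian prior. The sketch below shows the generic closed-form KL regularizer and the combined objective; it is a minimal illustration of the standard VIB term, not the paper's actual VBDA implementation, and the function names and the `beta` weight are illustrative assumptions.

```python
import numpy as np

def vib_kl(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), averaged
    # over the batch: the bottleneck term that penalizes task-irrelevant
    # information retained in the stochastic features z ~ N(mu, sigma^2).
    return 0.5 * np.mean(
        np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1)
    )

def vib_objective(task_loss, mu, log_var, beta=1e-3):
    # Total objective: supervised task loss plus the beta-weighted
    # information bottleneck regularizer (beta is a tuning assumption).
    return task_loss + beta * vib_kl(mu, log_var)
```

With `mu = 0` and `log_var = 0` the encoder already matches the prior and the KL term vanishes; any deviation of the feature distribution from the prior is penalized in proportion to `beta`.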
Pages: 1499-1506
Page count: 8
Related Papers
(items 31-40 of 50)
  • [31] Sanodiya, Rakesh Kumar; Yao, Leehter. Discriminative information preservation: A general framework for unsupervised visual domain adaptation. Knowledge-Based Systems, 2021, 227.
  • [32] Gholami, Behnam; Sahu, Pritish; Rudovic, Ognjen; Bousmalis, Konstantinos; Pavlovic, Vladimir. Unsupervised multi-target domain adaptation: An information theoretic approach. IEEE Transactions on Image Processing, 2020, 29: 3993-4002.
  • [33] Ganin, Yaroslav; Lempitsky, Victor. Unsupervised domain adaptation by backpropagation. International Conference on Machine Learning, Vol. 37, 2015: 1180-1189.
  • [34] Xu, Heng; Shi, Chuanqi; Fan, Wenze; Chen, Zhenghan. Improving diversity and discriminability based implicit contrastive learning for unsupervised domain adaptation. Applied Intelligence, 2024, 54(20): 10007-10017.
  • [35] Wang, Jie; Zhang, Xiao-Lei. Improving pseudo labels with intra-class similarity for unsupervised domain adaptation. Pattern Recognition, 2023, 138.
  • [36] Rostami, M. Improving unsupervised domain adaptation through class-conditional compact representations. Neural Computing and Applications, 2024, 36(25): 15237-15254.
  • [37] Chalk, Matthew; Marre, Olivier; Tkacik, Gasper. Relevant sparse codes with variational information bottleneck. Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016, 29.
  • [38] Sun, Qingyun; Li, Jianxin; Peng, Hao; Wu, Jia; Fu, Xingcheng; Ji, Cheng; Yu, Philip S. Graph structure learning with variational information bottleneck. Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), 2022: 4165-4174.
  • [39] Geiger, Bernhard C.; Fischer, Ian S. A comparison of variational bounds for the information bottleneck functional. Entropy, 2020, 22(11): 1-12.
  • [40] Li, Weikai; Chen, Songcan. Unsupervised domain adaptation with progressive adaptation of subspaces. Pattern Recognition, 2022, 132.