Semi-supervised Domain Adaptation via Minimax Entropy

Cited by: 381
Authors
Saito, Kuniaki [1 ]
Kim, Donghyun [1 ]
Sclaroff, Stan [1 ]
Darrell, Trevor [2 ]
Saenko, Kate [1 ]
Affiliations
[1] Boston Univ, Boston, MA 02215 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Funding
U.S. National Science Foundation;
Keywords
DOI
10.1109/ICCV.2019.00814
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision. However, we show that these techniques perform poorly when even a few labeled examples are available in the target domain. To address this semi-supervised domain adaptation (SSDA) setting, we propose a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model. Our base model consists of a feature encoding network, followed by a classification layer that computes the features' similarity to estimated prototypes (representatives of each class). Adaptation is achieved by alternately maximizing the conditional entropy of unlabeled target data with respect to the classifier and minimizing it with respect to the feature encoder. We empirically demonstrate the superiority of our method over many baselines, including conventional feature alignment and few-shot methods, setting a new state of the art for SSDA.
Pages: 8049-8057
Page count: 9
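The abstract above outlines the Minimax Entropy (MME) training scheme: a cosine-similarity classifier whose weights act as class prototypes, trained adversarially against the feature encoder on the conditional entropy of unlabeled target data. Below is a minimal PyTorch sketch of that objective, assuming a gradient-reversal layer to realize the min-max game in a single backward pass; the class and function names, temperature, and entropy weight `lambd` are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the minimax-entropy objective (illustrative, not official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()


class CosineClassifier(nn.Module):
    """Classification layer whose weight rows act as class prototypes; it scores
    features by temperature-scaled cosine similarity."""
    def __init__(self, num_features, num_classes, temperature=0.05):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(num_classes, num_features))
        self.temperature = temperature

    def forward(self, feats):
        feats = F.normalize(feats, dim=1)
        prototypes = F.normalize(self.weight, dim=1)
        return feats @ prototypes.t() / self.temperature


def mme_loss(encoder, classifier, x_labeled, y_labeled, x_unlabeled, lambd=0.1):
    """Cross-entropy on labeled data plus an adversarial entropy term on
    unlabeled target data."""
    # Supervised term on labeled source + few-shot target examples.
    ce = F.cross_entropy(classifier(encoder(x_labeled)), y_labeled)

    # Conditional entropy of unlabeled target predictions. The gradient reversal
    # on the features lets one backward pass maximize the entropy w.r.t. the
    # classifier (prototypes) while minimizing it w.r.t. the feature encoder.
    feats_u = GradReverse.apply(encoder(x_unlabeled))
    probs_u = F.softmax(classifier(feats_u), dim=1)
    entropy = -(probs_u * torch.log(probs_u + 1e-8)).sum(dim=1).mean()
    return ce - lambd * entropy  # minimize with a single optimizer step
```

In a training loop, `mme_loss` would be computed per batch and minimized jointly over encoder and classifier parameters; the reversal layer stands in for explicit alternating updates between the two players.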
Related papers
50 records in total
  • [21] Deep Semi-supervised Learning for Domain Adaptation
    Chen, Hung-Yu
    Chien, Jen-Tzung
    2015 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING, 2015,
  • [22] Semi-supervised transfer subspace for domain adaptation
    Pereira, Luis A. M.
    Torres, Ricardo da Silva
    PATTERN RECOGNITION, 2018, 75 : 235 - 249
  • [23] Knowledge Distillation for Semi-supervised Domain Adaptation
    Orbes-Arteaga, Mauricio
    Cardoso, Jorge
    Sorensen, Lauge
    Igel, Christian
    Ourselin, Sebastien
    Modat, Marc
    Nielsen, Mads
    Pai, Akshay
    OR 2.0 CONTEXT-AWARE OPERATING THEATERS AND MACHINE LEARNING IN CLINICAL NEUROIMAGING, 2019, 11796 : 68 - 76
  • [24] Inter-domain mixup for semi-supervised domain adaptation
    Li, Jichang
    Li, Guanbin
    Yu, Yizhou
    PATTERN RECOGNITION, 2024, 146
  • [25] Semi-supervised domain adaptation via Fredholm integral based kernel methods
    Wang, Wei
    Wang, Hao
    Zhang, Zhaoxiang
    Zhang, Chen
    Gao, Yang
    PATTERN RECOGNITION, 2019, 85 : 185 - 197
  • [26] Data Selection via Semi-supervised Recursive Autoencoders for SMT Domain Adaptation
    Lu, Yi
    Wong, Derek F.
    Chao, Lidia S.
    Wang, Longyue
    MACHINE TRANSLATION, CWMT 2014, 2014, 493 : 13 - 23
  • [27] Source-free semi-supervised domain adaptation via progressive Mixup
    Ma, Ning
    Wang, Haishuai
    Zhang, Zhen
    Zhou, Sheng
    Chen, Hongyang
    Bu, Jiajun
    KNOWLEDGE-BASED SYSTEMS, 2023, 262
  • [28] Semi-Supervised Domain Adaptation with Auto-Encoder via Simultaneous Learning
    Rahman, Md Mahmudur
    Panda, Rameswar
    Alam, Mohammad Arif Ul
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 402 - 411
  • [29] Feature Space Independent Semi-Supervised Domain Adaptation via Kernel Matching
    Xiao, Min
    Guo, Yuhong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2015, 37 (01) : 54 - 66
  • [30] Semi-supervised Heterogeneous Domain Adaptation via Disentanglement and Pseudo-labelling
    Dantas, Cassio F.
    Gaetano, Raffaele
    Ienco, Dino
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT III, ECML PKDD 2024, 2024, 14943 : 440 - 456