Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation

Cited: 0
Authors
Deng, Yuyang [1 ]
Kuzborskij, Ilja [2 ]
Mahdavi, Mehrdad [1 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Google DeepMind, London, England
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We consider the problem of learning a model from multiple heterogeneous sources with the goal of performing well on a new target distribution. The learner's goal is to mix these data sources in a target-distribution-aware way and simultaneously minimize the empirical risk on the mixed source. The literature has made tangible advances in establishing a theory of learning on mixture domains, but two problems remain open. First, how can the optimal mixture of sources be estimated for a given target domain? Second, when there are numerous target domains, how can the empirical risk minimization (ERM) problem be solved for each target, each with a possibly unique mixture of data sources, in a computationally efficient manner? In this paper we address both problems efficiently and with guarantees. We cast the first problem, mixture weight estimation, as a convex-nonconcave compositional minimax problem, and propose an efficient stochastic algorithm with provable stationarity guarantees. Next, for the second problem, we show that in certain regimes solving ERM for each target domain individually can be avoided: the parameters of the target-optimal model can instead be viewed as a non-linear function of the mixture coefficients. Building upon this, we show that in the offline setting, a GD-trained overparameterized neural network can provably learn this function and predict the model for a target domain instead of solving a dedicated ERM problem. Finally, we consider an online setting and propose a label-efficient online algorithm that predicts parameters for new targets given an arbitrary sequence of mixture coefficients, while enjoying regret guarantees.
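To make the mixture-weighted ERM objective from the abstract concrete, the sketch below minimizes a fixed mixture of per-source empirical risks, min_w Σ_i α_i R_i(w), by gradient descent on synthetic least-squares source domains. This is purely illustrative and not the paper's algorithm: the sources, the mixture weights `alpha` (assumed already estimated), and the step size are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three synthetic source domains with different ground-truth parameters.
sources = []
for shift in (0.0, 0.5, 1.0):
    X = rng.normal(size=(200, 5))
    w_true = np.ones(5) * (1.0 + shift)
    y = X @ w_true + 0.01 * rng.normal(size=200)
    sources.append((X, y))

# Fixed mixture weights over the sources (assumed given for this sketch).
alpha = np.array([0.2, 0.3, 0.5])

def mixed_risk(w):
    """Mixture-weighted empirical risk: sum_i alpha_i * (1/n_i) ||X_i w - y_i||^2."""
    return sum(a * np.mean((X @ w - y) ** 2) for a, (X, y) in zip(alpha, sources))

def mixed_risk_grad(w):
    """Gradient of the mixture-weighted empirical risk."""
    g = np.zeros_like(w)
    for a, (X, y) in zip(alpha, sources):
        g += a * (2.0 / len(y)) * (X.T @ (X @ w - y))
    return g

# Plain gradient descent on the mixed objective.
w = np.zeros(5)
for _ in range(500):
    w -= 0.05 * mixed_risk_grad(w)
```

In the multi-target setting the paper studies, each target domain would induce its own `alpha`, and rerunning such a solve per target is exactly the cost that predicting parameters as a function of the mixture coefficients is meant to avoid.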
Pages: 54
Related Papers (50 total)
  • [31] Multi-Source Domain Adaptation for Object Detection
    Yao, Xingxu
    Zhao, Sicheng
    Xu, Pengfei
    Yang, Jufeng
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 3253 - 3262
  • [32] On the analysis of adaptability in multi-source domain adaptation
    Redko, Ievgen
    Habrard, Amaury
    Sebban, Marc
    MACHINE LEARNING, 2019, 108 : 1635 - 1652
  • [33] Multi-Source Domain Adaptation with Sinkhorn Barycenter
    Komatsu, Tatsuya
    Matsui, Tomoko
    Gao, Junbin
    29TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2021), 2021, : 1371 - 1375
  • [34] Graphical Modeling for Multi-Source Domain Adaptation
    Xu, Minghao
    Wang, Hang
    Ni, Bingbing
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (03) : 1727 - 1741
  • [35] Multi-Source Attention for Unsupervised Domain Adaptation
    Cui, Xia
    Bollegala, Danushka
    1ST CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 10TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (AACL-IJCNLP 2020), 2020, : 873 - 883
  • [36] Multi-Source Domain Adaptation with Mixture of Experts
    Guo, Jiang
    Shah, Darsh J.
    Barzilay, Regina
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 4694 - 4703
  • [37] Multi-source Domain Adaptation for Face Recognition
    Yi, Haiyang
    Xu, Zhi
    Wen, Yimin
    Fan, Zhigang
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 1349 - 1354
  • [38] Training-Free Model Merging for Multi-target Domain Adaptation
    Li, Wenyi
    Gao, Huan-ang
    Gao, Mingju
    Tian, Beiwen
    Zhi, Rong
    Zhao, Hao
    COMPUTER VISION - ECCV 2024, PT XLVII, 2025, 15105 : 419 - 438
  • [39] Transformer Based Multi-Source Domain Adaptation
    Wright, Dustin
    Augenstein, Isabelle
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 7963 - 7974
  • [40] Automatic online multi-source domain adaptation
    Renchunzi, Xie
    Pratama, Mahardhika
    INFORMATION SCIENCES, 2022, 582 : 480 - 494