A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning

Cited by: 0
Authors
Tong, Xinyi [1 ]
Xu, Xiangxiang [2 ]
Huang, Shao-Lun [1 ]
Zheng, Lizhong [2 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Berkeley Shenzhen Inst, Beijing, Peoples R China
[2] MIT, Cambridge, MA 02139 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Current transfer learning algorithm designs mainly focus on the similarities between source and target tasks, while the impact of the sample sizes of these tasks is often not sufficiently addressed. This paper proposes a mathematical framework for quantifying transferability in multi-source transfer learning problems that accounts for both task similarities and the sample complexity of the learning models. In particular, we consider the setup where the models learned from different tasks are linearly combined to learn the target task, and we use the optimal combining coefficients to measure transferability. We then derive an analytical expression for this transferability measure, characterized by the sample sizes, the model complexity, and the similarities between source and target tasks, which provides fundamental insight into the knowledge transfer mechanism and guidance for algorithm design. Furthermore, we apply our analysis to practical learning tasks and establish a quantifiable transferability measure by exploiting a parameterized model. In addition, we develop an alternating iterative algorithm that implements our theoretical results for training deep neural networks in multi-source transfer learning tasks. Finally, experiments on image classification tasks show that our approach outperforms existing transfer learning algorithms in multi-source and few-shot scenarios.
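The abstract's core idea of linearly combining source models and reading transferability off the optimal combining coefficients can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's actual estimator: here the "source models" are just prediction vectors on a small target sample, and the coefficients are fit by ordinary least squares.

```python
# Hypothetical sketch: fit linear combining coefficients over source-model
# predictions on a few-shot target sample; larger coefficients suggest a
# more transferable source in this toy regression setup.
import numpy as np

rng = np.random.default_rng(0)

n_target = 40                        # few-shot target sample size
y = rng.normal(size=n_target)        # toy target labels

# Toy "source model" predictions on the target sample:
# source 0 aligns well with the target, source 1 is noisier,
# source 2 is unrelated.
preds = np.stack([
    y + 0.1 * rng.normal(size=n_target),
    y + 1.0 * rng.normal(size=n_target),
    rng.normal(size=n_target),
], axis=1)                           # shape (n_target, n_sources)

# Optimal combining coefficients in the least-squares sense.
alpha, *_ = np.linalg.lstsq(preds, y, rcond=None)

print(np.round(alpha, 2))            # weight per source task
```

In this sketch the coefficient for the well-aligned source dominates, while the unrelated source gets a weight near zero, mirroring the intuition that the optimal combining coefficients encode how much each source task is worth transferring from.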
Pages: 14
Related Papers
50 records in total
  • [1] A Representation Learning Framework for Multi-Source Transfer Parsing
    Guo, Jiang
    Che, Wanxiang
    Yarowsky, David
    Wang, Haifeng
    Liu, Ting
    [J]. THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 2734 - 2740
  • [2] Learning for amalgamation: A multi-source transfer learning framework for sentiment classification
    Nguyen, Cuong V.
    Le, Khiem H.
    Tran, Anh M.
    Pham, Quang H.
    Nguyen, Binh T.
    [J]. INFORMATION SCIENCES, 2022, 590 : 1 - 14
  • [3] Multi-source Transfer Learning Based on the Power Set Framework
    Song, Bingbing
    Pan, Jianhan
    Qu, Qiaoli
    Li, Zexin
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2023, 16 (01)
  • [5] Multi-source Transfer Learning for Deep Reinforcement Learning
    Garcia-Ramirez, Jesus
    Morales, Eduardo
    Escalante, Hugo Jair
    [J]. PATTERN RECOGNITION (MCPR 2021), 2021, 12725 : 131 - 140
  • [6] Bayesian neural multi-source transfer learning
    Chandra, Rohitash
    Kapoor, Arpit
    [J]. NEUROCOMPUTING, 2020, 378 : 54 - 64
  • [7] A Reinforcement Learning Framework for Multi-source Adaptive Streaming
    Nguyen, Nghia T.
    Vo, Phuong L.
    Nguyen, Thi Thanh Sang
    Le, Quan M.
    Do, Cuong T.
    Nguyen, Ngoc-Thanh
    [J]. COMPUTATIONAL COLLECTIVE INTELLIGENCE (ICCCI 2021), 2021, 12876 : 416 - 426
  • [8] Multi-source Transfer Learning with Multi-view Adaboost
    Xu, Zhijie
    Sun, Shiliang
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2012, PT III, 2012, 7665 : 332 - 339
  • [9] Multi-Source Tri-Training Transfer Learning
    Cheng, Yuhu
    Wang, Xuesong
    Cao, Ge
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2014, E97D (06): 1668 - 1672
  • [10] AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19
    Hadi Alhares
    Jafar Tanha
    Mohammad Ali Balafar
    [J]. Evolving Systems, 2023, 14 : 1101 - 1115