Focused multi-task learning in a Gaussian process framework

Cited by: 17
Authors
Leen, Gayle [1 ]
Peltonen, Jaakko [1 ]
Kaski, Samuel [1 ,2 ]
Affiliations
[1] Aalto Univ, Helsinki Inst Informat Technol HIIT, Dept Informat & Comp Sci, Aalto 00076, Finland
[2] Univ Helsinki, Dept Comp Sci, Helsinki Inst Informat Technol HIIT, SF-00510 Helsinki, Finland
Funding
Academy of Finland
Keywords
Gaussian processes; Multi-task learning; Transfer learning; Negative transfer;
DOI
10.1007/s10994-012-5302-y
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Multi-task learning, in which a set of tasks is learned together, can improve performance on the individual learning tasks. Gaussian process models have been applied to learning a set of tasks on different data sets, by constructing joint priors for functions underlying the tasks. In these previous Gaussian process models, the setting has been symmetric in the sense that all the tasks have been assumed to be equally important, whereas in settings such as transfer learning the goal is asymmetric: to enhance performance in a target task given the other tasks. We propose a focused Gaussian process model which introduces an "explaining away" model for each of the additional tasks to model their non-related variation, in order to focus the transfer on the task of interest. This focusing helps reduce the key problem of negative transfer, which may even cause performance to decrease if the tasks are not closely related. In experiments, our model improves performance compared to single-task learning, symmetric multi-task learning using hierarchical Dirichlet processes, transfer learning based on predictive structure learning, and symmetric multi-task learning with Gaussian processes.
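To make the asymmetric setting described in the abstract concrete, the following Python/NumPy sketch shows one way such a focused multi-task GP prior could be structured. It is only an illustration under stated assumptions: the functions rbf, joint_covariance and predict_target, the source-task weights rho, and the shared-plus-task-specific kernel decomposition are hypothetical choices for this sketch, not the authors' implementation.

import numpy as np

# Minimal sketch (an illustrative assumption, not the paper's exact model):
# the target task (task 0) is f_0 = f_shared, and each source task t > 0 is
# f_t = rho_t * f_shared + g_t, where g_t is a task-specific "explaining away"
# GP that absorbs source-task variation unrelated to the target. The target's
# posterior then conditions on data from all tasks.

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between two input sets.
    d2 = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2.0 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def joint_covariance(X_list, rho, ls_shared=1.0, ls_specific=0.5, noise=0.1):
    # Joint covariance over the training inputs of all tasks (task 0 = target).
    weights = np.concatenate(([1.0], rho))       # target's shared-component weight fixed to 1
    blocks = []
    for i, Xi in enumerate(X_list):
        row = []
        for j, Xj in enumerate(X_list):
            K = weights[i] * weights[j] * rbf(Xi, Xj, ls_shared)
            if i == j:
                if i > 0:                        # explaining-away kernel for source tasks only
                    K = K + rbf(Xi, Xj, ls_specific)
                K = K + noise**2 * np.eye(len(Xi))
            row.append(K)
        blocks.append(row)
    return np.block(blocks)

def predict_target(X_list, y_list, X_star, rho, ls_shared=1.0, ls_specific=0.5, noise=0.1):
    # Posterior mean of the target task at X_star, conditioned on all tasks' data.
    K = joint_covariance(X_list, rho, ls_shared, ls_specific, noise)
    y = np.concatenate(y_list)
    weights = np.concatenate(([1.0], rho))
    # Test points belong to the target task, so they covary with task j's data
    # only through the shared component, scaled by that task's weight.
    K_star = np.hstack([w * rbf(X_star, X, ls_shared) for w, X in zip(weights, X_list)])
    return K_star @ np.linalg.solve(K, y)

# Toy usage: a small target task and two larger, only loosely related source tasks.
rng = np.random.default_rng(0)
X_t = rng.uniform(-3, 3, (8, 1))
y_t = np.sin(X_t[:, 0]) + 0.1 * rng.standard_normal(8)
X_a = rng.uniform(-3, 3, (40, 1))
y_a = np.sin(X_a[:, 0]) + 0.5 * np.cos(3 * X_a[:, 0])    # extra structure a g_t should absorb
X_b = rng.uniform(-3, 3, (40, 1))
y_b = 0.8 * np.sin(X_b[:, 0]) + 0.3 * rng.standard_normal(40)
X_star = np.linspace(-3, 3, 7)[:, None]
print(predict_target([X_t, X_a, X_b], [y_t, y_a, y_b], X_star, rho=np.array([0.7, 0.9])))

In this sketch the cos(3x) structure in the first source task can be explained by that task's own kernel rather than being forced into the shared component, which is the intuition behind the reduced negative transfer claimed in the abstract; here the task weights rho and the kernel hyperparameters are fixed by hand purely for illustration, whereas the paper's model infers its parameters from data.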
Pages: 157-182
Number of pages: 26
Related Papers
50 records in total
  • [31] Huang, He; Yang, Guang; Zhang, Wenbo; Xu, Xiaomei; Yang, Weiji; Jiang, Weiwei; Lai, Xiaobo. A Deep Multi-Task Learning Framework for Brain Tumor Segmentation. Frontiers in Oncology, 2021, 11.
  • [32] Zhang, Xuejing; Lv, Xueqiang; Zhou, Qiang. Chinese Dialogue Analysis Using Multi-Task Learning Framework. 2018 International Conference on Asian Language Processing (IALP), 2018: 102-107.
  • [33] Wu, Yi-Feng; Zhan, De-Chuan; Jiang, Yuan. DMTMV: A Unified Learning Framework for Deep Multi-Task Multi-View Learning. 2018 9th IEEE International Conference on Big Knowledge (ICBK), 2018: 49-56.
  • [34] Yue, Chengfei; Gao, Tian; Lu, Lang; Lin, Tao; Wu, Yunhua. Probabilistic movement primitives based multi-task learning framework. Computers & Industrial Engineering, 2024, 191.
  • [35] Han, Lei; Zhang, Yu; Song, Guojie; Xie, Kunqing. Encoding Tree Sparsity in Multi-Task Learning: A Probabilistic Framework. Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014: 1854-1860.
  • [36] Yu, Ting; Yu, Dongjin; Wang, Dongjing; Yang, Quanxin; Hu, Xueyou. Iterative framework based on multi-task learning for service recommendation. Journal of Systems and Software, 2024, 207.
  • [37] Li, Changliang; Kong, Cunliang; Zhao, Yan. A Joint Multi-Task Learning Framework for Spoken Language Understanding. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018: 6054-6058.
  • [39] Tao, Xuewen; Ha, Mingming; Guo, Xiaobo; Ma, Qiongxu; Cheng, Hongwei; Lin, Wenfang; Cheng, Linxun; Han, Bing. Task Aware Feature Extraction Framework for Sequential Dependence Multi-Task Learning. Proceedings of the 17th ACM Conference on Recommender Systems (RecSys 2023), 2023: 151-160.
  • [40] Yadav, Shweta; Ekbal, Asif; Saha, Sriparna; Bhattacharyya, Pushpak. A Unified Multi-task Adversarial Learning Framework for Pharmacovigilance Mining. 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 2019: 5234-5245.