The aLS-SVM based multi-task learning classifiers

Cited: 0
Authors
Liyun Lu
Qiang Lin
Huimin Pei
Ping Zhong
Affiliations
[1] China Agricultural University, College of Science
Source
Applied Intelligence | 2018, Vol. 48
Keywords
Multi-task learning; Support vector machine; Asymmetric least squared loss
DOI: Not available
Abstract
Multi-task learning support vector machines (SVMs) have recently attracted considerable attention, since conventional single-task learning methods ignore the relatedness among multiple related tasks and train them separately. Unlike single-task learning, multi-task learning methods can capture the correlation among tasks and achieve improved performance by training all tasks simultaneously. In this paper, we make two assumptions on the relatedness among tasks: first, that the normal vectors of the related tasks share a certain common parameter value; second, that the models of the related tasks are close to one another and share a common model. Under these assumptions, we propose two multi-task learning methods for binary classification, named MTL-aLS-SVM I and MTL-aLS-SVM II, which take full advantage of multi-task learning and the asymmetric least squared loss. MTL-aLS-SVM I seeks a trade-off between the maximal expectile distance for each task model and the closeness of each task model to the averaged model. MTL-aLS-SVM II extends MTL-aLS-SVM I by allowing different kernel functions for different tasks. Both methods can be implemented easily by solving quadratic programming problems. In addition, we develop their special cases, which include the L2-SVM based multi-task learning methods (MTL-L2-SVM I and MTL-L2-SVM II) and the least squares SVM (LS-SVM) based multi-task learning methods (MTL-LS-SVM I and MTL-LS-SVM II). Although MTL-L2-SVM II and MTL-LS-SVM II appear as special cases, they are first proposed in this paper. The experimental results show that the proposed methods are very encouraging.
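To make the abstract's two ingredients concrete, the following is a minimal NumPy sketch, not the paper's formulation: it applies the standard expectile (asymmetric least squares) loss, L_τ(u) = τ·u² for u ≥ 0 and (1−τ)·u² otherwise, to the margin shortfall u = 1 − y·f(x), and couples the tasks by penalizing each task model's distance to the averaged model, as in the second relatedness assumption. The paper solves quadratic programs; this sketch uses plain gradient descent so that it is self-contained, and every name here (train_mtl_als, tau, C, mu) is illustrative rather than taken from the paper.

```python
# Illustrative sketch of expectile-loss multi-task learning with a
# pull-toward-the-average-model regularizer. Assumed, not from the paper.
import numpy as np

def als_grad(u, tau):
    """Derivative of the expectile loss L_tau(u) w.r.t. u = 1 - y*f(x)."""
    w = np.where(u >= 0, tau, 1.0 - tau)   # asymmetric weight per sample
    return 2.0 * w * u

def train_mtl_als(tasks, tau=0.7, C=1.0, mu=1.0, lr=0.05, iters=2000):
    """tasks: list of (X, y) pairs with labels y in {-1, +1}.

    Per-task objective (summed over tasks t):
        mu/2 * ||w_t - w_bar||^2  +  C * mean_i L_tau(1 - y_i * w_t @ x_i)
    where w_bar is the average of all task models (the shared 'common model').
    Returns one linear weight vector per task.
    """
    d = tasks[0][0].shape[1]
    W = np.zeros((len(tasks), d))
    for _ in range(iters):
        w_bar = W.mean(axis=0)                 # held fixed within each sweep
        for t, (X, y) in enumerate(tasks):
            u = 1.0 - y * (X @ W[t])           # margin shortfall per sample
            du = als_grad(u, tau)
            # d/dw of the mean loss: du_i * (-y_i * x_i), averaged over samples
            grad = mu * (W[t] - w_bar) - C * (X.T @ (du * y)) / len(y)
            W[t] -= lr * grad
    return W

# Toy usage: three related binary tasks whose true separators differ slightly.
rng = np.random.default_rng(0)
tasks = []
for a in (0.8, 1.0, 1.2):
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + a * X[:, 1])
    tasks.append((X, y))
W = train_mtl_als(tasks)
print(W)   # three weight vectors, each pulled toward their average
```

The mu term plays the role of the closeness-to-the-averaged-model trade-off described in the abstract: mu → 0 recovers independent single-task training, while a large mu forces all tasks onto one common model.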
Pages: 2393–2407
Page count: 14
Related Papers
50 records in total
  • [21] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng; Lee, Chen-Yu; Ulbricht, Daniel
    International Conference on Machine Learning (ICML), Vol. 119, 2020
  • [22] Service Recommendation Based on Contrastive Learning and Multi-Task Learning
    Yu, Ting; Zhang, Lihua; Liu, Hailin; Liu, Hongbing; Wang, Jiaojiao
    Computer Communications, 2024, 213: 285–295
  • [23] Research of Multi-Task Learning Based on Extreme Learning Machine
    Mao, Wentao; Xu, Jiucheng; Zhao, Shengjie; Tian, Mei
    International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2013, 21: 75–85
  • [24] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng; Lee, Chen-Yu; Ulbricht, Daniel
    25th Americas Conference on Information Systems (AMCIS 2019), 2019
  • [25] Boosted Multi-Task Learning
    Chapelle, Olivier; Shivaswamy, Pannagadatta; Vadrevu, Srinivas; Weinberger, Kilian; Zhang, Ya; Tseng, Belle
    Machine Learning, 2011, 85: 149–173
  • [26] An Overview of Multi-Task Learning
    Zhang, Yu; Yang, Qiang
    National Science Review, 2018, 5(1): 30–43
  • [27] On Partial Multi-Task Learning
    He, Yi; Wu, Baijun; Wu, Di; Wu, Xindong
    ECAI 2020: 24th European Conference on Artificial Intelligence, 2020, 325: 1174–1181
  • [28] Asynchronous Multi-Task Learning
    Baytas, Inci M.; Yan, Ming; Jain, Anil K.; Zhou, Jiayu
    2016 IEEE 16th International Conference on Data Mining (ICDM), 2016: 11–20
  • [29] Pareto Multi-Task Learning
    Lin, Xi; Zhen, Hui-Ling; Li, Zhenhua; Zhang, Qingfu; Kwong, Sam
    Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019
  • [30] Federated Multi-Task Learning
    Smith, Virginia; Chiang, Chao-Kai; Sanjabi, Maziar; Talwalkar, Ameet
    Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017