Learning rates for multi-task regularization networks

Cited by: 6
Authors
Gui, Jie [1 ]
Zhang, Haizhang [2 ,3 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[2] Sun Yat Sen Univ, Sch Math Zhuhai, Zhuhai 519082, Peoples R China
[3] Sun Yat Sen Univ, Guangdong Prov Key Lab Computat Sci, Zhuhai 519082, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Vector-valued reproducing kernel Hilbert spaces; Multi-task learning; Matrix-valued reproducing kernels; Learning rates; Regularization networks; KERNEL HILBERT-SPACES; MERCER THEOREM; ERROR ANALYSIS; BANACH-SPACES; VECTOR
DOI
10.1016/j.neucom.2021.09.031
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multi-task learning is an important trend in machine learning in the era of artificial intelligence and big data. Despite a large body of research on learning rate estimates for various single-task machine learning algorithms, there is little parallel work for multi-task learning. We present a mathematical analysis of the learning rate of multi-task learning based on the theory of vector-valued reproducing kernel Hilbert spaces and matrix-valued reproducing kernels. For the typical multi-task regularization networks, an explicit learning rate depending on both the number of samples and the number of tasks is obtained. It reveals that the generalization ability of multi-task learning algorithms is indeed affected as the number of tasks increases. (c) 2021 Elsevier B.V. All rights reserved.
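To make the setting concrete, the multi-task regularization networks studied here are regularized least-squares estimators in a vector-valued RKHS. A minimal sketch, assuming a separable matrix-valued kernel K(x, x') = k(x, x') · B, where k is a scalar Gaussian kernel and B is a task-coupling matrix (the function names and parameter choices below are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Scalar Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multitask_regularization_network(X, Y, B, lam=0.1, gamma=1.0):
    """Fit f(x) = sum_i K(x, x_i) c_i for the separable matrix-valued
    kernel K(x, x') = k(x, x') * B by solving the regularized linear
    system (G kron B + n*lam*I) c = y, where G is the scalar Gram
    matrix on the n inputs and Y holds one column per task."""
    n, T = Y.shape
    G = rbf_kernel(X, X, gamma)                  # n x n scalar Gram matrix
    K = np.kron(G, B)                            # (nT) x (nT) block Gram matrix
    c = np.linalg.solve(K + n * lam * np.eye(n * T), Y.reshape(-1))
    C = c.reshape(n, T)                          # coefficient vectors c_i as rows

    def predict(Xnew):
        Gnew = rbf_kernel(Xnew, X, gamma)        # m x n cross-kernel
        return Gnew @ C @ B.T                    # m x T multi-task predictions
    return predict
```

With B = I the tasks decouple into independent single-task regularization networks; an off-diagonal B shares information across tasks, which is the regime where the paper's task-dependent learning rate is relevant.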
Pages: 243-251
Number of pages: 9
Related papers
50 in total
  • [1] Bayesian online multi-task learning using regularization networks
    Pillonetto, Gianluigi
    Dinuzzo, Francesco
    De Nicolao, Giuseppe
    [J]. 2008 AMERICAN CONTROL CONFERENCE, VOLS 1-12, 2008, : 4517 - +
  • [2] Multi-Task Learning with Capsule Networks
    Lei, Kai
    Fu, Qiuai
    Liang, Yuzhi
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [3] Optimistic Rates for Multi-Task Representation Learning
    Watkins, Austin
    Ullah, Enayat
    Thanh Nguyen-Tang
    Arora, Raman
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] Multi-task feature learning by using trace norm regularization
    Zhang Jiangmei
    Yu Binfeng
    Ji Haibo
    Wang, Kunpeng
    [J]. OPEN PHYSICS, 2017, 15 (01): : 674 - 681
  • [5] Adaptive dual graph regularization for clustered multi-task learning
    Liu, Cheng
    Li, Rui
    Chen, Sentao
    Zheng, Lin
    Jiang, Dazhi
    [J]. NEUROCOMPUTING, 2024, 574
  • [6] TRACE NORM REGULARIZATION: REFORMULATIONS, ALGORITHMS, AND MULTI-TASK LEARNING
    Pong, Ting Kei
    Tseng, Paul
    Ji, Shuiwang
    Ye, Jieping
    [J]. SIAM JOURNAL ON OPTIMIZATION, 2010, 20 (06) : 3465 - 3489
  • [7] Improved Bounds for Multi-task Learning with Trace Norm Regularization
    Liu, Weiwei
    [J]. THIRTY SIXTH ANNUAL CONFERENCE ON LEARNING THEORY, VOL 195, 2023, 195 : 700 - 714
  • [8] Convex Multi-Task Learning with Neural Networks
    Ruiz, Carlos
    Alaiz, Carlos M.
    Dorronsoro, Jose R.
    [J]. HYBRID ARTIFICIAL INTELLIGENT SYSTEMS, HAIS 2022, 2022, 13469 : 223 - 235
  • [9] Multi-task Sparse Gaussian Processes with Improved Multi-task Sparsity Regularization
    Zhu, Jiang
    Sun, Shiliang
    [J]. PATTERN RECOGNITION (CCPR 2014), PT I, 2014, 483 : 54 - 62
  • [10] Multi-Task Networks With Universe, Group, and Task Feature Learning
    Pentyala, Shiva
    Liu, Mengwen
    Dreyer, Markus
    [J]. 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 820 - 830