Multi-task gradient descent for multi-task learning

Cited by: 1
Authors
Lu Bai
Yew-Soon Ong
Tiantian He
Abhishek Gupta
Affiliations
[1] Nanyang Technological University, School of Computer Science and Engineering
[2] Agency for Science, Technology and Research, Singapore Institute of Manufacturing Technology
Source
Memetic Computing | 2020, Vol. 12
Keywords
Multi-task gradient descent; Knowledge transfer; Multi-task learning; Multi-label learning
DOI: not available
Abstract
Multi-Task Learning (MTL) aims to simultaneously solve a group of related learning tasks by leveraging the salutary knowledge memes contained in the multiple tasks to improve the generalization performance. Many prevalent approaches focus on designing a sophisticated cost function, which integrates all the learning tasks and explores the task-task relationship in a predefined manner. Different from previous approaches, in this paper, we propose a novel Multi-task Gradient Descent (MGD) framework, which improves the generalization performance of multiple tasks through knowledge transfer. The uniqueness of MGD lies in assuming individual task-specific learning objectives at the start, but with the cost functions implicitly changing during the course of parameter optimization based on task-task relationships. Specifically, MGD optimizes the individual cost function of each task using a reformative gradient descent iteration, where relations to other tasks are facilitated through effectively transferring parameter values (serving as the computational representations of memes) from other tasks. Theoretical analysis shows that the proposed framework is convergent under any appropriate transfer mechanism. Compared with existing MTL approaches, MGD provides a novel easy-to-implement framework for MTL, which can mitigate negative transfer in the learning procedure by asymmetric transfer. The proposed MGD has been compared with both classical and state-of-the-art approaches on multiple MTL datasets. The competitive experimental results validate the effectiveness of the proposed algorithm.
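The abstract describes each task running gradient descent on its own cost function while receiving parameter values transferred from the other tasks. A minimal sketch of one such iteration, assuming a row-stochastic mixing matrix `M` as the transfer mechanism — the paper only requires "any appropriate transfer mechanism", so `M` and `mgd_step` below are illustrative choices, not the authors' exact method:

```python
import numpy as np

def mgd_step(params, grads, lr, M):
    """One hypothetical multi-task gradient descent step.

    params: (T, d) array, one parameter vector per task
    grads:  list of T gradient functions, grads[i](w) -> (d,) array
    lr:     step size
    M:      (T, T) row-stochastic transfer matrix; M[i, j] is the weight
            task i places on task j's parameters (M need not be symmetric,
            so transfer can be asymmetric to limit negative transfer)
    """
    mixed = M @ params  # knowledge transfer: blend parameter values across tasks
    steps = np.stack([g(w) for g, w in zip(grads, params)])
    return mixed - lr * steps  # each task still descends its own cost

# Two toy 1-D tasks: f_i(w) = (w - c_i)^2 with individual minima at c = (0, 2).
c = np.array([0.0, 2.0])
grads = [lambda w, ci=ci: 2.0 * (w - ci) for ci in c]
M = np.array([[0.9, 0.1],
              [0.1, 0.9]])
w = np.zeros((2, 1))
for _ in range(500):
    w = mgd_step(w, grads, lr=0.1, M=M)
```

With this particular `M`, the iteration contracts to a coupled fixed point between the individual minima (roughly 0.5 and 1.5 here): the transfer term pulls related tasks' solutions toward each other while each task's own gradient keeps it anchored to its own objective.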
Pages: 355 - 369 (14 pages)
Related papers
50 results (10 shown)
  • [1] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    [J]. MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [2] Conflict-Averse Gradient Descent for Multi-task Learning
    Liu, Bo
    Liu, Xingchao
    Jin, Xiaojie
    Stone, Peter
    Liu, Qiang
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] Gradient Surgery for Multi-Task Learning
    Yu, Tianhe
    Kumar, Saurabh
    Gupta, Abhishek
    Levine, Sergey
    Hausman, Karol
    Finn, Chelsea
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [4] GDOD: Effective Gradient Descent using Orthogonal Decomposition for Multi-Task Learning
    Dong, Xin
    Wu, Ruize
    Xiong, Chao
    Li, Hai
    Cheng, Lei
    He, Yong
    Qian, Shiyou
    Cao, Jian
    Mo, Linjian
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 386 - 395
  • [5] MULTI-TASK DISTILLATION: TOWARDS MITIGATING THE NEGATIVE TRANSFER IN MULTI-TASK LEARNING
    Meng, Ze
    Yao, Xin
    Sun, Lifeng
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 389 - 393
  • [6] An overview of multi-task learning
    Zhang, Yu
    Yang, Qiang
    [J]. NATIONAL SCIENCE REVIEW, 2018, 5 (01) : 30 - 43
  • [7] Boosted multi-task learning
    Chapelle, Olivier
    Shivaswamy, Pannagadatta
    Vadrevu, Srinivas
    Weinberger, Kilian
    Zhang, Ya
    Tseng, Belle
    [J]. MACHINE LEARNING, 2011, 85 : 149 - 173
  • [8] On Partial Multi-Task Learning
    He, Yi
    Wu, Baijun
    Wu, Di
    Wu, Xindong
    [J]. ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1174 - 1181
  • [9] Federated Multi-Task Learning
    Smith, Virginia
    Chiang, Chao-Kai
    Sanjabi, Maziar
    Talwalkar, Ameet
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [10] Pareto Multi-Task Learning
    Lin, Xi
    Zhen, Hui-Ling
    Li, Zhenhua
    Zhang, Qingfu
    Kwong, Sam
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32