Asymmetric Multi-task Learning Based on Task Relatedness and Loss

Cited by: 0
Authors
Lee, Giwoong [1 ]
Yang, Eunho [2 ]
Hwang, Sung Ju [1 ]
Affiliations
[1] UNIST, Sch Elect & Comp Engn, Ulsan, South Korea
[2] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
We propose a novel multi-task learning method that minimizes the effect of negative transfer by allowing asymmetric transfer between tasks based on task relatedness as well as the amount of individual task losses, which we refer to as Asymmetric Multi-task Learning (AMTL). To tackle this problem, we couple multiple tasks via a sparse, directed regularization graph that enforces each task parameter to be reconstructed as a sparse combination of other tasks, selected based on the task-wise loss. We present two different algorithms that jointly learn the task predictors as well as the regularization graph. The first algorithm solves the original learning objective using alternating optimization, and the second solves an approximation of it using a curriculum learning strategy that learns one task at a time. We perform experiments on multiple datasets for classification and regression, on which we obtain significant improvements in performance over single-task learning and existing multi-task learning models.
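To make the abstract's objective concrete, below is a minimal, illustrative sketch of an AMTL-style formulation, not the authors' reference implementation: it assumes squared loss, linear per-task models, and plain projected gradient descent on both blocks in place of the alternating and curriculum solvers the paper actually proposes. The function name `amtl_sketch` and the hyperparameters `lam` (graph reconstruction weight) and `mu` (loss-weighted sparsity on outgoing transfer) are illustrative choices, not names from the paper.

```python
# Hedged sketch: tasks are coupled through a sparse, directed transfer matrix B,
# each w_t is encouraged to be a sparse combination of the other tasks' parameters,
# and outgoing transfer from a task is penalised in proportion to that task's loss,
# so poorly-fit tasks transfer less (asymmetric transfer).
import numpy as np

def amtl_sketch(Xs, ys, lam=0.1, mu=0.1, lr=1e-2, iters=2000, seed=0):
    """Jointly fit task weights W (d x T) and a directed transfer graph B (T x T)."""
    rng = np.random.default_rng(seed)
    T, d = len(Xs), Xs[0].shape[1]
    W = rng.normal(scale=0.01, size=(d, T))
    B = np.zeros((T, T))                       # B[s, t]: transfer from task s to task t

    for _ in range(iters):
        losses = np.array([np.mean((Xs[t] @ W[:, t] - ys[t]) ** 2) for t in range(T)])

        # W step: loss terms (scaled by each task's outgoing transfer) + reconstruction
        R = W - W @ B                           # residual of reconstructing W from other tasks
        gW = np.zeros_like(W)
        for t in range(T):
            scale = 1.0 + mu * np.abs(B[t, :]).sum()        # more outgoing edges -> loss weighted up
            gW[:, t] += scale * (2.0 / len(ys[t])) * Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
        gW += 2.0 * lam * (R - R @ B.T)         # gradient of lam * ||W - W B||_F^2 w.r.t. W
        W -= lr * gW

        # B step: sparse, non-negative, zero-diagonal transfer graph
        gB = -2.0 * lam * W.T @ (W - W @ B)     # gradient of the reconstruction term w.r.t. B
        gB += mu * losses[:, None]              # l1 penalty on row t weighted by task t's loss
        B = np.maximum(B - lr * gB, 0.0)        # project onto B >= 0
        np.fill_diagonal(B, 0.0)                # no self-transfer
    return W, B
```

Under these assumptions, a task with a large training loss pays a higher price for outgoing edges in B, so transfer flows predominantly from well-fit tasks to poorly-fit ones, which is the asymmetry the abstract describes.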
Pages: 9