Efficient Group Learning with Hypergraph Partition in Multi-task Learning

Cited by: 0
|
Authors
Yao, Quanming [1]
Jiang, Xiubao [1]
Gong, Mingming [1]
You, Xinge [1]
Liu, Yu [1]
Xu, Duanquan [1]
Affiliations
[1] Huazhong Univ Sci & Technol, Dept Elect & Informat Engn, Wuhan 430074, Peoples R China
Source
PATTERN RECOGNITION | 2012 / Vol. 321
Keywords
multi-task learning; sparse matrix permutation; hypergraph partitioning;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-task learning (MTL) has recently attracted wide attention. It assumes that related tasks should share similar parameter representations, so that joint learning is both appropriate and mutually beneficial. Researchers have also found that imposing a similar-parameter constraint on dissimilar tasks can be harmful to MTL; however, it is difficult to determine which tasks are similar. Z. Kang et al. [1] proposed learning the task groups and the parameters simultaneously to address this problem, but their method is inefficient and cannot scale to large data. In this paper, using a property of the parameter matrix, we describe the group learning process as permuting the parameter matrix into a block diagonal matrix, which can be modeled as a hypergraph partition problem. The resulting optimization algorithm scales well to large data. Extensive experiments demonstrate that our method is advantageous over existing MTL methods in terms of both accuracy and efficiency.
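The structural idea behind the abstract — task columns of the parameter matrix that share nonzero feature rows belong in one diagonal block, and each feature row acts as a hyperedge over the tasks that use it — can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' algorithm: it recovers fully separable groups via connected components (union-find) over shared feature supports, whereas the paper solves the harder hypergraph partitioning problem for matrices that are only approximately block diagonal. All names (`group_tasks`, `tol`) are hypothetical.

```python
import numpy as np

def group_tasks(W, tol=1e-8):
    """Group the task columns of W (features x tasks) whose nonzero
    feature supports overlap. Each feature row is a hyperedge over the
    tasks with nonzero entries; connected tasks form one block."""
    d, T = W.shape
    parent = list(range(T))

    def find(x):
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for i in range(d):
        # Tasks touched by feature i form one hyperedge; merge them.
        tasks = np.flatnonzero(np.abs(W[i]) > tol)
        for t in tasks[1:]:
            union(tasks[0], t)

    groups = {}
    for t in range(T):
        groups.setdefault(find(t), []).append(t)
    return list(groups.values())

# A 4x4 parameter matrix with two independent task groups: permuting
# columns group-by-group yields a block diagonal matrix.
W = np.array([[1.0, 2.0, 0.0, 0.0],
              [3.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 4.0, 1.0],
              [0.0, 0.0, 0.0, 5.0]])
print(group_tasks(W))  # -> [[0, 1], [2, 3]]
```

Once the groups are known, stacking each group's columns (and its supporting rows) contiguously realizes the block-diagonal permutation the abstract describes.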
Pages: 9-16
Number of pages: 8
Related Papers
50 records total
  • [1] An efficient active learning method for multi-task learning
    Xiao, Yanshan
    Chang, Zheng
    Liu, Bo
    KNOWLEDGE-BASED SYSTEMS, 2020, 190
  • [2] Latent Group Structured Multi-task Learning
    Niu, Xiangyu
    Sun, Yifan
    Sun, Jinyuan
    2018 CONFERENCE RECORD OF 52ND ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2018, : 850 - 854
  • [3] Efficient Multi-Task Feature Learning with Calibration
    Gong, Pinghua
    Zhou, Jiayu
    Fan, Wei
    Ye, Jieping
    PROCEEDINGS OF THE 20TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'14), 2014, : 761 - 770
  • [4] Multi-Task Networks With Universe, Group, and Task Feature Learning
    Pentyala, Shiva
    Liu, Mengwen
    Dreyer, Markus
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 820 - 830
  • [5] AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
    Sun, Ximeng
    Panda, Rameswar
    Feris, Rogerio
    Saenko, Kate
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [6] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng
    Lee, Chen-Yu
    Ulbricht, Daniel
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [7] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng
    Lee, Chen-Yu
    Ulbricht, Daniel
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [8] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    MEMETIC COMPUTING, 2020, 12 : 355 - 369
  • [9] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [10] Attentive multi-task learning for group itinerary recommendation
    Chen, Lei
    Cao, Jie
    Chen, Huanhuan
    Liang, Weichao
    Tao, Haicheng
    Zhu, Guixiang
    KNOWLEDGE AND INFORMATION SYSTEMS, 2021, 63 : 1687 - 1716