TRACE NORM REGULARIZATION: REFORMULATIONS, ALGORITHMS, AND MULTI-TASK LEARNING

Cited by: 114

Authors
Pong, Ting Kei [1 ]
Tseng, Paul [1 ]
Ji, Shuiwang [2 ]
Ye, Jieping [2 ]
Affiliations
[1] Univ Washington, Dept Math, Seattle, WA 98195 USA
[2] Arizona State Univ, Dept Comp Sci & Engn, Ctr Evolutionary Funct Genom, Biodesign Inst, Tempe, AZ 85287 USA
Keywords
multi-task learning; gene expression pattern analysis; trace norm regularization; convex optimization; duality; semidefinite programming; proximal gradient method; THRESHOLDING ALGORITHM; RANK MINIMIZATION; MULTIPLE TASKS;
DOI
10.1137/090763184
CLC Classification Number
O29 [Applied Mathematics]
Discipline Classification Code
070104
Abstract
We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond the reach of general-purpose SDP solvers. Previous solution approaches apply proximal gradient methods to the primal problem. We derive new primal and dual reformulations of this problem, including a reduced dual formulation that involves minimizing a convex quadratic function over an operator-norm ball in matrix space. This reduced dual problem may be solved by gradient-projection methods, with each projection involving a singular value decomposition. The dual approach is compared with existing approaches, and its practical effectiveness is illustrated on simulations and on an application to gene expression pattern analysis.
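The abstract's reduced dual approach rests on two ingredients: projecting a matrix onto an operator-norm (spectral-norm) ball, which amounts to one SVD with singular values clipped at the radius, and iterating a gradient-projection step on the convex quadratic dual objective. A minimal NumPy sketch of these two ingredients follows; the function names and the fixed step size 1/L are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def project_operator_norm_ball(W, radius=1.0):
    # Frobenius-norm projection onto {X : ||X||_2 <= radius}:
    # compute an SVD and clip each singular value at `radius`.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, radius)) @ Vt

def gradient_projection(grad_f, lipschitz, radius, Lam0, iters=200):
    # Plain gradient projection with fixed step 1/L for
    # min f(Lam) s.t. ||Lam||_2 <= radius.
    # Each iteration costs one gradient evaluation and one SVD,
    # matching the per-projection SVD cost noted in the abstract.
    Lam = Lam0
    for _ in range(iters):
        step = Lam - grad_f(Lam) / lipschitz
        Lam = project_operator_norm_ball(step, radius)
    return Lam
```

As a sanity check on a toy quadratic f(Λ) = ½‖Λ − C‖_F² (gradient Λ − C, Lipschitz constant 1), the iteration converges to the projection of C onto the ball, as expected for a projected gradient method.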
Pages: 3465-3489
Page count: 25