Task-Aware Dynamic Model Optimization for Multi-Task Learning

Cited by: 0
Authors
Choi, Sujin [1 ]
Jin, Hyundong [2 ]
Kim, Eunwoo [1 ,2 ]
Affiliations
[1] Chung Ang Univ, Dept Artificial Intelligence, Seoul 06974, South Korea
[2] Chung Ang Univ, Sch Comp Sci & Engn, Seoul 06974, South Korea
Keywords
Multi-task learning; resource-efficient learning; model optimization
DOI
10.1109/ACCESS.2023.3339793
Chinese Library Classification
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Multi-task learning (MTL) is a field in which a deep neural network simultaneously learns knowledge from multiple tasks. However, achieving resource-efficient MTL remains challenging because network parameters are entangled across tasks and task-specific complexity varies. Existing methods apply network compression techniques while maintaining comparable performance, but they often compress uniformly across all tasks without considering individual task complexity. This can lead to suboptimal solutions and memory inefficiency, since the parameters allotted to each task may be insufficient or excessive. To address these challenges, we propose a framework called Dynamic Model Optimization (DMO) that dynamically allocates network parameters to groups based on task-specific complexity. The framework consists of three key steps: measuring task similarity and task difficulty, grouping tasks, and allocating parameters. It computes both weight and loss similarities across tasks and uses the sample-wise loss as a measure of task difficulty. Tasks are grouped by similarity, and parameters are allocated through dynamic pruning according to task difficulty within each group. We apply the proposed framework to MTL with various classification datasets. Experimental results demonstrate that the proposed approach achieves high performance while using fewer network parameters than other MTL methods.
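The abstract outlines a three-step pipeline: score pairwise task similarity from weight and loss statistics, group similar tasks, then allocate parameters within each group in proportion to task difficulty. The NumPy sketch below illustrates one plausible way those steps could fit together. The cosine-blend similarity, the greedy grouping rule, the thresholds, and every function name here are assumptions made for illustration, not the authors' published formulation.

```python
# Minimal sketch of the three steps the abstract describes. All hyperparameters
# (alpha, threshold, group_budget) and the grouping rule are assumed, not from
# the paper.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def task_similarity(task_weights, task_loss_curves, alpha=0.5):
    """Step 1: blend weight similarity and loss-curve similarity per task pair.
    The 50/50 blend (alpha) is an assumed choice."""
    n = len(task_weights)
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            s = alpha * cosine(task_weights[i], task_weights[j]) \
                + (1.0 - alpha) * cosine(task_loss_curves[i], task_loss_curves[j])
            sim[i, j] = sim[j, i] = s
    return sim

def group_tasks(sim, threshold=0.6):
    """Step 2: greedy grouping. A task joins the first existing group it is
    sufficiently similar to; otherwise it starts a new group."""
    groups = []
    for t in range(sim.shape[0]):
        for g in groups:
            if all(sim[t, m] >= threshold for m in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

def allocate_keep_ratios(groups, difficulty, group_budget=0.5):
    """Step 3: split each group's parameter budget across its tasks in
    proportion to task difficulty (e.g., mean sample-wise loss), so harder
    tasks retain more parameters after pruning."""
    keep = np.zeros(len(difficulty))
    for g in groups:
        d = np.asarray([difficulty[t] for t in g], dtype=float)
        keep[g] = group_budget * len(g) * d / d.sum()
    return np.clip(keep, 0.05, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=128) for _ in range(4)]   # per-task head weights
    losses = [rng.random(size=20) for _ in range(4)]     # per-epoch loss curves
    difficulty = [1.2, 0.4, 0.9, 0.7]                    # mean sample-wise loss
    sim = task_similarity(weights, losses)
    groups = group_tasks(sim)
    print(groups, allocate_keep_ratios(groups, difficulty))
```

In this sketch, the returned keep ratios would then drive magnitude pruning of each task's parameters inside its group, so harder tasks (higher mean sample-wise loss) retain more capacity while easier tasks are pruned more aggressively, matching the resource-efficiency goal described above.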
Pages: 137709-137717
Page count: 9