Multi-task Learning by Pareto Optimality

Cited by: 2
Authors
Dyankov, Deyan [1 ]
Riccio, Salvatore Danilo [2 ,3 ]
Di Fatta, Giuseppe [1 ]
Nicosia, Giuseppe [2 ]
Affiliations
[1] University of Reading, Reading, Berkshire, England
[2] University of Cambridge, Cambridge, England
[3] Queen Mary University of London, London, England
Keywords
Multitask learning; Neural and evolutionary computing; Deep neuroevolution; Hypervolume; Kullback-Leibler divergence; Evolution strategy; Deep artificial neural networks; Atari 2600 games; Algorithm
DOI
10.1007/978-3-030-37599-7_50
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Deep Neural Networks (DNNs) are often criticized for their inability to learn more than one task at a time; multitask learning is an emerging research area that aims to overcome this limitation. In this work, we introduce the Pareto Multitask Learning framework, a tool that shows how effectively a DNN learns a shared representation common to a set of tasks. We also show experimentally that the optimization process can be extended so that a single DNN simultaneously learns to master two or more Atari games: using a single weight parameter vector, our network obtains sub-optimal results on up to four games.
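For illustration only (this is not code from the paper): the abstract and keywords name Pareto optimality and the hypervolume indicator as the yardsticks for comparing candidate networks across tasks. A minimal Python sketch of both notions, with made-up per-game scores, might look as follows.

    import numpy as np

    def dominates(a, b):
        # True if score vector a Pareto-dominates b: no worse on every
        # task and strictly better on at least one (scores maximized,
        # e.g. per-game returns of one candidate weight vector).
        a, b = np.asarray(a), np.asarray(b)
        return bool(np.all(a >= b) and np.any(a > b))

    def pareto_front(scores):
        # Indices of the non-dominated score vectors.
        return [i for i, s in enumerate(scores)
                if not any(dominates(t, s)
                           for j, t in enumerate(scores) if j != i)]

    def hypervolume_2d(scores, ref):
        # Area dominated by the two-task Pareto front with respect to
        # reference point ref (both objectives maximized); larger is better.
        front = sorted((scores[i] for i in pareto_front(scores)),
                       key=lambda s: s[0], reverse=True)
        hv, prev_y = 0.0, ref[1]
        for x, y in front:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
        return hv

    # Hypothetical scores of three candidate networks on two games.
    scores = [(120.0, 30.0), (90.0, 55.0), (60.0, 20.0)]
    print(pareto_front(scores))            # [0, 1]
    print(hypervolume_2d(scores, (0, 0)))  # 5850.0

In an evolution-strategy loop of the kind the keywords suggest, such an indicator could serve as a scalar fitness rewarding candidates that trade off the games well; the function names and score values above are assumptions for this sketch, not the authors' implementation.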
Pages: 605-618 (14 pages)
Related Papers (50 records in total)
  • [1] Pareto Multi-Task Learning
    Lin, Xi
    Zhen, Hui-Ling
    Li, Zhenhua
    Zhang, Qingfu
    Kwong, Sam
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [2] Pareto Multi-task Deep Learning
    Riccio, Salvatore D.
    Dyankov, Deyan
    Jansen, Giorgio
    Di Fatta, Giuseppe
    Nicosia, Giuseppe
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 132 - 141
  • [3] A Multi-objective / Multi-task Learning Framework Induced by Pareto Stationarity
    Momma, Michinari
    Dong, Chaosheng
    Liu, Jia
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [4] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    [J]. MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [5] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng
    Lee, Chen-Yu
    Ulbricht, Daniel
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [6] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng
    Lee, Chen-Yu
    Ulbricht, Daniel
    [J]. 25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [7] An overview of multi-task learning
    Zhang, Yu
    Yang, Qiang
    [J]. NATIONAL SCIENCE REVIEW, 2018, 5 (01) : 30 - 43
  • [8] Boosted multi-task learning
    Chapelle, Olivier
    Shivaswamy, Pannagadatta
    Vadrevu, Srinivas
    Weinberger, Kilian
    Zhang, Ya
    Tseng, Belle
    [J]. MACHINE LEARNING, 2011, 85 : 149 - 173
  • [9] On Partial Multi-Task Learning
    He, Yi
    Wu, Baijun
    Wu, Di
    Wu, Xindong
    [J]. ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1174 - 1181