Sparse Multi-Task Reinforcement Learning

Cited by: 0
Authors
Calandriello, Daniele [1]
Lazaric, Alessandro [1]
Restelli, Marcello [2]
Affiliations
[1] INRIA Lille Nord Europe, Team SequeL, Lille, France
[2] Politecnico di Milano, DEIB, Milan, Italy
DOI
Not available
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In multi-task reinforcement learning (MTRL), the objective is to learn multiple tasks simultaneously and exploit their similarity to improve performance with respect to single-task learning. In this paper we investigate the case in which all the tasks can be accurately represented in a linear approximation space using the same small subset of the original (large) set of features. This is equivalent to assuming that the weight vectors of the task value functions are jointly sparse, i.e., that the set of their non-zero components is small and shared across tasks. Building on existing results in multi-task regression, we develop two multi-task extensions of the fitted Q-iteration algorithm. While the first algorithm assumes that the tasks are jointly sparse in the given representation, the second learns a transformation of the features in an attempt to find a sparser representation. For both algorithms we provide a sample-complexity analysis and numerical simulations.
Pages: 9
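The joint-sparsity idea in the abstract lends itself to a short sketch. What follows is a minimal illustration, not the authors' implementation: it combines a standard fitted Q-iteration loop with a jointly sparse regression step, solved here by proximal gradient descent (ISTA) under an l_{2,1} penalty. The function names (multi_task_fqi, phi), the solver choice, and the assumption that every task's dataset has the same size are illustrative.

```python
import numpy as np

def l21_prox(W, thresh):
    """Proximal operator of the l_{2,1} norm: soft-threshold each
    feature row of W, zeroing it across all tasks at once."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return W * scale

def jointly_sparse_regression(Phi, Y, lam, n_steps=500):
    """ISTA for  min_W sum_t (1/n)||Phi[t] @ W[:, t] - Y[t]||^2 + lam*||W||_{2,1}
    with Phi of shape (T, n, d) and Y of shape (T, n)."""
    T, n, d = Phi.shape
    W = np.zeros((d, T))
    # step size from the largest per-task Lipschitz constant
    lr = 1.0 / max(2.0 * np.linalg.norm(Phi[t], 2) ** 2 / n for t in range(T))
    for _ in range(n_steps):
        grad = np.stack(
            [2.0 * Phi[t].T @ (Phi[t] @ W[:, t] - Y[t]) / n for t in range(T)],
            axis=1)
        W = l21_prox(W - lr * grad, lr * lam)
    return W

def multi_task_fqi(datasets, phi, actions, gamma, lam, n_iters=50):
    """Hypothetical multi-task fitted Q-iteration loop: each iteration
    regresses the Bellman targets of all tasks jointly under the l_{2,1}
    penalty, so the learned Q-functions share one small set of active
    features. datasets[t] is a list of (s, a, r, s_next) tuples, assumed
    to have the same length for every task; phi(s, a) returns a d-vector."""
    T = len(datasets)
    s0, a0, _, _ = datasets[0][0]
    d = phi(s0, a0).shape[0]
    W = np.zeros((d, T))
    for _ in range(n_iters):
        Phi = np.array([[phi(s, a) for (s, a, r, s2) in data]
                        for data in datasets])              # (T, n, d)
        # greedy Bellman backup per task: r + gamma * max_b Q_t(s', b)
        Y = np.array([[r + gamma * max(phi(s2, b) @ W[:, t] for b in actions)
                       for (s, a, r, s2) in data]
                      for t, data in enumerate(datasets)])  # (T, n)
        W = jointly_sparse_regression(Phi, Y, lam)
    return W  # column t holds the Q-function weights of task t
```

Because the l_{2,1} penalty zeroes whole rows of W, each feature is either used by all tasks or discarded for all of them, which is exactly the shared-support structure the abstract assumes. The paper's second algorithm, which learns a feature transformation to uncover a sparser representation, is not sketched here.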