Attentive Multi-task Deep Reinforcement Learning

Cited by: 3

Authors
Bram, Timo [1]
Brunner, Gino [1]
Richter, Oliver [1]
Wattenhofer, Roger [1]
Affiliations
[1] Swiss Fed Inst Technol, Dept Informat Technol & Elect Engn, Zurich, Switzerland
DOI: 10.1007/978-3-030-46133-1_9
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact performance on another task. In contrast, we present an attention-based approach to multi-task deep reinforcement learning that does not require any a priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at state-level granularity. It thereby achieves positive knowledge transfer where possible and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
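The core mechanism the abstract describes (an attention network that, per state, softly assigns the policy computation to shared sub-networks) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' architecture: the linear sub-networks, the single-layer attention head, and all dimensions (`STATE_DIM`, `ACTION_DIM`, `N_SUBNETS`) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, N_SUBNETS = 4, 2, 3  # hypothetical sizes

# Hypothetical parameters: one linear sub-network ("knowledge group") each,
# plus an attention head that scores every sub-network for the current state.
W_sub = rng.normal(size=(N_SUBNETS, STATE_DIM, ACTION_DIM))
W_att = rng.normal(size=(STATE_DIM, N_SUBNETS))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def attentive_policy_logits(state):
    """Blend sub-network outputs with state-dependent attention weights."""
    weights = softmax(state @ W_att)                 # (N_SUBNETS,), sums to 1
    outputs = np.einsum("d,kda->ka", state, W_sub)   # one output per sub-network
    return weights @ outputs, weights                # weighted mixture of outputs

state = rng.normal(size=STATE_DIM)
logits, weights = attentive_policy_logits(state)
```

Because the softmax weights are recomputed for every state, two tasks can share a sub-network in states where their dynamics agree while routing to disjoint sub-networks in states where they interfere, which is the state-level granularity the abstract refers to.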
Pages: 134-149 (16 pages)