Self-Supervised Attention-Aware Reinforcement Learning

Cited by: 0
Authors
Wu, Haiping [1 ,2 ]
Khetarpal, Khimya [1 ,2 ]
Precup, Doina [1 ,2 ,3 ]
Affiliations
[1] McGill Univ, Montreal, PQ, Canada
[2] Mila, Montreal, PQ, Canada
[3] Google DeepMind, Montreal, PQ, Canada
Keywords
PREDICT;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, most existing research uses it as an analysis tool rather than as an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach that can (1) learn to select regions of interest without explicit annotations, and (2) serve as a plug-in for existing deep RL methods to improve learning performance. We empirically show that self-supervised attention-aware deep RL methods outperform the baselines in both convergence rate and final performance. Furthermore, the proposed self-supervised attention is neither tied to a specific policy nor restricted to a specific scene. We posit that the proposed approach is a general self-supervised attention module for multi-task learning and transfer learning, and we empirically validate its generalization ability. Finally, we show that our method learns meaningful object keypoints, and we highlight the improvements both qualitatively and quantitatively.
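The abstract describes learned visual attention acting as a plug-in between an agent's visual encoder and its policy. As a rough illustration of that general idea only (not the paper's implementation; the function names, shapes, and the gating scheme below are all assumptions for the sketch), a spatial attention mask can gate a CNN feature map before it reaches the policy head:

```python
import numpy as np

def spatial_softmax(logits):
    """Softmax over all spatial positions, so the mask is non-negative and sums to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def attention_gate(features, attn_logits):
    """Gate a (C, H, W) feature map with an (H, W) spatial attention mask.

    Illustrative only: stands in for the kind of plug-in attention module
    the abstract describes, inserted between a CNN encoder and the policy
    head of an existing deep RL agent.
    """
    mask = spatial_softmax(attn_logits)   # (H, W) attention weights
    return features * mask                # broadcast the mask over channels

rng = np.random.default_rng(0)
features = rng.standard_normal((4, 3, 3))  # hypothetical encoder output
logits = rng.standard_normal((3, 3))       # hypothetical attention logits
gated = attention_gate(features, logits)
print(gated.shape)  # (4, 3, 3)
```

In a full agent the attention logits would themselves be produced by a learned network and trained with the paper's self-supervised objective; here they are random placeholders.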
Pages: 10311 - 10319
Page count: 9
Related Papers
50 records in total
  • [1] Reinforcement Learning with Attention that Works: A Self-Supervised Approach
    Manchin, Anthony
    Abbasnejad, Ehsan
    van den Hengel, Anton
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2019, PT V, 2019, 1143 : 223 - 230
  • [2] Self-supervised monocular depth estimation via two mechanisms of attention-aware cost volume
    Hong, Zhongcheng
    Wu, Qiuxia
    [J]. VISUAL COMPUTER, 2023, 39 (11) : 5937 - 5951
  • [3] Attention-aware Deep Reinforcement Learning for Video Face Recognition
    Rao, Yongming
    Lu, Jiwen
    Zhou, Jie
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 3951 - 3960
  • [4] Attention-Aware Face Hallucination via Deep Reinforcement Learning
    Cao, Qingxing
    Lin, Liang
    Shi, Yukai
    Liang, Xiaodan
    Li, Guanbin
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1656 - 1664
  • [5] There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning
    Grinsztajn, Nathan
    Ferret, Johan
    Pietquin, Olivier
    Preux, Philippe
    Geist, Matthieu
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021
  • [6] Self-supervised pre-training for joint optic disc and cup segmentation via attention-aware network
    Zhou, Zhiwang
    Zheng, Yuanchang
    Zhou, Xiaoyu
    Yu, Jie
    Rong, Shangjie
    [J]. BMC OPHTHALMOLOGY, 2024, 24 (01)
  • [7] Attention-Aware Sampling via Deep Reinforcement Learning for Action Recognition
    Dong, Wenkai
    Zhang, Zhaoxiang
    Tan, Tieniu
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-19), 2019, : 8247 - 8254
  • [8] Intrinsically Motivated Self-supervised Learning in Reinforcement Learning
    Zhao, Yue
    Du, Chenzhuang
    Zhao, Hang
    Li, Tiejun
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 3605 - 3615