Unsupervised Visual Attention and Invariance for Reinforcement Learning

Cited by: 8
Authors
Wang, Xudong [1 ]
Lian, Long [1 ]
Yu, Stella X. [1 ]
Affiliations
[1] Univ Calif Berkeley, ICSI, Berkeley, CA 94720 USA
DOI: 10.1109/CVPR46437.2021.00661
CLC Number: TP18 [Theory of Artificial Intelligence]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Vision-based reinforcement learning (RL) is successful, but how to generalize it to unknown test environments remains challenging. Existing methods focus on training an RL policy that is universal to changing visual domains, whereas we focus on extracting visual foreground that is universal, feeding clean invariant vision to the RL policy learner. Our method is completely unsupervised, without manual annotations or access to environment internals. Given videos of actions in a training environment, we learn how to extract foregrounds with unsupervised keypoint detection, followed by unsupervised visual attention to automatically generate a foreground mask per video frame. We can then introduce artificial distractors and train a model to reconstruct the clean foreground mask from noisy observations. Only this learned model is needed during test to provide distraction-free visual input to the RL policy learner. Our Visual Attention and Invariance (VAI) method significantly outperforms the state-of-the-art on visual domain generalization, gaining 15~49% (61~229%) more cumulative rewards per episode on DeepMind Control (our DrawerWorld Manipulation) benchmarks. Our results demonstrate that it is not only possible to learn domain-invariant vision without any supervision, but freeing RL from visual distractions also makes the policy more focused and thus far better.
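The abstract describes generating training pairs by overlaying artificial distractors on observations while keeping the unsupervised foreground mask as the clean reconstruction target. The sketch below illustrates that data-augmentation step only; the function name `add_distractors`, the random-patch distractor style, and the toy frame are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def add_distractors(frame, rng, n_patches=3, patch_size=8):
    """Overlay random solid-color patches (artificial distractors) on a
    copy of the frame. The clean foreground mask extracted beforehand
    stays untouched and serves as the reconstruction target."""
    noisy = frame.copy()
    h, w, _ = frame.shape
    for _ in range(n_patches):
        y = int(rng.integers(0, h - patch_size))
        x = int(rng.integers(0, w - patch_size))
        # one random RGB color per patch, broadcast over the patch area
        noisy[y:y + patch_size, x:x + patch_size] = rng.integers(0, 256, size=3)
    return noisy

rng = np.random.default_rng(0)
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[24:40, 24:40] = 255               # toy "foreground" object
mask = frame.sum(axis=-1) > 0           # stand-in for the unsupervised mask
noisy_obs = add_distractors(frame, rng)
# Training pair for the reconstruction model: (noisy_obs, mask)
```

A mask-reconstruction network trained on such pairs can then be applied at test time to strip distractions from observations before they reach the RL policy.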
Pages: 6673-6683 (11 pages)