Enhancing visual reinforcement learning with State-Action Representation

Cited: 0
Authors
Yan, Mengbei [1 ]
Lyu, Jiafei [1 ]
Li, Xiu [1 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Lishui Rd, Shenzhen 518055, Peoples R China
Keywords
Visual reinforcement learning; State-action representation; Sample efficiency
DOI
10.1016/j.knosys.2024.112487
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite the remarkable progress made in visual reinforcement learning (RL) in recent years, sample inefficiency remains a major challenge. Many existing approaches attempt to address this by extracting better representations from raw images, using techniques such as data augmentation or auxiliary tasks. However, these methods overlook the environment dynamics information embedded in the collected transitions, which can be crucial for efficient control. In this paper, we present STAR: State-Action Representation learning, a simple yet effective approach for visual continuous control. STAR learns a joint state-action representation by modeling the dynamics of the environment in the latent space. By incorporating the learned joint state-action representation into the critic, STAR enhances value estimation with latent dynamics information. We theoretically show that the value function can still converge to the optimum when additional representation inputs are involved. On various challenging visual continuous control tasks from the DeepMind Control Suite, STAR achieves significant improvements in sample efficiency compared to strong baseline algorithms.
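
The mechanism the abstract describes can be sketched as follows. This is a minimal PyTorch sketch of the idea, not the authors' implementation: every module name (StateActionEncoder, LatentDynamics, AugmentedCritic), architecture, and loss choice below is an assumption made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class StateActionEncoder(nn.Module):
    # Maps a state feature (e.g. the output of an image encoder) and an
    # action to a single joint latent vector.
    def __init__(self, state_dim: int, action_dim: int, latent_dim: int = 50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, state_feat, action):
        return self.net(torch.cat([state_feat, action], dim=-1))


class LatentDynamics(nn.Module):
    # Predicts the next state feature from the joint latent, so the latent
    # is forced to carry environment-dynamics information.
    def __init__(self, latent_dim: int, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, z):
        return self.net(z)


class AugmentedCritic(nn.Module):
    # Q-network conditioned on (state, action) plus the joint latent.
    def __init__(self, state_dim: int, action_dim: int, latent_dim: int):
        super().__init__()
        self.q = nn.Sequential(
            nn.Linear(state_dim + action_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state_feat, action, z):
        return self.q(torch.cat([state_feat, action, z], dim=-1))


def dynamics_loss(encoder, dynamics, s, a, s_next):
    # Auxiliary objective: the joint latent must predict the next state
    # feature. The stop-gradient on the target is one common choice, not
    # necessarily the paper's.
    z = encoder(s, a)
    return F.mse_loss(dynamics(z), s_next.detach())


# Example wiring (shapes only): with s, s_next of shape (batch, state_dim)
# and a of shape (batch, action_dim),
#   z = encoder(s, a); q = critic(s, a, z)
# and dynamics_loss(...) is added to the usual critic loss.

One way to read the convergence claim: because the latent z is a deterministic function of (s, a), a critic Q(s, a, z) is still a function on the original state-action space, so the standard Bellman fixed-point argument should carry over. The sketch above reflects only this structural point, not the paper's proof.
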
Pages: 11
Related Papers
50 records in total
  • [31] Reinforcement learning in multi-dimensional state-action space using random rectangular coarse coding and Gibbs sampling
    Kimura, Hajime
    2007 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-9, 2007, : 88 - 95
  • [32] StARformer: Transformer with State-Action-Reward Representations for Visual Reinforcement Learning
    Shang, Jinghuan
    Kahatapitiya, Kumara
    Li, Xiang
    Ryoo, Michael S.
    COMPUTER VISION, ECCV 2022, PT XXXIX, 2022, 13699 : 462 - 479
  • [33] Enhancing visual communication through representation learning
    Wei, Yuhan
    Lee, Changwook
    Han, Seokwon
    Kim, Anna
    FRONTIERS IN NEUROSCIENCE, 2024, 18
  • [34] State Representation Learning for Effective Deep Reinforcement Learning
    Zhao, Jian
    Zhou, Wengang
    Zhao, Tianyu
    Zhou, Yun
    Li, Houqiang
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020
  • [35] Undesired state-action prediction in multi-agent reinforcement learning for linked multi-component robotic system control
    Fernandez-Gauna, Borja
    Marques, Ion
    Grana, Manuel
    INFORMATION SCIENCES, 2013, 232 : 309 - 324
  • [36] State Action Separable Reinforcement Learning
    Zhang, Ziyao
    Ma, Liang
    Leung, Kin K.
    Poularakis, Konstantinos
    Srivatsa, Mudhakar
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 123 - 132
  • [37] Scaling Up Q-Learning via Exploiting State-Action Equivalence
    Lyu, Yunlian
    Come, Aymeric
    Zhang, Yijie
    Talebi, Mohammad Sadegh
    ENTROPY, 2023, 25 (04)
  • [38] Automated Driving Highway Traffic Merging using Deep Multi-Agent Reinforcement Learning in Continuous State-Action Spaces
    Schester, Larry
    Ortiz, Luis E.
    2021 32ND IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2021, : 280 - 287
  • [39] Value Preserving State-Action Abstractions
    Abel, David
    Umbanhowar, Nate
    Khetarpal, Khimya
    Arumugam, Dilip
    Precup, Doina
    Littman, Michael
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 1639 - 1649
  • [40] Enhancing State Representation in Multi-Agent Reinforcement Learning for Platoon-Following Models
    Lin, Hongyi
    Lyu, Cheng
    He, Yixu
    Liu, Yang
    Gao, Kun
    Qu, Xiaobo
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (08) : 12110 - 12114