NEURAL DISCRETE ABSTRACTION OF HIGH-DIMENSIONAL SPACES: A CASE STUDY IN REINFORCEMENT LEARNING

Cited by: 0
Authors
Giannakopoulos, Petros [1 ]
Pikrakis, Aggelos [2 ]
Cotronis, Yannis [1 ]
Affiliations
[1] Natl & Kapodistrian Univ Athens, Athens, Greece
[2] Univ Piraeus, Piraeus, Greece
Keywords
state abstraction; discrete representations; reinforcement learning; vector-quantized auto-encoder;
DOI
Not available
CLC Number
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
We employ Neural Discrete Representation Learning to map a high-dimensional state space, composed of raw video frames of a Reinforcement Learning agent's interactions with its environment, into a low-dimensional state space of learned discrete latent representations. We show experimentally that the discrete latents learned by the encoder of a Vector Quantized Auto-Encoder (VQ-AE), trained to reconstruct the raw video frames that make up the high-dimensional state space, serve as meaningful abstractions of clusters of correlated frames. A low-dimensional state space can then be constructed in which each state is a quantized vector encoding that represents a cluster of correlated frames of the high-dimensional state space. Experimental results on a 3D navigation task in a maze environment built in Minecraft demonstrate that this discrete mapping can be used in addition to, or in place of, the high-dimensional space to improve the agent's learning performance.
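The core operation the abstract relies on is the vector-quantization step of a VQ-AE: the encoder's continuous output for a frame is snapped to its nearest codebook entry, and the entry's index becomes the discrete abstract state. The following is a minimal sketch of that step only, using toy NumPy values; the codebook here is random for illustration, whereas in the paper it is learned jointly with the encoder and decoder, and the function name `quantize` is ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 8, 4                         # codebook size, latent dimension
codebook = rng.normal(size=(K, D))  # stand-in for learned embeddings e_1..e_K

def quantize(z_e, codebook):
    """Map a continuous encoder output z_e to its nearest codebook entry.

    Returns the discrete index k (the abstract state id) and the
    quantized vector e_k that stands in for z_e downstream.
    """
    # Squared Euclidean distance from z_e to every codebook vector.
    dists = np.sum((codebook - z_e) ** 2, axis=1)
    k = int(np.argmin(dists))
    return k, codebook[k]

# Stand-in for the encoder's output for one raw video frame.
z_e = rng.normal(size=D)
k, z_q = quantize(z_e, codebook)
print(k, z_q.shape)
```

Under this scheme, frames whose encodings fall near the same codebook vector collapse to the same index `k`, which is how clusters of correlated frames become single states of the low-dimensional space.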
Pages: 1517-1521
Page count: 5