The State of Sparse Training in Deep Reinforcement Learning

Citations: 0
Authors
Graesser, Laura [1 ,2 ]
Evci, Utku [2 ]
Elsen, Erich [3 ]
Castro, Pablo Samuel [2 ]
Affiliations
[1] Google Robotics, Mountain View, CA 94043 USA
[2] Google Research, Ottawa, ON, Canada
[3] Adept, San Francisco, CA USA
Keywords
Neural networks
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The use of sparse neural networks has seen rapid growth in recent years, particularly in computer vision. Their appeal stems largely from the reduced number of parameters required to train and store them, as well as from an increase in learning efficiency. Somewhat surprisingly, there have been very few efforts exploring their use in Deep Reinforcement Learning (DRL). In this work we perform a systematic investigation into applying a number of existing sparse training techniques to a variety of DRL agents and environments. Our results corroborate, in the DRL domain, the findings from sparse training in computer vision: sparse networks perform better than dense networks for the same parameter count. We provide detailed analyses of how the various components in DRL are affected by the use of sparse networks, and conclude by suggesting promising avenues for improving the effectiveness of sparse training methods, as well as for advancing their use in DRL.
Pages: 27
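The sparse training techniques the abstract refers to typically keep only a fixed fraction of weights active, most commonly by pruning the smallest-magnitude weights. The sketch below is purely illustrative and is not the paper's implementation: the function name, the 90% sparsity level, and the layer shape are all assumptions chosen for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a boolean mask keeping only the largest-magnitude weights.

    `sparsity` is the fraction of weights to zero out (e.g. 0.9 keeps 10%).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold

# Prune a single dense layer to ~90% sparsity.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
mask = magnitude_prune(W, 0.9)
W_sparse = W * mask

print(f"nonzero fraction: {(W_sparse != 0).mean():.2f}")
```

In dynamic sparse training methods such as those the paper surveys, a mask like this is not computed once but periodically updated during training, dropping low-magnitude connections and regrowing new ones under a fixed parameter budget.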