Deep reinforcement learning with significant multiplications inference

Cited: 0
Authors
Ivanov, Dmitry A. [1 ,2 ]
Larionov, Denis A. [2 ,3 ]
Kiselev, Mikhail V. [2 ,3 ]
Dylov, Dmitry V. [4 ,5 ]
Affiliations
[1] Lomonosov Moscow State Univ, GSP 1,Leninskie Gory, Moscow 119991, Russia
[2] Cifrum, 3 Kholodilnyy per, Moscow 115191, Russia
[3] Chuvash State Univ, 15 Moskovsky pr, Cheboksary 428015, Chuvash, Russia
[4] Skolkovo Inst Sci & Technol, 30 1 Bolshoi blvd, Moscow 121205, Russia
[5] Artificial Intelligence Res Inst, 32 1 Kutuzovsky pr, Moscow 121170, Russia
Funding
Russian Foundation for Basic Research;
DOI
10.1038/s41598-023-47245-y
CLC classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification
07; 0710; 09;
Abstract
We propose a sparse computation method for optimizing the inference of neural networks in reinforcement learning (RL) tasks. Motivated by the processing abilities of the brain, this method combines simple neural network pruning with a delta-network algorithm to account for the input data correlations. The former mimics neuroplasticity by eliminating inefficient connections; the latter makes it possible to update neuron states only when their changes exceed a certain threshold. This combination significantly reduces the number of multiplications during the neural network inference for fast neuromorphic computing. We tested the approach in popular deep RL tasks, yielding up to a 100-fold reduction in the number of required multiplications without substantial performance loss (sometimes, the performance even improved).
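The pruning-plus-delta-update scheme described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `DeltaPrunedLayer` class, the threshold value, and the prune ratio are assumptions chosen for the example.

```python
import numpy as np

class DeltaPrunedLayer:
    """Sketch of a linear layer combining weight pruning with delta updates:
    a neuron's contribution is recomputed only when its input has changed
    by more than a threshold since the last transmitted value."""

    def __init__(self, weights, threshold=0.05, prune_ratio=0.9):
        # Pruning: zero out the smallest-magnitude weights, mimicking the
        # elimination of inefficient connections.
        w = weights.copy()
        cutoff = np.quantile(np.abs(w), prune_ratio)
        w[np.abs(w) < cutoff] = 0.0
        self.w = w                            # (out, in) pruned weight matrix
        self.threshold = threshold
        self.last_x = np.zeros(w.shape[1])    # last transmitted input values
        self.y = np.zeros(w.shape[0])         # cached pre-activations
        self.mults = 0                        # running multiplication count

    def forward(self, x):
        delta = x - self.last_x
        active = np.abs(delta) > self.threshold   # inputs that changed enough
        cols = self.w[:, active]
        # Incremental update: y += W[:, active] @ delta[active]; only the
        # nonzero weights of the active columns contribute multiplications.
        self.y += cols @ delta[active]
        self.mults += np.count_nonzero(cols)
        self.last_x[active] = x[active]
        return self.y
```

On slowly varying inputs, typical of consecutive observations in RL environments, most deltas fall below the threshold, so the multiplication count grows far more slowly than it would for dense inference.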
Pages: 10
Related papers
50 total
  • [1] Deep reinforcement learning with significant multiplications inference
    Dmitry A. Ivanov
    Denis A. Larionov
    Mikhail V. Kiselev
    Dmitry V. Dylov
    Scientific Reports, 13
  • [2] Symbolic Task Inference in Deep Reinforcement Learning
    Hasanbeig, Hosein
    Jeppu, Natasha Yogananda
    Abate, Alessandro
    Melham, Tom
    Kroening, Daniel
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 80 : 1099 - 1137
  • [4] A CGRA based Neural Network Inference Engine for Deep Reinforcement Learning
    Liang, Minglan
    Chen, Mingsong
    Wang, Zheng
    Sun, Jingwei
    2018 IEEE ASIA PACIFIC CONFERENCE ON CIRCUITS AND SYSTEMS (APCCAS 2018), 2018, : 540 - 543
  • [5] Deep Reinforcement Learning Based Resource Management for DNN Inference in IIoT
    Zhang, Weiting
    Yang, Dong
    Peng, Haixia
    Wu, Wen
    Quan, Wei
    Zhang, Hongke
    Shen, Xuemin
    2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,
  • [6] Spatiotemporal Costmap Inference for MPC Via Deep Inverse Reinforcement Learning
    Lee, Keuntaek
    Isele, David
    Theodorou, Evangelos A.
    Bae, Sangjae
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02) : 3194 - 3201
  • [7] Substation Operation Sequence Inference Model Based on Deep Reinforcement Learning
    Chen, Tie
    Li, Hongxin
    Cao, Ying
    Zhang, Zhifan
    APPLIED SCIENCES-BASEL, 2023, 13 (13):
  • [8] Active Task-Inference-Guided Deep Inverse Reinforcement Learning
    Memarian, Farzan
    Xu, Zhe
    Wu, Bo
    Wen, Min
    Topcu, Ufuk
    2020 59TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2020, : 1932 - 1938
  • [9] Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution
    Shao, Yulin
    Rezaee, Arman
    Liew, Soung Chang
    Chan, Vincent W. S.
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [10] Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution
    Shao, Yulin
    Rezaee, Arman
    Liew, Soung Chang
    Chan, Vincent W. S.
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (10) : 2234 - 2248