MODELING THE ENVIRONMENT IN DEEP REINFORCEMENT LEARNING: THE CASE OF ENERGY HARVESTING BASE STATIONS

Cited by: 0
Authors
Piovesan, Nicola [1 ]
Miozzo, Marco [1 ]
Dini, Paolo [1 ]
Affiliations
[1] CTTC CERCA, Av Carl Friedrich Gauss 7, Barcelona 08860, Spain
Funding
EU Horizon 2020
Keywords
reinforcement learning; deep learning; energy sustainability; mobile networks;
DOI
10.1109/icassp40776.2020.9054646
Chinese Library Classification
O42 [Acoustics]
Discipline Classification Codes
070206; 082403
Abstract
In this paper, we focus on the design of energy self-sustainable mobile networks by enabling intelligent energy management that allows the base stations to operate mostly off-grid using renewable energy. We propose a centralized control algorithm based on Deep Reinforcement Learning. A single agent learns how to efficiently balance the energy inflow and expenditure among base stations by observing the environment and interacting with it. In particular, we study the performance achieved by this approach under different representations of the environment. Numerical results demonstrate that choosing a good level of abstraction for the representation variables enables a proper mapping of the environment into the actions to take, so as to maximize the numerical reward.
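As a minimal illustrative sketch, not taken from the paper: the agent described above maps an abstracted representation of the environment into energy-management actions. The snippet below assumes a toy single-base-station setting whose state variables (battery level, harvested energy, traffic load) and their discretization are hypothetical choices made here for illustration, and it substitutes tabular Q-learning for the authors' deep network, only to show where the choice of environment representation (the encode function) enters the learning loop.

```python
# Illustrative sketch only (not the authors' code): a tabular Q-learning agent
# managing the battery of a single energy-harvesting base station. The state
# representation below (battery, harvest, traffic buckets) is a hypothetical
# abstraction chosen for this example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization levels of the environment representation.
N_BATTERY, N_HARVEST, N_TRAFFIC = 5, 3, 3
ACTIONS = [0.2, 0.5, 1.0]          # fraction of full base-station power used

Q = np.zeros((N_BATTERY, N_HARVEST, N_TRAFFIC, len(ACTIONS)))

def step(battery, harvest, traffic, action_idx):
    """Toy dynamics: battery evolves with harvested energy minus spent energy."""
    spent = ACTIONS[action_idx]
    new_battery = np.clip(battery + 0.3 * harvest - spent, 0.0, 1.0)
    # Reward favors serving traffic while avoiding an empty battery (grid usage).
    served = min(ACTIONS[action_idx], traffic)
    reward = served - (1.0 if new_battery == 0.0 else 0.0)
    return new_battery, reward

def encode(battery, harvest, traffic):
    """Map continuous observations to the discrete representation (abstraction)."""
    b = min(int(battery * N_BATTERY), N_BATTERY - 1)
    h = min(int(harvest * N_HARVEST), N_HARVEST - 1)
    t = min(int(traffic * N_TRAFFIC), N_TRAFFIC - 1)
    return b, h, t

alpha, gamma, eps = 0.1, 0.95, 0.1
battery = 0.5
for _ in range(2000):
    harvest, traffic = rng.random(), rng.random()   # random solar/traffic conditions
    s = encode(battery, harvest, traffic)
    a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
    battery, r = step(battery, harvest, traffic, a)
    s_next = encode(battery, rng.random(), rng.random())
    # Standard Q-learning update.
    Q[s + (a,)] += alpha * (r + gamma * Q[s_next].max() - Q[s + (a,)])

print("Greedy action per (battery, harvest, traffic) bucket:")
print(np.argmax(Q, axis=-1))
```

In the paper's deep reinforcement learning setting, the Q-table would be replaced by a neural network acting on a richer representation, and the study concerns how the level of abstraction of that representation affects the achievable reward.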
Pages: 8996-9000
Page count: 5
Related Papers
50 records in total
  • [1] Qiu, Jin; Lyu, Jiangbin; Fu, Liqun. Placement Optimization of Aerial Base Stations with Deep Reinforcement Learning. ICC 2020 - 2020 IEEE International Conference on Communications (ICC), 2020.
  • [2] Jiang, T.; Xie, L.; Du, Y.; Ouyang, C. Dispatching Strategy of Energy Storage for 5G Base Stations Based on Deep Reinforcement Learning. Dianli Xitong Zidonghua/Automation of Electric Power Systems, 2023, 47(09): 147-157.
  • [3] Wu, Guanhan; Jia, Weimin; Zhao, Jianwei. Dynamic Deployment of Multi-UAV Base Stations with Deep Reinforcement Learning. Electronics Letters, 2021, 57(15): 600-602.
  • [4] Miozzo, Marco; Dini, Paolo. Layered Learning Radio Resource Management for Energy Harvesting Small Base Stations. 2018 IEEE 87th Vehicular Technology Conference (VTC Spring), 2018.
  • [5] Dongare, Sumedh; Ortiz, Andrea; Klein, Anja. Deep Reinforcement Learning for Task Allocation in Energy Harvesting Mobile Crowdsensing. 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022: 269-274.
  • [6] Ye, Junliang; Gharavi, Hamid. Deep Reinforcement Learning-Assisted Energy Harvesting Wireless Networks. IEEE Transactions on Green Communications and Networking, 2021, 5(02): 990-1002.
  • [7] Yao, Jingjing; Ansari, Nirwan. Wireless Power and Energy Harvesting Control in IoD by Deep Reinforcement Learning. IEEE Transactions on Green Communications and Networking, 2021, 5(02): 980-989.
  • [8] Guo, Kaiyang; Yang, Chenyang. Temporal-Spatial Recommendation for Caching at Base Stations via Deep Reinforcement Learning. IEEE Access, 2019, 7: 58519-58532.
  • [9] Do, Quang Vinh; Koo, Insoo. A Transfer Deep Q-Learning Framework for Resource Competition in Virtual Mobile Networks With Energy-Harvesting Base Stations. IEEE Systems Journal, 2021, 15(01): 319-330.
  • [10] Chen, Tao; Su, Wencong. Local Energy Trending Behavior Modeling With Deep Reinforcement Learning. IEEE Access, 2018, 6: 62806-62814.