Throughput Maximization by Deep Reinforcement Learning With Energy Cooperation for Renewable Ultradense IoT Networks

Cited by: 13
Authors
Li, Ya [1 ]
Zhao, Xiaohui [1 ]
Liang, Hui [1 ]
Affiliations
[1] Jilin Univ, Coll Commun Engn, Changchun 130012, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2020, Vol. 7, No. 9
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; Internet of Things; Throughput; Resource management; Base stations; Energy harvesting; Renewable energy sources; Deep reinforcement learning (DRL); energy cooperation; energy harvesting (EH); Internet of Things (IoT); renewable ultradense networks (UDNs); throughput maximization; POWER ALLOCATION; DENSE NETWORKS; SYSTEMS; INTERNET;
DOI
10.1109/JIOT.2020.3002936
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The ultradense network (UDN) is considered one of the key technologies for meeting the explosive growth of mobile traffic demand on the Internet of Things (IoT). It enhances network capacity by deploying small base stations in large quantities, but doing so also incurs great energy consumption. In this article, we use energy harvesting (EH) and energy cooperation technologies to maximize system throughput and save energy. Considering that the energy arrival process and channel information are not available a priori, we propose an optimal deep reinforcement learning (DRL) algorithm to solve this average throughput maximization problem over a finite horizon. We also propose a multiagent DRL method to address the curse of dimensionality caused by the expansion of the state and action spaces. Finally, we compare these algorithms with two traditional baselines, the greedy algorithm and the conservative algorithm. The numerical results show that the proposed algorithms are effective in increasing the system's average throughput over the long term.
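The problem the abstract describes — an agent choosing how much harvested energy to spend per slot, with energy arrivals unknown a priori, to maximize average throughput — can be illustrated with a toy tabular Q-learning sketch. This is a simplified stand-in for the paper's DRL algorithm, not the authors' method; the battery capacity, action set, log-throughput reward, and energy-arrival distribution below are all assumptions made for illustration.

```python
import math
import random

random.seed(0)

B_MAX = 4            # battery capacity in energy units (assumed)
ACTIONS = [0, 1, 2]  # energy units to transmit per slot (assumed discretization)
EPISODES = 2000
HORIZON = 50         # finite horizon, as in the abstract
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Q-table over (battery level, action)
Q = [[0.0] * len(ACTIONS) for _ in range(B_MAX + 1)]

def step(battery, action):
    """Spend `action` energy (capped by the battery), earn a log-throughput
    reward, then harvest a random energy arrival unknown to the agent."""
    spend = min(action, battery)
    reward = math.log(1.0 + spend)       # Shannon-style throughput proxy
    harvest = random.choice([0, 1, 2])   # stochastic energy arrival
    next_battery = min(B_MAX, battery - spend + harvest)
    return next_battery, reward

for _ in range(EPISODES):
    b = B_MAX // 2
    for _ in range(HORIZON):
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[b][i])
        nb, r = step(b, a)
        # standard Q-learning temporal-difference update
        Q[b][a] += ALPHA * (r + GAMMA * max(Q[nb]) - Q[b][a])
        b = nb

policy_full = max(range(len(ACTIONS)), key=lambda i: Q[B_MAX][i])
print("learned action at full battery:", ACTIONS[policy_full])
```

The paper's DRL approach replaces this Q-table with a neural network so the method scales to continuous channel states and, in the multiagent variant, to the joint state-action space of many small base stations — exactly the dimensionality problem the abstract mentions.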
Pages: 9091-9102 (12 pages)
Related Papers
50 records
  • [31] Enhanced Group Influence Maximization in Social Networks Using Deep Reinforcement Learning
    Ghosh, Smita
    Chen, Tiantian
    Wu, Weili
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024
  • [32] Deep Reinforcement Learning Based Active Queue Management for IoT Networks
    Minsu Kim
    Muhammad Jaseemuddin
    Alagan Anpalagan
    Journal of Network and Systems Management, 2021, 29
  • [33] Energy efficient deep reinforcement learning approach to control the traffic flow in iot networks for smart city
    Mrinai M. Dhanvijay
    Shailaja C. Patil
    Journal of Ambient Intelligence and Humanized Computing, 2024, 15 (12) : 3945 - 3961
  • [34] Deep Reinforcement Learning Based Energy Efficient Multi-UAV Data Collection for IoT Networks
    Khodaparast, Seyed Saeed
    Lu, Xiao
    Wang, Ping
    Nguyen, Uyen Trang
    IEEE OPEN JOURNAL OF VEHICULAR TECHNOLOGY, 2021, 2 : 249 - 260
  • [35] Multipath Cooperative Routing in Ultradense LEO Satellite Networks: A Deep-Reinforcement-Learning-Based Approach
    Liu, Xiaoyu
    Zhou, Haibo
    Zhang, Zitian
    Gao, Qiangzhou
    Ma, Ting
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (02): 1789 - 1804
  • [36] Joint Optimization of Data Offloading and Resource Allocation With Renewable Energy Aware for IoT Devices: A Deep Reinforcement Learning Approach
    Ke, Hongchang
    Wang, Jian
    Wang, Hui
    Ge, Yuming
    IEEE ACCESS, 2019, 7 : 179349 - 179363
  • [37] Multi-Objective Optimization of Energy Saving and Throughput in Heterogeneous Networks Using Deep Reinforcement Learning
    Ryu, Kyungho
    Kim, Wooseong
    SENSORS, 2021, 21 (23)
  • [38] Minimum Throughput Maximization for Multi-UAV Enabled WPCN: A Deep Reinforcement Learning Method
    Tang, Jie
    Song, Jingru
    Ou, Junhui
    Luo, Jingci
    Zhang, Xiuyin
    Wong, Kai-Kit
    IEEE ACCESS, 2020, 8 : 9124 - 9132
  • [40] Throughput Maximization in Cooperation Based Symbiotic Cognitive Radio Networks
    Xue, Peng
    Gong, Peng
    Kim, Duk Kyung
    IEICE TRANSACTIONS ON COMMUNICATIONS, 2010, E93B (11) : 3207 - 3210