Deep Reinforcement Learning for Energy-Efficient Data Dissemination Through UAV Networks

Cited: 0
Authors
Ali, Abubakar S. [1 ]
Al-Habob, Ahmed A. [2 ]
Naser, Shimaa [1 ]
Bariah, Lina [3 ]
Dobre, Octavia A. [2 ]
Muhaidat, Sami [1 ,4 ]
Affiliations
[1] Khalifa Univ, KU 6G Res Ctr, Dept Comp & Informat Engn, Abu Dhabi, U Arab Emirates
[2] Mem Univ, Dept Elect & Comp Engn, St John's, NF A1C 5S7, Canada
[3] Technol Innovat Inst, Abu Dhabi, U Arab Emirates
[4] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
Keywords
Autonomous aerial vehicles; Internet of Things; Data dissemination; Optimization; Energy consumption; Heuristic algorithms; Energy efficiency; deep learning; Internet-of-Things (IoT); reinforcement learning (RL); unmanned aerial vehicle (UAV); SENSOR NETWORKS; MANAGEMENT; INTERNET;
DOI
10.1109/OJCOMS.2024.3398718
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
The rise of the Internet of Things (IoT), marked by unprecedented growth in connected devices, has created an insatiable demand for additional computational and communication resources. The integration of unmanned aerial vehicles (UAVs) into IoT ecosystems offers a promising way to meet these demands, providing extended network coverage, agile deployment, and efficient data gathering from geographically challenging locales. Despite these benefits, UAV technology faces significant challenges, including limited energy resources, the need to adapt to dynamic environments, and the requirement for autonomous operation to fulfill the evolving demands of IoT networks. In light of this, we introduce a UAV-assisted data dissemination framework that minimizes the total energy expenditure of both the UAV and all spatially distributed IoT devices. Our framework addresses three interconnected subproblems: device classification, device association, and path planning. For device classification, we employ two distinct types of deep reinforcement learning (DRL) agents, Double Deep Q-Network (DDQN) and Proximal Policy Optimization (PPO), to classify devices into two tiers. For device association, we propose a nearest-neighbor heuristic that associates each Tier 2 device with a Tier 1 device. For path planning, we apply the Lin-Kernighan heuristic to plan the UAV's path among the Tier 1 devices. We compare our method with three baseline approaches and demonstrate through simulation results that it significantly reduces energy consumption and offers a near-optimal solution in a fraction of the time required by brute-force search and ant colony heuristics. Consequently, our framework presents an efficient and practical alternative for energy-efficient data dissemination in UAV-assisted IoT networks.
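The device-association and path-planning heuristics named in the abstract can be sketched as follows. This is an illustrative outline only, not the paper's implementation: the function names, the 2-D Euclidean coordinate model, and the use of simple 2-opt moves as a stand-in for the full Lin-Kernighan heuristic are all assumptions made for the sketch.

```python
import math

def associate_devices(tier1, tier2):
    """Nearest-neighbor association: assign each Tier 2 device to its
    closest Tier 1 device (Euclidean distance). Returns a dict mapping
    Tier 2 index -> Tier 1 index."""
    return {
        i: min(range(len(tier1)), key=lambda j: math.dist(p2, tier1[j]))
        for i, p2 in enumerate(tier2)
    }

def tour_length(points, order):
    """Length of the closed tour visiting `points` in the given order."""
    return sum(math.dist(points[order[k]], points[order[(k + 1) % len(order)]])
               for k in range(len(order)))

def two_opt(points):
    """Plan a tour over the Tier 1 devices with greedy 2-opt improvement
    (a simplified stand-in for the Lin-Kernighan heuristic): repeatedly
    reverse a tour segment whenever doing so shortens the tour."""
    order = list(range(len(points)))
    improved = True
    while improved:
        improved = False
        for a in range(1, len(order) - 1):
            for b in range(a + 1, len(order)):
                candidate = order[:a] + order[a:b + 1][::-1] + order[b + 1:]
                if tour_length(points, candidate) < tour_length(points, order):
                    order, improved = candidate, True
    return order
```

A crossing tour such as (0,0) -> (1,1) -> (1,0) -> (0,1) is uncrossed by a single segment reversal, illustrating why 2-opt (and, more aggressively, Lin-Kernighan) reaches near-optimal tours quickly compared with brute-force enumeration.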
Pages: 5567 - 5583
Page count: 17
Related Papers
50 records in total
  • [1] Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach
    Abedin, Sarder Fakhrul
    Munir, Md Shirajum
    Tran, Nguyen H.
    Han, Zhu
    Hong, Choong Seon
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (09) : 5994 - 6006
  • [2] Deep Reinforcement Learning for Energy-Efficient Fresh Data Collection in Rechargeable UAV-assisted IoT Networks
    Yi, Mengjie
    Wang, Xijun
    Liu, Juan
    Zhang, Yan
    Hou, Ronghui
    2023 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC, 2023,
  • [3] Deep Reinforcement Learning for Energy-Efficient Federated Learning in UAV-Enabled Wireless Powered Networks
    Do, Quang Vinh
    Pham, Quoc-Viet
    Hwang, Won-Joo
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (01) : 99 - 103
  • [4] Delay-Sensitive Energy-Efficient UAV Crowdsensing by Deep Reinforcement Learning
    Dai, Zipeng
    Liu, Chi Harold
    Han, Rui
    Wang, Guoren
    Leung, Kin K. K.
    Tang, Jian
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (04) : 2038 - 2052
  • [5] Deep Reinforcement Learning for Energy-Efficient Power Control in Heterogeneous Networks
    Peng, Jianhao
    Zheng, Jiabao
    Zhang, Lin
    Xiao, Ming
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 141 - 146
  • [6] Deep Reinforcement Learning for Secrecy Energy-Efficient UAV Communication with Reconfigurable Intelligent Surface
    Tham, Mau-Luen
    Wong, Yi Jie
    Iqbal, Amjad
    Bin Ramli, Nordin
    Zhu, Yongxu
    Dagiuklas, Tasos
    2023 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC, 2023,
  • [7] Energy-Efficient UAV Trajectory Design for Backscatter Communication: A Deep Reinforcement Learning Approach
    Nie, Yiwen
    Zhao, Junhui
    Liu, Jun
    Jiang, Jing
    Ding, Ruijin
    CHINA COMMUNICATIONS, 2020, 17 (10) : 129 - 141
  • [9] Deep Reinforcement Learning Based Energy Efficient Multi-UAV Data Collection for IoT Networks
    Khodaparast, Seyed Saeed
    Lu, Xiao
    Wang, Ping
    Nguyen, Uyen Trang
    IEEE OPEN JOURNAL OF VEHICULAR TECHNOLOGY, 2021, 2 : 249 - 260
  • [10] Energy-Efficient Multidimensional Trajectory of UAV-Aided IoT Networks With Reinforcement Learning
    Silvirianti
    Shin, Soo Young
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (19): 19214 - 19226