Deep Reinforcement Learning for Energy-Efficient Data Dissemination Through UAV Networks

Cited by: 0
Authors
Ali, Abubakar S. [1 ]
Al-Habob, Ahmed A. [2 ]
Naser, Shimaa [1 ]
Bariah, Lina [3 ]
Dobre, Octavia A. [2 ]
Muhaidat, Sami [1 ,4 ]
Affiliations
[1] Khalifa Univ, KU 6G Res Ctr, Dept Comp & Informat Engn, Abu Dhabi, U Arab Emirates
[2] Mem Univ, Dept Elect & Comp Engn, St John's, NL A1C 5S7, Canada
[3] Technol Innovat Inst, Abu Dhabi, U Arab Emirates
[4] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
Keywords
Autonomous aerial vehicles; Internet of Things; Data dissemination; Optimization; Energy consumption; Heuristic algorithms; Energy efficiency; deep learning; Internet-of-Things (IoT); reinforcement learning (RL); unmanned aerial vehicle (UAV); SENSOR NETWORKS; MANAGEMENT; INTERNET;
DOI
10.1109/OJCOMS.2024.3398718
Chinese Library Classification: TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes: 0808; 0809
Abstract
The rise of the Internet of Things (IoT), marked by unprecedented growth in connected devices, has created an insatiable demand for supplementary computational and communication resources. The integration of unmanned aerial vehicles (UAVs) within IoT ecosystems presents a promising avenue to surmount these obstacles, offering enhanced network coverage, agile deployment, and efficient data gathering from geographically challenging locales. Despite these benefits, UAV technology faces significant challenges, including limited energy resources, the necessity for adaptive responses to dynamic environments, and the imperative for autonomous operation to meet the evolving demands of IoT networks. In light of this, we introduce a UAV-assisted data dissemination framework that minimizes the total energy expenditure of both the UAV and all spatially distributed IoT devices. Our framework addresses three interconnected subproblems: device classification, device association, and path planning. For device classification, we employ two distinct types of deep reinforcement learning (DRL) agents, Double Deep Q-Network (DDQN) and Proximal Policy Optimization (PPO), to classify devices into two tiers. For device association, we propose an approach based on the nearest-neighbor heuristic to associate Tier 2 devices with a Tier 1 device. For path planning, we propose an approach that uses the Lin-Kernighan heuristic to plan the UAV's path among the Tier 1 devices. We compare our method with three baseline approaches and demonstrate through simulation results that it significantly reduces energy consumption and offers a near-optimal solution in a fraction of the time required by brute-force methods and ant colony heuristics. Consequently, our framework presents an efficient and practical alternative for energy-efficient data dissemination in UAV-assisted IoT networks.
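The association and path-planning stages described in the abstract can be illustrated with a minimal sketch. The function names, coordinates, and the use of plain 2-opt in place of the full Lin-Kernighan heuristic are assumptions for illustration, not the authors' implementation; the DRL classification stage is taken as given (a tier split is supplied as input).

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def associate_nearest(tier1, tier2):
    """Nearest-neighbor device association: map each Tier 2 device
    to the index of its closest Tier 1 device."""
    return {j: min(range(len(tier1)), key=lambda i: dist(d, tier1[i]))
            for j, d in enumerate(tier2)}

def two_opt_tour(points, start=(0.0, 0.0)):
    """Plan a visiting order over Tier 1 devices: greedy nearest-neighbor
    construction followed by 2-opt improvement (a simpler stand-in for
    the Lin-Kernighan heuristic used in the paper)."""
    remaining = set(range(len(points)))
    tour, cur = [], start
    while remaining:  # greedy construction from the UAV's start position
        nxt = min(remaining, key=lambda i: dist(cur, points[i]))
        tour.append(nxt)
        remaining.remove(nxt)
        cur = points[nxt]

    def length(t):  # open tour length starting at the UAV's start position
        total = dist(start, points[t[0]])
        return total + sum(dist(points[a], points[b]) for a, b in zip(t, t[1:]))

    improved = True
    while improved:  # 2-opt: reverse segments while doing so shortens the tour
        improved = False
        for i in range(len(tour) - 1):
            for k in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]
                if length(cand) < length(tour) - 1e-9:
                    tour, improved = cand, True
    return tour
```

Given a tier split, `associate_nearest` yields the Tier 2 to Tier 1 mapping and `two_opt_tour` the UAV's visiting order over Tier 1 devices; Lin-Kernighan explores richer k-opt moves than this 2-opt loop but follows the same local-improvement idea.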
Pages: 5567-5583 (17 pages)