Joint routing and computation offloading based deep reinforcement learning for Flying Ad hoc Networks

Cited by: 0
Authors
Lin, Na [1 ]
Huang, Jinjiao [1 ]
Hawbani, Ammar [1 ]
Zhao, Liang [1 ]
Tang, Hailun [1 ]
Guan, Yunchong [1 ]
Sun, Yunhe [1 ]
Affiliations
[1] Shenyang Aerosp Univ, Sch Comp Sci, Shenyang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Unmanned Aerial Vehicles (UAVs); Computation offloading; Routing; Flying Ad-hoc Networks (FANETs); RESOURCE-ALLOCATION; UAV; OPTIMIZATION; DESIGN;
DOI
10.1016/j.comnet.2024.110514
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Flying ad-hoc networks (FANETs) consisting of multiple Unmanned Aerial Vehicles (UAVs) are widely used due to their flexibility and low cost. In scenarios such as crowdsensing and data collection, data collected by UAVs are transmitted to base stations for processing and then sent to data centers. However, the deployment of base stations is costly and inflexible. To address this issue, this paper introduces a position-based Computing First Routing (CFR) protocol designed for efficient task transmission and computation offloading in FANETs. This protocol facilitates task processing during data transfer and ensures the delivery of fully processed results to the data center. Considering the dynamically changing topology of FANETs and the uneven distribution of the UAVs' computation power, deep reinforcement learning is used to make multi-objective decisions based on the Q-values computed by the model. FANETs are decentralized clusters, and two-hop neighbor tables containing position and computing power information are used to make less costly decisions. Simulation experiments demonstrate that CFR outperforms other benchmark schemes with an approximately 6% higher packet delivery rate, an approximately 21% reduction in end-to-end delay, and about a 34% decrease in total cost. Furthermore, it effectively ensures the completion of task offloading before reaching the destination node. This is achieved through a hierarchical reward function that accounts for dynamic changes in delay and energy consumption, together with the injection of neighbor computing power information into the two-hop neighbor table.
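The abstract outlines the core mechanism: each UAV maintains a two-hop neighbor table carrying position and computing-power information, and a deep reinforcement learning model produces Q-values that drive a joint next-hop and offloading decision under a delay-and-energy reward. The Python sketch below illustrates that idea only; the data structure, field names, weights, and the hand-written scoring function are assumptions standing in for the paper's learned Q-values, not the actual CFR implementation.

from dataclasses import dataclass

@dataclass
class NeighborEntry:
    # Hypothetical two-hop neighbor table entry; field names are illustrative.
    uav_id: int
    position: tuple            # (x, y, z) coordinates in metres
    compute_power: float       # available CPU cycles per second
    residual_energy: float     # remaining energy in joules
    two_hop_ids: tuple = ()    # neighbors reachable through this UAV (two-hop info)

def estimate_q(entry: NeighborEntry, task_cycles: float, dist_to_dest: float,
               w_delay: float = 0.6, w_energy: float = 0.4) -> float:
    """Toy Q-value proxy: favor neighbors that shorten the path to the data
    center, can absorb more of the task's computation, and have energy to
    spare. In CFR these values come from a trained DRL model with a
    hierarchical reward over delay and energy, not from this formula."""
    compute_gain = min(1.0, entry.compute_power / task_cycles)
    delay_term = 1.0 / (1.0 + dist_to_dest)
    energy_term = entry.residual_energy / (entry.residual_energy + 1.0)
    return w_delay * delay_term + w_energy * energy_term + compute_gain

def select_next_hop(table: list, task_cycles: float, dest_pos: tuple) -> NeighborEntry:
    """Pick the neighbor with the highest estimated Q-value."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return max(table, key=lambda e: estimate_q(e, task_cycles, dist(e.position, dest_pos)))

# Example usage with made-up values: two candidate neighbors, one closer to the
# destination, one with more spare computation.
table = [
    NeighborEntry(2, (120.0, 40.0, 50.0), 2e9, 80.0),
    NeighborEntry(3, (60.0, 30.0, 45.0), 5e8, 120.0),
]
best = select_next_hop(table, task_cycles=1e9, dest_pos=(0.0, 0.0, 0.0))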
Pages: 12
Related papers
50 records in total
  • [1] Reinforcement Learning-Based Routing Protocols in Flying Ad Hoc Networks (FANET): A Review
    Lansky, Jan
    Ali, Saqib
    Rahmani, Amir Masoud
    Yousefpoor, Mohammad Sadegh
    Yousefpoor, Efat
    Khan, Faheem
    Hosseinzadeh, Mehdi
    MATHEMATICS, 2022, 10 (16)
  • [2] Reinforcement learning for routing in ad hoc networks
    Nurmi, Petteri
    2007 5TH INTERNATIONAL SYMPOSIUM ON MODELING AND OPTIMIZATION IN MOBILE, AD HOC AND WIRELESS NETWORKS AND WORKSHOPS, VOLS 1-2, 2007, : 200 - 207
  • [3] REINFORCEMENT LEARNING-BASED ROUTING PROTOCOLS IN VEHICULAR AND FLYING AD HOC NETWORKS - A LITERATURE SURVEY
    Bugarcic, Pavle
    Jevtic, Nenad
    Malnar, Marija
    PROMET-TRAFFIC & TRANSPORTATION, 2022, 34 (06): : 893 - 906
  • [4] New Bargaining Game Based Computation Offloading Scheme for Flying Ad-hoc Networks
    Kim, Sungwook
    IEEE ACCESS, 2019, 7 : 147038 - 147047
  • [5] Deep reinforcement learning enhanced skeleton based pipe routing for high-throughput transmission in flying ad-hoc networks
    Toorchi, Niloofar
    Lyu, Weiqiang
    He, Linsheng
    Zhao, Jiamiao
    Rasheed, Iftikhar
    Hu, Fei
    COMPUTER NETWORKS, 2024, 244
  • [6] On the Routing in Flying Ad hoc Networks
    Tareque, Md. Hasan
    Hossain, Md. Shohrab
    Atiquzzaman, Mohammed
    PROCEEDINGS OF THE 2015 FEDERATED CONFERENCE ON COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2015, 5 : 1 - 9
  • [7] Intelligent Vehicle Computation Offloading in Vehicular Ad Hoc Networks: A Multi-Agent LSTM Approach with Deep Reinforcement Learning
    Sun, Dingmi
    Chen, Yimin
    Li, Hao
    MATHEMATICS, 2024, 12 (03)
  • [8] Joint Offloading, Communication and Collaborative Computation Using Deep Reinforcement Learning in MEC Networks
    Nie, Xuefang
    Chen, Xingbang
    Zhang, DingDing
    Zhou, Tianqing
    Zhang, Jiliang
    2023 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA, ICCC WORKSHOPS, 2023
  • [9] A joint task caching and computation offloading scheme based on deep reinforcement learning
    Tian, Huizi
    Zhu, Lin
    Tan, Long
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2025, 18 (1)
  • [10] A Deep Reinforcement Learning based Offloading Scheme in Ad-hoc Mobile Clouds
    Van Le, Duc
    Tham, Chen-Khong
    IEEE INFOCOM 2018 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2018, : 760 - 765