Dynamic Reinforcement Learning based Scheduling for Energy-Efficient Edge-Enabled LoRaWAN

Cited by: 3
Authors
Mhatre, Jui [1 ]
Lee, Ahyoung [1 ]
Affiliations
[1] Kennesaw State Univ, Dept Comp Sci, Marietta, GA 30060 USA
Keywords
LoRaWAN; reinforcement learning; energy consumption; spread factor; scheduling; edge computing;
DOI
10.1109/IPCCC55026.2022.9894340
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Long Range Wide Area Network (LoRaWAN) is well suited to wide-area sensor networks due to its low cost, long range, and low energy consumption. A device can transmit without interference if it chooses a channel, spreading factor, and transmission power combination that differs from those of every other transmitting device in the network. In a dense network, however, the probability of interference increases because the number of devices exceeds the number of unique combinations, forcing colliding devices to retransmit until their packets are delivered successfully. As a result, device energy consumption rises. In this poster, we present a Deep Deterministic Policy Gradient (DDPG) reinforcement-learning-based scheduling algorithm that improves energy efficiency through collision avoidance in a dense LoRaWAN network. We support our proposition with evaluation results showing reduced energy consumption.
Pages: 2
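
The scheduling problem outlined in the abstract can be made concrete with a small sketch. The toy model below is an illustrative assumption, not the authors' implementation: it assigns a (channel, spreading factor, transmit power) triple to each end device, treats devices whose channel/spreading-factor pair is reused in the same slot as colliding, and scores an assignment by negative energy minus a collision penalty. In the paper a DDPG agent would learn this assignment policy; here a simple greedy policy stands in for it, and all constants (device count, energy model, penalty weight) are illustrative.

```python
# Minimal illustrative sketch (not the authors' DDPG implementation): a toy
# LoRaWAN slot-scheduling model in which an agent assigns (channel, spreading
# factor, transmit power) to each end device. Devices sharing the same
# channel and spreading factor in a slot are treated as colliding and would
# have to retransmit, which costs extra energy. All constants are assumptions.

import itertools
import random

NUM_CHANNELS = 8                      # assumed number of uplink channels
SPREAD_FACTORS = [7, 8, 9, 10, 11, 12]
TX_POWERS_DBM = [2, 8, 14]

def tx_energy(sf: int, power_dbm: int) -> float:
    """Simplified per-transmission energy: airtime grows with the spreading
    factor, radiated energy grows with transmit power (illustrative scaling)."""
    airtime = 2 ** (sf - 7)          # relative airtime
    return airtime * (1.0 + power_dbm / 14.0)

def collisions(assignment) -> int:
    """Count devices whose (channel, SF) pair is not unique in this slot."""
    seen = {}
    for dev, (ch, sf, _p) in assignment.items():
        seen.setdefault((ch, sf), []).append(dev)
    return sum(len(devs) for devs in seen.values() if len(devs) > 1)

def reward(assignment, collision_penalty: float = 5.0) -> float:
    """Negative energy minus a penalty per colliding device (to be maximized)."""
    energy = sum(tx_energy(sf, p) for (_ch, sf, p) in assignment.values())
    return -(energy + collision_penalty * collisions(assignment))

def greedy_schedule(devices):
    """Stand-in policy: hand out low-airtime, non-conflicting pairs first."""
    pairs = sorted(itertools.product(range(NUM_CHANNELS), SPREAD_FACTORS),
                   key=lambda cs: cs[1])          # prefer low SF (short airtime)
    assignment = {}
    for i, dev in enumerate(devices):
        ch, sf = pairs[i % len(pairs)]            # reuse pairs only when forced
        assignment[dev] = (ch, sf, min(TX_POWERS_DBM))
    return assignment

if __name__ == "__main__":
    devices = [f"dev{i}" for i in range(60)]      # dense network: 60 devices, 48 pairs
    random_assignment = {d: (random.randrange(NUM_CHANNELS),
                             random.choice(SPREAD_FACTORS),
                             random.choice(TX_POWERS_DBM)) for d in devices}
    planned = greedy_schedule(devices)
    print("random policy reward :", round(reward(random_assignment), 1))
    print("greedy policy reward :", round(reward(planned), 1))
```

In the paper's setting, the reward signal above is what a DDPG actor-critic pair would optimize instead of the hand-written greedy rule, with the scheduler running at the edge gateway.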