Dynamic Reinforcement Learning based Scheduling for Energy-Efficient Edge-Enabled LoRaWAN

Cited: 3
Authors
Mhatre, Jui [1 ]
Lee, Ahyoung [1 ]
Affiliations
[1] Kennesaw State University, Department of Computer Science, Marietta, GA 30060 USA
Keywords
LoRaWAN; reinforcement learning; energy consumption; spread factor; scheduling; edge computing
DOI
10.1109/IPCCC55026.2022.9894340
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Long Range Wide Area Network (LoRaWAN) is well suited to wide-area sensor networks due to its low cost, long range, and low energy consumption. A device can transmit without interference if it chooses a combination of channel, spread factor, and transmission power that differs from every other transmitting device in the network. In a dense network, however, the probability of interference increases because the number of devices exceeds the number of unique combinations, forcing retransmissions after collisions until a transmission succeeds and thereby increasing device energy consumption. In this poster, we present a Deep Deterministic Policy Gradient (DDPG) reinforcement learning-based scheduling algorithm that improves energy efficiency through collision avoidance in a dense LoRaWAN network. We support our proposition with evaluation results showing reduced energy consumption.
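The abstract describes a DDPG-based scheduler that assigns channel, spread factor, and transmission power to devices so as to avoid collisions and reduce transmit energy. As a rough, hypothetical illustration only (the paper's actual state, action, and reward design is not given in the abstract), the Python sketch below sets up a toy scheduling environment: the state is assumed to be per-device queue lengths, the action is one (channel, SF, power) triple per device, and the reward is the negative transmit energy minus an arbitrary collision penalty. The time-on-air helper follows the standard Semtech approximation; the class name LoRaSchedulingEnv, the constants, and the penalty weight are all assumptions, and the random policy merely stands in for a trained DDPG actor (which would output continuous actions mapped onto these discrete choices).

# Hypothetical sketch, not the authors' implementation: a toy LoRaWAN
# scheduling environment that a DDPG agent could be trained against.
import random

CHANNELS = 8                           # assumed number of uplink channels
SPREAD_FACTORS = [7, 8, 9, 10, 11, 12]
TX_POWERS_DBM = [2, 5, 8, 11, 14]      # assumed allowed transmit power levels

def toa_ms(sf, payload_bytes=20, bw_khz=125, cr=1, preamble=8):
    # Approximate LoRa time-on-air in ms (Semtech formula, explicit header, CRC on).
    t_sym = (2 ** sf) / bw_khz
    num = 8 * payload_bytes - 4 * sf + 28 + 16
    n_payload = 8 + max(0, -(-num // (4 * sf)) * (cr + 4))   # ceiling division
    return (preamble + 4.25 + n_payload) * t_sym

def tx_energy_mj(sf, power_dbm):
    # Transmit energy in mJ: RF output power (mW) times time-on-air (s).
    return (10 ** (power_dbm / 10)) * toa_ms(sf) / 1000.0

class LoRaSchedulingEnv:
    # One scheduling round: the agent assigns (channel, SF, power) to every device.
    def __init__(self, num_devices=50):
        self.num_devices = num_devices

    def reset(self):
        # Assumed state: per-device uplink queue lengths known to the edge server.
        self.queues = [random.randint(0, 5) for _ in range(self.num_devices)]
        return list(self.queues)

    def step(self, assignments):
        # assignments: list of (channel, sf, power_dbm) tuples, one per device.
        energy = sum(tx_energy_mj(sf, p) for _, sf, p in assignments)
        used, collisions = {}, 0
        for ch, sf, _ in assignments:
            collisions += used.get((ch, sf), 0)   # reusing a (channel, SF) pair collides
            used[(ch, sf)] = used.get((ch, sf), 0) + 1
        reward = -energy - 10.0 * collisions      # penalty weight is an assumption
        return list(self.queues), reward, True, {"collisions": collisions}

# Random policy standing in for the trained DDPG actor.
env = LoRaSchedulingEnv(num_devices=20)
state = env.reset()
action = [(random.randrange(CHANNELS),
           random.choice(SPREAD_FACTORS),
           random.choice(TX_POWERS_DBM)) for _ in state]
_, reward, _, info = env.step(action)
print(f"reward = {reward:.2f}, collisions = {info['collisions']}")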
Pages: 2