A Multi-Agent Reinforcement Learning-Based Optimized Routing for QoS in IoT

Cited by: 5
Authors
Jeaunita, T. C. Jermin [1,2]
Sarasvathi, V. [1,2]
Affiliations
[1] PESIT, Bangalore South Campus, Bangalore, Karnataka, India
[2] Visvesvaraya Technol Univ, Belagavi, Karnataka, India
Keywords
QoS routing; multi-agent system; Internet of Things (IoT); reinforcement learning; RPL routing;
DOI
10.2478/cait-2021-0042
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The Routing Protocol for Low power and Lossy networks (RPL) is used as the routing protocol in IoT applications. To provide an optimized approach to Quality of Service (QoS) routing for high-volume IoT data transmissions, this paper proposes a machine-learning-based routing algorithm in a multi-agent environment. The overall routing process is divided into two phases: route discovery and route maintenance. The route discovery (path-finding) phase is performed using rank calculation and Q-routing, where Q-routing applies the Q-Learning reinforcement learning approach to select the next-hop node; the proposed protocol first creates a Destination Oriented Directed Acyclic Graph (DODAG) using Q-Learning. The second phase is route maintenance, for which the paper proposes an approach that, as the simulations show, considerably reduces control overhead and yields lower delay in routing convergence.
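The per-node Q-routing step the abstract describes (Q-Learning used to select the next-hop node) can be sketched as follows. This is a minimal illustrative sketch of classic tabular Q-routing, not the paper's implementation; the class name, the ε-greedy exploration, and the cost values are assumptions introduced here for illustration.

```python
import random


class QRouter:
    """One node's tabular Q-routing agent (illustrative sketch).

    Q[(dest, next_hop)] estimates the cost (e.g., delay or hop count)
    of delivering a packet to `dest` by forwarding it to `next_hop`.
    """

    def __init__(self, neighbors, alpha=0.5, epsilon=0.1):
        self.neighbors = list(neighbors)
        self.alpha = alpha          # learning rate
        self.epsilon = epsilon      # exploration probability
        self.q = {}                 # (dest, next_hop) -> estimated cost

    def select_next_hop(self, dest):
        # ε-greedy: usually exploit the lowest estimated cost,
        # occasionally explore a random neighbor.
        if random.random() < self.epsilon:
            return random.choice(self.neighbors)
        return min(self.neighbors, key=lambda n: self.q.get((dest, n), 0.0))

    def update(self, dest, next_hop, link_cost, neighbor_best_estimate):
        # Q-routing update: move the estimate toward the observed link
        # cost plus the neighbor's best remaining cost to the destination.
        old = self.q.get((dest, next_hop), 0.0)
        target = link_cost + neighbor_best_estimate
        self.q[(dest, next_hop)] = old + self.alpha * (target - old)
```

Each node updates its estimate after forwarding a packet, using the chosen neighbor's reported best remaining cost; over repeated transmissions the per-node estimates converge toward the cheapest paths, which is what lets the DODAG construction favor low-cost next hops.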
Pages: 45-61
Page count: 17
Related Papers
50 records in total
  • [1] Multi-Agent Deep Reinforcement Learning-Based Algorithm For Fast Generalization On Routing Problems
    Barbahan, Ibraheem
    Baikalov, Vladimir
    Vyatkin, Valeriy
    Filchenkov, Andrey
    10TH INTERNATIONAL YOUNG SCIENTISTS CONFERENCE IN COMPUTATIONAL SCIENCE (YSC2021), 2021, 193: 228-238
  • [2] A reinforcement learning-based multi-agent framework applied for solving routing and scheduling problems
    Lopes Silva, Maria Amelia
    de Souza, Sergio Ricardo
    Freitas Souza, Marcone Jamilson
    Bazzan, Ana Lucia C.
    EXPERT SYSTEMS WITH APPLICATIONS, 2019, 131: 148-171
  • [3] Distributed localization for IoT with multi-agent reinforcement learning
    Jia, Jie
    Yu, Ruoying
    Du, Zhenjun
    Chen, Jian
    Wang, Qinghu
    Wang, Xingwei
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (09): 7227-7240
  • [5] Multi-Agent Reinforcement Learning-Based Routing Protocol for Underwater Wireless Sensor Networks With Value of Information
    Wang, Chao
    Shen, Xiaohong
    Wang, Haiyan
    Xie, Weiliang
    Zhang, Hongwei
    Mei, Haodi
    IEEE SENSORS JOURNAL, 2024, 24 (05): 7042-7054
  • [6] Backdoor Attacks on Multi-Agent Reinforcement Learning-based Spectrum Management
    Zhang, Hongyi
    Liu, Mingqian
    Chen, Yunfei
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023: 3361-3365
  • [7] Multi-Agent Reinforcement Learning-Based Resource Allocation for UAV Networks
    Cui, Jingjing
    Liu, Yuanwei
    Nallanathan, Arumugam
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2020, 19 (02): 729-743
  • [8] Multi-Agent Reinforcement Learning-Based Distributed Dynamic Spectrum Access
    Albinsaid, Hasan
    Singh, Keshav
    Biswas, Sudip
    Li, Chih-Peng
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2022, 8 (02): 1174-1185
  • [9] Scalable Multi-Agent Reinforcement Learning-Based Distributed Channel Access
    Chen, Zhenyu
    Sun, Xinghua
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 453-458
  • [10] Multi-agent Reinforcement Learning-Based UAS Control for Logistics Environments
    Jo, Hyungeun
    Lee, Hoeun
    Jeon, Sangwoo
    Kaliappan, Vishnu Kumar
    Nguyen, Tuan Anh
    Min, Dugki
    Lee, Jae-Woo
    PROCEEDINGS OF THE 2021 ASIA-PACIFIC INTERNATIONAL SYMPOSIUM ON AEROSPACE TECHNOLOGY (APISAT 2021), VOL 2, 2023, 913: 963-972