QLAR: A Q-Learning based Adaptive Routing for MANETs

Cited by: 0
Authors
Serhani, Abdellatif [1 ]
Naja, Najib [1 ]
Jamali, Abdellah [2 ]
Affiliations
[1] Univ Mohammed 5, INPT RAI2S, Rabat, Morocco
[2] Univ Hassan I, EST, RI2M, Berrechid, Morocco
Keywords
MANETs; Reinforcement Learning; Q-Learning; Adaptive Routing; Optimized Link State Routing; ETX; Ad-hoc Networks
DOI: Not available
CLC Number: TP39 [Computer Applications]
Discipline Code: 081203; 0835
Abstract
Mobile Ad-hoc Networks (MANETs) are highly reconfigurable networks of mobile nodes that communicate over wireless links. The main issues in MANETs include node mobility, limited energy, and limited bandwidth, so routing protocols should explicitly account for network changes in their algorithm design. To support the service requirements of multimedia and real-time applications, the routing protocol must provide Quality of Service (QoS) in terms of packet loss and average End-to-End Delay (ETED). This work proposes a Q-Learning based Adaptive Routing model (QLAR), developed via Reinforcement Learning (RL) techniques, which can detect the level of mobility at different points in time so that each node can update its routing metric accordingly. The proposed protocol introduces: (i) a new model, developed via Q-Learning, that detects the level of mobility at each node in the network; and (ii) a new metric, called Qmetric, which combines static and dynamic routing metrics and is updated as the network topology changes. The proposed metric and routing model are deployed on the Optimized Link State Routing (OLSR) protocol. Extensive simulations, through comparisons with the standard OLSR protocol, validate the effectiveness of the proposed model.
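To make the idea concrete, the following is a minimal sketch of a Q-learning style update for a per-link routing metric that blends a static cost (e.g., ETX) with a dynamic mobility penalty. The parameter names, the reward definition, and the combination rule are illustrative assumptions, not the paper's exact QLAR formulation.

# Illustrative sketch only: a generic Q-learning update for a per-link
# routing metric. ALPHA, GAMMA, the reward, and the combination of static
# and dynamic costs are assumptions made for illustration.

ALPHA = 0.5   # learning rate
GAMMA = 0.8   # discount factor

def update_q_metric(q, link, static_cost, mobility_penalty, best_next_q):
    """Update the learned Q-metric estimate for one link.

    q                -- dict mapping link -> current Q-metric estimate
    static_cost      -- static routing cost of the link (e.g., ETX)
    mobility_penalty -- dynamic cost reflecting the detected mobility level
    best_next_q      -- best Q-metric advertised by the neighbor toward the destination
    """
    reward = -(static_cost + mobility_penalty)          # lower cost -> higher reward
    old = q.get(link, 0.0)
    q[link] = old + ALPHA * (reward + GAMMA * best_next_q - old)
    return q[link]

# Example: neighbor B advertises a Q-metric of -2.0 toward the destination;
# the A->B link has ETX 1.5 and a mobility penalty of 0.4.
q_table = {}
print(update_q_metric(q_table, ("A", "B"), 1.5, 0.4, -2.0))

In such a scheme, a node would recompute the metric whenever link-state or mobility estimates change, and the resulting Q-metric could then be fed into OLSR's route selection in place of hop count.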
Pages: 7