Reinforcement Learning applied to Network Synchronization Systems

Cited by: 1
Authors
Destro, Alessandro [1 ]
Giorgi, Giada [1 ]
Affiliations
[1] Univ Padua, Dept Informat Engn, Padua, Italy
Keywords
reinforcement learning; synchronization system; clock servo; IEEE 1588; precision time protocol
DOI
10.1109/MN55117.2022.9887533
CLC number
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
The design of a suitable clock servo is a well-known problem in network-based synchronization systems. Several approaches can be found in the literature, typically based on PI controllers or Kalman filtering. These methods require thorough knowledge of the environment, i.e., the clock model, stability parameters, temperature variations, network traffic load, traffic profile, and so on. This a priori knowledge is needed to optimize the servo parameters, such as the PI constants or the transition matrices of a Kalman filter. In this paper we instead propose a clock servo based on reinforcement learning: a self-learning algorithm built on a deep-Q network learns how to synchronize a local clock purely from experience, exploiting a limited set of predefined actions. The encouraging preliminary results reported here are a first step toward exploring the potential of reinforcement learning in synchronization systems, which are typically characterized by an initial lack of knowledge of the environment or by great environmental variability.
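The servo idea described in the abstract can be sketched in code. This is a minimal toy illustration under stated assumptions, not the authors' implementation: a tabular Q-learning agent is substituted for the paper's deep-Q network to keep the sketch dependency-free, and the action set, drift range, noise level, and helper names (`discretize`, `train`) are all hypothetical. The agent steers a simulated drifting clock by choosing, at each synchronization interval, one of a few predefined frequency corrections, rewarded for keeping the time offset small.

```python
import random

# Predefined servo actions: small frequency corrections in s/s (assumed values).
ACTIONS = [-1e-6, -1e-7, 0.0, 1e-7, 1e-6]

def discretize(offset, bins=(-1e-4, -1e-5, -1e-6, 0.0, 1e-6, 1e-5, 1e-4)):
    """Map a continuous clock offset (seconds) to a small discrete state index."""
    return sum(offset > b for b in bins)

def train(episodes=200, steps=100, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over (offset state, correction action) pairs."""
    rng = random.Random(seed)
    q = {}  # (state, action index) -> estimated value
    for _ in range(episodes):
        freq_err = rng.uniform(-2e-6, 2e-6)   # unknown drift of the local clock
        offset = rng.uniform(-1e-4, 1e-4)     # initial time offset (s)
        corr = 0.0                            # accumulated frequency correction
        for _ in range(steps):
            s = discretize(offset)
            # Epsilon-greedy choice among the predefined actions.
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            corr += ACTIONS[a]
            offset += freq_err - corr          # residual drift accumulates
            offset += rng.gauss(0.0, 1e-7)     # timestamping / network noise
            r = -abs(offset)                   # reward: keep the offset small
            s2 = discretize(offset)
            best = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best - q.get((s, a), 0.0))
    return q

q_table = train()
```

The paper's deep-Q network would replace the dictionary `q` with a neural approximator, which scales to continuous or high-dimensional state descriptions; the action-set structure stays the same.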
Pages: 6
Related papers (50 in total)
  • [41] Graph Neural Network Reinforcement Learning for Autonomous Mobility-on-Demand Systems
    Gammelli, Daniele
    Yang, Kaidi
    Harrison, James
    Rodrigues, Filipe
    Pereira, Francisco C.
    Pavone, Marco
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 2996 - 3003
  • [42] Reinforcement Learning in BitTorrent Systems
    Izhak-Ratzin, Rafit
    Park, Hyunggon
    van der Schaar, Mihaela
    2011 PROCEEDINGS IEEE INFOCOM, 2011, : 406 - 410
  • [43] Hierarchical Optimal Synchronization for Linear Systems via Reinforcement Learning: A Stackelberg-Nash Game Perspective
    Li, Man
    Qin, Jiahu
    Ma, Qichao
    Zheng, Wei Xing
    Kang, Yu
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (04) : 1600 - 1611
  • [44] Leader-Follower Output Synchronization of Linear Heterogeneous Systems With Active Leader Using Reinforcement Learning
    Yang, Yongliang
    Modares, Hamidreza
    Wunsch, Donald C., II
    Yin, Yixin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (06) : 2139 - 2153
  • [45] Off-policy Reinforcement Learning for Distributed Output Synchronization of Linear Multi-agent Systems
    Kiumarsi, Bahare
    Lewis, Frank L.
    2017 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2017, : 1877 - 1884
  • [46] Data-Based Optimal Synchronization of Heterogeneous Multiagent Systems in Graphical Games via Reinforcement Learning
    Xiong, Chunping
    Ma, Qian
    Guo, Jian
    Lewis, Frank L.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 35 (11) : 1 - 9
  • [47] Optimal Synchronization Control of Multiagent Systems With Input Saturation via Off-Policy Reinforcement Learning
    Qin, Jiahu
    Li, Man
    Shi, Yang
    Ma, Qichao
    Zheng, Wei Xing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30 (01) : 85 - 96
  • [48] Joint VNF Deployment and Information Synchronization in Digital Twin Driven Network Slicing Via Deep Reinforcement Learning
    Tang L.
    Wang L.
    Zhang H.
    Du Y.
    Fang D.
    Chen Q.
    IEEE Transactions on Vehicular Technology, 2024, 73 (11) : 1 - 16
  • [49] Reinforcement Learning for Adaptive Network Routing
    Desai, Rahul
    Patil, B. P.
    2014 INTERNATIONAL CONFERENCE ON COMPUTING FOR SUSTAINABLE GLOBAL DEVELOPMENT (INDIACOM), 2014, : 815 - 818
  • [50] Deep Reinforcement Learning for Adaptive Learning Systems
    Li, Xiao
    Xu, Hanchen
    Zhang, Jinming
    Chang, Hua-hua
    JOURNAL OF EDUCATIONAL AND BEHAVIORAL STATISTICS, 2023, 48 (02) : 220 - 243