Age of Information Aware VNF Scheduling in Industrial IoT Using Deep Reinforcement Learning

Cited by: 42
Authors
Akbari, Mohammad [1 ]
Abedi, Mohammad Reza [2 ]
Joda, Roghayeh [1 ,3 ]
Pourghasemian, Mohsen [2 ]
Mokari, Nader [2 ]
Erol-Kantarci, Melike [3 ]
Affiliations
[1] ICT Res Inst, Commun Dept, Tehran 1439955471, Iran
[2] Tarbiat Modares Univ, Fac Elect & Comp Engn ECE, Tehran 14115111, Iran
[3] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON K1N 6N5, Canada
Funding
U.S. National Science Foundation;
Keywords
Industrial Internet of Things; Delays; Measurement; Information age; Reinforcement learning; Quality of service; Resource management; network function virtualization; age of information; deep reinforcement learning; compound actions; multi-agent; RESOURCE-ALLOCATION; INTERNET; PLACEMENT; THINGS;
DOI
10.1109/JSAC.2021.3087264
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
In delay-sensitive industrial Internet of Things (IIoT) applications, the age of information (AoI) is used to characterize the freshness of information. Meanwhile, emerging network function virtualization (NFV) gives service providers the flexibility and agility to deliver a given network service as a sequence of virtual network functions (VNFs). However, VNF placement and scheduling in such schemes is NP-hard, and finding a globally optimal solution with traditional approaches is computationally complex. Recently, deep reinforcement learning (DRL) has emerged as a viable way to solve such problems. In this paper, we first employ a single-agent, low-complexity, compound-action actor-critic RL method that covers both discrete and continuous actions and jointly minimizes the VNF cost, in terms of network resources, and the AoI under end-to-end quality-of-service constraints. To overcome the learning-capacity limitation of a single agent, we then extend our solution to a multi-agent DRL scheme in which the agents collaborate with one another. Simulation results demonstrate that the single-agent scheme significantly outperforms a greedy algorithm in terms of average network cost and AoI. Moreover, the multi-agent solution further decreases the average cost by dividing the tasks among the agents, although it requires more training iterations because the agents must learn to coordinate.
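The compound-action design mentioned in the abstract couples a discrete decision (for example, which node hosts a VNF) with a continuous one (for example, what share of a resource it receives). As an illustration only, the sketch below shows one common way such an actor-critic head can be built in PyTorch; it is not the authors' code, and the state layout, network sizes, action semantics, and names (CompoundActorCritic, state_dim, n_nodes) are assumptions made for this sketch.

# Minimal sketch (assumed structure, not from the paper) of a compound-action
# actor-critic head: a categorical output for the discrete placement choice and
# a Gaussian output for a continuous resource-allocation fraction.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class CompoundActorCritic(nn.Module):
    def __init__(self, state_dim: int, n_nodes: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.placement_logits = nn.Linear(hidden, n_nodes)  # discrete: which node hosts the VNF
        self.alloc_mu = nn.Linear(hidden, 1)                 # continuous: resource share in (0, 1)
        self.alloc_log_std = nn.Parameter(torch.zeros(1))    # learned log std of the Gaussian
        self.value = nn.Linear(hidden, 1)                    # critic: state-value estimate

    def forward(self, state: torch.Tensor):
        h = self.backbone(state)
        placement = Categorical(logits=self.placement_logits(h))
        alloc = Normal(torch.sigmoid(self.alloc_mu(h)), self.alloc_log_std.exp())
        return placement, alloc, self.value(h)

# Usage: sample a compound action and form the joint log-probability that an
# advantage-weighted actor loss would use, while the critic regresses a value target.
if __name__ == "__main__":
    net = CompoundActorCritic(state_dim=16, n_nodes=5)
    state = torch.randn(1, 16)
    placement, alloc, value = net(state)
    node = placement.sample()
    share = alloc.sample().clamp(0.0, 1.0)
    log_prob = placement.log_prob(node) + alloc.log_prob(share).sum(-1)
    print(node.item(), share.item(), value.item(), log_prob.item())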
Pages: 2487-2500
Number of pages: 14
Related Papers
50 records in total
  • [21] Cost-aware job scheduling for cloud instances using deep reinforcement learning
    Cheng, Feng
    Huang, Yifeng
    Tanpure, Bhavana
    Sawalani, Pawan
    Cheng, Long
    Liu, Cong
    [J]. Cluster Computing, 2022, 25 : 619 - 631
  • [22] Cost-aware job scheduling for cloud instances using deep reinforcement learning
    Cheng, Feng
    Huang, Yifeng
    Tanpure, Bhavana
    Sawalani, Pawan
    Cheng, Long
    Liu, Cong
    [J]. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2022, 25 (01): 619 - 631
  • [23] Dynamic Resource Aware VNF Placement with Deep Reinforcement Learning for 5G Networks
    Dalgkitsis, Anestis
    Mekikis, Prodromos-Vasileios
    Antonopoulos, Angelos
    Kormentzas, Georgios
    Verikoukis, Christos
    [J]. 2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,
  • [24] Intelligent VNF Orchestration and Flow Scheduling via Model-Assisted Deep Reinforcement Learning
    Gu, Lin
    Zeng, Deze
    Li, Wei
    Guo, Song
    Zomaya, Albert Y.
    Jin, Hai
    [J]. IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (02) : 279 - 291
  • [25] Deep Reinforcement Learning for Scheduling Uplink IoT Traffic with Strict Deadlines
    Robaglia, Benoit-Marie
    Destounis, Apostolos
    Coupechoux, Marceau
    Tsilimantos, Dimitrios
    [J]. 2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [26] Mobile Energy Transmitter Scheduling in Energy Harvesting IoT Networks using Deep Reinforcement Learning
    Singh, Aditya
    Rustagi, Rahul
    Redhu, Surender
    Hegde, Rajesh M.
    [J]. 2022 IEEE 8TH WORLD FORUM ON INTERNET OF THINGS, WF-IOT, 2022,
  • [27] Deep-Reinforcement-Learning-Based Age-of-Information-Aware Low-Power Active Queue Management for IoT Sensor Networks
    Song, Taewon
    Kyung, Yeunwoong
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09): 16700 - 16709
  • [28] Buffer-aware Wireless Scheduling based on Deep Reinforcement Learning
    Xu, Chen
    Wang, Jian
    Yu, Tianhang
    Kong, Chuili
    Huangfu, Yourui
    Li, Rong
    Ge, Yiqun
    Wang, Jun
    [J]. 2020 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2020,
  • [29] DEEPCAS: A Deep Reinforcement Learning Algorithm for Control-Aware Scheduling
    Demirel, Burak
    Ramaswamy, Arunselvan
    Quevedo, Daniel E.
    Karl, Holger
    [J]. IEEE CONTROL SYSTEMS LETTERS, 2018, 2 (04): 737 - 742
  • [30] Delay-aware Cellular Traffic Scheduling with Deep Reinforcement Learning
    Zhang, Ticao
    Shen, Shuyi
    Mao, Shiwen
    Chang, Gee-Kung
    [J]. 2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,