Multipath Communication With Deep Q-Network for Industry 4.0 Automation and Orchestration

Cited by: 16
Authors
Pokhrel, Shiva Raj [1 ]
Garg, Sahil [2 ]
Affiliations
[1] Deakin Univ, Fac Sci Engn & Built Environm, Sch IT, Geelong, Vic 3220, Australia
[2] Univ Quebec, Ecole Technol Super ETS, Montreal, PQ H3C 1K3, Canada
Keywords
Automation; deep Q-learning; industrial Internet of Things (IIoT); Industry 4.0; information technology cum operation technology (IT/OT) convergence; multipath protocol design; orchestration; smart manufacturing; congestion control; TCP
DOI
10.1109/TII.2020.3000502
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In this article, we design a novel multipath communication framework for Industry 4.0 using a deep Q-network [1] to achieve human-level intelligence in networking automation and orchestration. To elaborate, we first investigate the challenges and approaches in exploiting heterogeneous networks and multipath communication [e.g., using multipath transmission control protocol (MPTCP)] for the information technology cum operation technology (IT/OT) convergence in Industry 4.0. Based on the novel idea of intelligent and flexible manufacturing, we analyze the technical challenges of IT/OT convergence and then model network data traffic using MPTCP over the converged frameworks. This analysis quantifies the adverse impact of network convergence on the performance of flexible manufacturing. We provide a few proof-of-concept solutions; however, a clear understanding of the tradeoffs reveals the need for an experience-driven MPTCP. Simulation results demonstrate that the proposed scheme significantly outperforms the baseline schemes.
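The abstract above applies a deep Q-network to multipath (MPTCP) scheduling. As an illustration only, the sketch below trains a Q-learning agent with the two core DQN ingredients the technique is known for — experience replay and a periodically synced target network — to prefer the lower-latency of two paths. The toy environment, the linear Q-function standing in for a deep network, and every parameter value are assumptions for the sake of a self-contained example, not the paper's actual design.

```python
import random
from collections import deque

import numpy as np

np.random.seed(0)
random.seed(0)

# Hypothetical toy environment: a scheduler picks one of two paths
# (e.g., Wi-Fi vs. Ethernet) each round. The state is the per-path RTT;
# the reward is the negative RTT of the chosen path, so lower is better.
N_PATHS = 2

def step(state, action):
    reward = -state[action]
    # RTTs drift randomly, independent of the chosen action.
    next_state = np.clip(state + np.random.uniform(-0.05, 0.05, N_PATHS), 0.1, 1.0)
    return next_state, reward

# Linear Q-function Q(s, a) = s @ W[:, a] + b[a] (a deep network would
# replace this), trained with experience replay + a target network.
W = np.random.normal(scale=0.1, size=(N_PATHS, N_PATHS))
b = np.zeros(N_PATHS)
W_tgt, b_tgt = W.copy(), b.copy()

replay = deque(maxlen=1000)
gamma, lr, eps = 0.9, 0.05, 0.2

def q_values(s, W_, b_):
    return s @ W_ + b_

state = np.array([0.3, 0.7])  # initial per-path RTTs (seconds)
for t in range(2000):
    # Epsilon-greedy action selection over the online network.
    if random.random() < eps:
        action = random.randrange(N_PATHS)
    else:
        action = int(np.argmax(q_values(state, W, b)))
    next_state, reward = step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    # Sample a minibatch and take one step on the TD error, with the
    # bootstrap target computed from the frozen target network.
    for s, a, r, s2 in random.sample(replay, min(32, len(replay))):
        target = r + gamma * np.max(q_values(s2, W_tgt, b_tgt))
        td_err = target - q_values(s, W, b)[a]
        W[:, a] += lr * td_err * s  # gradient of the linear Q wrt W[:, a]
        b[a] += lr * td_err

    if t % 100 == 0:  # periodically sync the target network
        W_tgt, b_tgt = W.copy(), b.copy()

# After training, the greedy policy should prefer the lower-RTT path.
probe = np.array([0.1, 0.9])
best_path = int(np.argmax(q_values(probe, W, b)))
print(best_path)
```

Because the next-state distribution here does not depend on the action, the optimal action-value gap is exactly linear in the state, which is why a linear Q-function suffices for this toy; the paper's setting, with path dynamics coupled to scheduling decisions, is what motivates a deep network instead.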
Pages: 2852-2859
Page count: 8
Related Papers
50 records in total
  • [21] Influence on Learning of Various Conditions in Deep Q-Network
    Niitsuma, Jun
    Osana, Yuko
    2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2017, : 1932 - 1935
  • [22] A Framework of Hierarchical Deep Q-Network for Portfolio Management
    Gao, Yuan
    Gao, Ziming
    Hu, Yi
    Song, Sifan
    Jiang, Zhengyong
    Su, Jionglong
    ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2, 2021, : 132 - 140
  • [23] Manufacturing Resource Scheduling Based on Deep Q-Network
    ZHANG Yufei
    ZOU Yuanhao
    ZHAO Xiaodong
    Wuhan University Journal of Natural Sciences, 2022, 27 (06) : 531 - 538
  • [24] An Improved Deep Q-Network with Convolution Block Attention
    Li, Shilin
    Qu, Junsuo
    Yang, Dan
    PROCEEDINGS OF 2022 INTERNATIONAL CONFERENCE ON AUTONOMOUS UNMANNED SYSTEMS, ICAUS 2022, 2023, 1010 : 2921 - 2929
  • [25] Inhomogeneous deep Q-network for time sensitive applications
    Chen, Xu
    Wang, Jun
    ARTIFICIAL INTELLIGENCE, 2022, 312
  • [26] Dynamic Parallel Machine Scheduling With Deep Q-Network
    Liu, Chien-Liang
    Tseng, Chun-Jan
    Huang, Tzu-Hsuan
    Wang, Jhih-Wun
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2023, 53 (11): : 6792 - 6804
  • [27] Multiagent Learning and Coordination with Clustered Deep Q-Network
    Pageaud, Simon
    Deslandres, Veronique
    Lehoux, Vassilissa
    Hassas, Salima
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 2156 - 2158
  • [28] Deep Reinforcement Learning. Case Study: Deep Q-Network
    Vrejoiu, Mihnea Horia
    ROMANIAN JOURNAL OF INFORMATION TECHNOLOGY AND AUTOMATIC CONTROL-REVISTA ROMANA DE INFORMATICA SI AUTOMATICA, 2019, 29 (03): : 65 - 78
  • [29] Deep Reinforcement Learning Pairs Trading with a Double Deep Q-Network
    Brim, Andrew
    2020 10TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE (CCWC), 2020, : 222 - 227
  • [30] Deep Q-Network for Radar Task-Scheduling Problem
    George, Taylor
    Wagner, Kevin
    Rademacher, Paul
    2022 IEEE RADAR CONFERENCE (RADARCONF'22), 2022,