Intelligent Maritime Communications Enabled by Deep Reinforcement Learning

Cited: 0
Authors
Li, Jiabo [1 ]
Yang, Tingting [1 ,2 ]
Feng, Hailong [1 ]
Affiliations
[1] Dalian Maritime Univ, Nav Coll, Dalian 116026, Peoples R China
[2] Dongguan Univ Technol, Sch Elect Engn & Intelligentizat, Dongguan 523000, Peoples R China
Keywords
Deep reinforcement learning; Markov processes; maritime communications; software-defined network; QAM;
DOI
10.1109/iccchina.2019.8855946
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
Nowadays, as maritime services grow exponentially, the quality of service (QoS) of data transmission has become a bottleneck that restricts the development of maritime communications. To solve this problem, we first propose a software-defined maritime communication framework to overcome the difficulty of communication across heterogeneous environments. Within this framework, we then propose a novel data transmission scheme based on an enhanced deep Q-learning algorithm, termed S-DQN, which combines a deep Q-network with a softmax classifier. The scheme also specifies the optimization objective (i.e., throughput, cost, or energy). In our system, Markov decision processes (MDPs) are used to derive the optimal strategy for network resource scheduling. The deep Q-network establishes the mapping between the acquired network information and the optimal strategy, so that when input data arrive, the optimal strategy can be produced as quickly and accurately as possible thanks to self-learning on large amounts of data. Simulation results show that the scheme outperforms other traditional schemes under different QoS metrics, verifying its effectiveness.
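The core S-DQN idea described above (a deep Q-network whose Q-value outputs feed a softmax to select a scheduling action) can be sketched minimally as follows. The toy MDP, network sizes, learning rate, and reward shape are illustrative assumptions for a tiny resource-scheduling problem, not the paper's actual maritime setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4    # coarse channel-quality levels (assumed)
N_ACTIONS = 3   # candidate links/resources to schedule (assumed)
HIDDEN = 16

# One-hidden-layer Q-network: one-hot state -> Q-value per action.
W1 = rng.normal(0, 0.1, (N_STATES, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(s):
    h = np.maximum(0, np.eye(N_STATES)[s] @ W1)  # ReLU hidden layer
    return h @ W2, h

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy MDP (assumed): the action matching the state index mod N_ACTIONS
# yields reward 1; the next state is drawn uniformly at random.
def step(s, a):
    r = 1.0 if a == s % N_ACTIONS else 0.0
    return int(rng.integers(N_STATES)), r

GAMMA, LR = 0.9, 0.05
s = 0
for _ in range(5000):
    q, h = q_values(s)
    a = int(rng.choice(N_ACTIONS, p=softmax(q)))  # softmax action selection
    s_next, r = step(s, a)
    q_next, _ = q_values(s_next)
    td = r + GAMMA * q_next.max() - q[a]          # TD error for chosen action
    # Manual gradient step on the squared TD error (chosen action only).
    W2[:, a] += LR * td * h
    W1[s] += LR * td * W2[:, a] * (h > 0)
    s = s_next

# Learned greedy action per state after training.
policy = [int(np.argmax(q_values(s)[0])) for s in range(N_STATES)]
print(policy)
```

Softmax selection doubles here as the exploration mechanism: early on, near-uniform Q-values give near-uniform action probabilities, and as Q-estimates separate, the policy concentrates on high-value scheduling actions.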
Pages: 6
Related Papers
50 records
  • [31] Deep Reinforcement Learning for Intelligent Cloud Resource Management
    Zhou, Zhi
    Luo, Ke
    Chen, Xu
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM WKSHPS 2021), 2021,
  • [32] Intelligent PID Controller Based on Deep Reinforcement Learning
    Zhai, Yinhe
    Zhao, Qiang
    Han, Yinghua
    Wang, Jinkuan
    Zeng, Wenying
    2024 8TH INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION, ICRCA 2024, 2024, : 343 - 348
  • [33] Fuzzy Inference Enabled Deep Reinforcement Learning-Based Traffic Light Control for Intelligent Transportation System
    Kumar, Neetesh
    Rahman, Syed Shameerur
    Dhakad, Navin
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (08) : 4919 - 4928
  • [34] Intelligent Roundabout Insertion using Deep Reinforcement Learning
    Capasso, Alessandro Paolo
    Bacchiani, Giulio
    Molinari, Daniele
    ICAART: PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2020, : 378 - 385
  • [35] INCdeep: Intelligent Network Coding with Deep Reinforcement Learning
    Wang, Qi
    Liu, Jianmin
    Jaffres-Runser, Katia
    Wang, Yongqing
    He, Chentao
    Liu, Cunzhuang
    Xu, Yongjun
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021,
  • [36] Intelligent Control of Manipulator Based on Deep Reinforcement Learning
    Zhou, Jiangtao
    Zheng, Hua
    Zhao, Dongzhu
    Chen, Yingxue
    2021 12TH INTERNATIONAL CONFERENCE ON MECHANICAL AND AEROSPACE ENGINEERING (ICMAE), 2021, : 275 - 279
  • [37] Intelligent Cloud Resource Management with Deep Reinforcement Learning
    Zhang, Yu
    Yao, Jianguo
    Guan, Haibing
    IEEE CLOUD COMPUTING, 2017, 4 (06): : 60 - 69
  • [38] A Hybrid Deep Reinforcement Learning Algorithm for Intelligent Manipulation
    Ma, Chao
    Li, Jianfei
    Bai, Jie
    Wang, Yaobing
    Liu, Bin
    Sun, Jing
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2019, PT IV, 2019, 11743 : 367 - 377
  • [39] Intelligent IoT Connectivity: Deep Reinforcement Learning Approach
    Kwon, Minhae
    Lee, Juhyeon
    Park, Hyunggon
    IEEE SENSORS JOURNAL, 2020, 20 (05) : 2782 - 2791
  • [40] Deep Reinforcement Learning for Intelligent Transportation Systems: A Survey
    Haydari, Ammar
    Yilmaz, Yasin
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (01) : 11 - 32