Smart Power Control for Quality-Driven Multi-User Video Transmissions: A Deep Reinforcement Learning Approach

Cited by: 0
Authors
Zhang, Ticao [1 ]
Mao, Shiwen [1 ]
Institution
[1] Auburn Univ, Dept Elect & Comp Engn, Auburn, AL 36849 USA
Source
IEEE ACCESS | 2020 / Vol. 8
Keywords
Multi-user video transmission; multi-agent deep reinforcement learning; power control; quality of experience; LAYER RESOURCE-ALLOCATION; SCALABLE VIDEO; NETWORKS; COMMUNICATION; MANAGEMENT; FAIRNESS; SYSTEMS; ACCESS;
DOI
10.1109/ACCESS.2019.2961914
Chinese Library Classification
TP [Automation and Computer Technology];
Subject Classification Code
0812 ;
Abstract
Device-to-device (D2D) communications have been regarded as a promising technology to meet the dramatically increasing video data demand in 5G networks. In this paper, we consider the power control problem in a multi-user video transmission system. Due to the non-convex nature of the optimization problem, it is challenging to obtain an optimal strategy. In addition, many existing solutions require instantaneous channel state information (CSI) for each link, which is hard to obtain in resource-limited wireless networks. We develop a multi-agent deep reinforcement learning-based power control method, where each agent adaptively controls its transmit power based on its observed local states. The proposed method aims to maximize the average quality of the received videos of all users while satisfying each user's quality requirement. After offline training, the method can be implemented in a distributed manner such that all users can reach their target state from any initial state. Compared with conventional optimization-based approaches, the proposed method is model-free, does not require CSI, and is scalable to large networks.
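To make the abstract's setting concrete, the following is a minimal illustrative sketch of distributed, multi-agent power control driven only by local observations. It is not the paper's actual algorithm: tabular Q-learning stands in for deep RL, and all constants, power levels, and the toy video-quality reward are assumptions made here for illustration.

```python
import math
import random

# Illustrative multi-agent power control sketch (NOT the paper's algorithm).
# Each agent (transmitter) chooses a discrete power level from a quantized
# observation of its own SINR only, i.e. no global CSI is exchanged.

POWER_LEVELS = [0.1, 0.5, 1.0]   # candidate transmit powers (hypothetical units)
NOISE = 0.01                     # receiver noise power (assumed)
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1
N_STATES = 4                     # number of quantized local-SINR buckets

def sinr(powers, gains, i):
    """SINR of link i given all transmit powers and channel gains."""
    interference = sum(powers[j] * gains[j][i]
                       for j in range(len(powers)) if j != i)
    return powers[i] * gains[i][i] / (interference + NOISE)

def quantize(s):
    """Map a SINR value to one of N_STATES local-state buckets."""
    return max(0, min(N_STATES - 1, int(math.log10(max(s, 1e-6)) + 2)))

def quality(s, target=1.0):
    """Toy video-quality reward: log-utility, penalized below a target SINR."""
    return math.log(1.0 + s) - (5.0 if s < target else 0.0)

def train(gains, episodes=2000, seed=0):
    """Train one tabular Q-learner per agent on a shared average-quality reward."""
    rng = random.Random(seed)
    n = len(gains)
    q = [[[0.0] * len(POWER_LEVELS) for _ in range(N_STATES)] for _ in range(n)]
    powers = [POWER_LEVELS[0]] * n
    states = [quantize(sinr(powers, gains, i)) for i in range(n)]
    for _ in range(episodes):
        acts = []
        for i in range(n):  # each agent acts on its own local state
            if rng.random() < EPS:
                a = rng.randrange(len(POWER_LEVELS))
            else:
                a = max(range(len(POWER_LEVELS)),
                        key=lambda k: q[i][states[i]][k])
            acts.append(a)
        powers = [POWER_LEVELS[a] for a in acts]
        # Shared reward: average video quality across all users.
        r = sum(quality(sinr(powers, gains, i)) for i in range(n)) / n
        for i in range(n):
            s2 = quantize(sinr(powers, gains, i))
            q[i][states[i]][acts[i]] += ALPHA * (
                r + GAMMA * max(q[i][s2]) - q[i][states[i]][acts[i]])
            states[i] = s2
    return q, powers
```

After training, each agent's Q-table can be queried independently at run time, mirroring the distributed deployment the abstract describes; the paper itself uses deep networks rather than tables to handle larger state spaces.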
Pages: 611 - 622
Page count: 12
Related Papers
50 in total
  • [1] DEEP LEARNING BASED POWER CONTROL FOR QUALITY-DRIVEN WIRELESS VIDEO TRANSMISSIONS
    Ye, Chuang
    Gursoy, M. Cenk
    Velipasalar, Senem
    [J]. 2018 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2018), 2018, : 574 - 578
  • [2] Enabling Quality-Driven Scalable Video Transmission over Multi-User NOMA System
    Jiang, Xiaoda
    Lu, Hancheng
    Chen, Chang Wen
    [J]. IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2018), 2018, : 1961 - 1969
  • [3] Deep Reinforcement Learning for Multi-User Access Control in UAV Networks
    Cao, Yang
    Zhang, Lin
    Liang, Ying-Chang
    [J]. ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [4] Power Allocation in Multi-User Cellular Networks: Deep Reinforcement Learning Approaches
    Meng, Fan
    Chen, Peng
    Wu, Lenan
    Cheng, Julian
    [J]. IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2020, 19 (10) : 6255 - 6267
  • [5] A Deep Reinforcement Learning Approach for Point Cloud Video Transmissions
    Lin, Hai
    Zhang, Bo
    Cao, Yangjie
    Liu, Zhi
    Chen, Xianfu
    [J]. 2021 IEEE 94TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2021-FALL), 2021,
  • [6] Delay-Aware Power Control for Downlink Multi-User MIMO via Constrained Deep Reinforcement Learning
    Tian, Chang
    Huang, Guan
    Liu, An
    Luo, Wu
    [J]. 2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [7] Deep Reinforcement Learning For Multi-User Access Control in Non-Terrestrial Networks
    Cao, Yang
    Lien, Shao-Yu
    Liang, Ying-Chang
    [J]. IEEE TRANSACTIONS ON COMMUNICATIONS, 2021, 69 (03) : 1605 - 1619
  • [8] Quality-Driven Joint Rate and Power Adaptation for Scalable Video Transmissions Over MIMO Systems
    Chen, Xiang
    Hwang, Jenq-Neng
    Ritcey, James A.
    Lee, Chung-Nan
    Yeh, Fu-Ming
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2017, 27 (02) : 366 - 379
  • [9] Power Allocation in Multi-user Cellular Networks With Deep Q Learning Approach
    Meng, Fan
    Chen, Peng
    Wu, Lenan
    [J]. ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [10] IMAGE QUALITY-DRIVEN OCTOROTOR FLIGHT CONTROL VIA REINFORCEMENT LEARNING
    Li, Qiang
    Xu, Yunjun
    [J]. PROCEEDINGS OF THE ASME 11TH ANNUAL DYNAMIC SYSTEMS AND CONTROL CONFERENCE, 2018, VOL 3, 2018,