A Deep Reinforcement Learning Based Approach for Cost- and Energy-Aware Multi-Flow Mobile Data Offloading

Cited by: 44
Authors
Zhang, Cheng [1 ]
Liu, Zhi [2 ]
Gu, Bo [3 ]
Yamori, Kyoko [4 ,5 ]
Tanaka, Yoshiaki [5 ,6 ]
Affiliations
[1] Waseda Univ, Dept Comp Sci & Commun Engn, Tokyo 1690072, Japan
[2] Shizuoka Univ, Dept Math & Syst Engn, Hamamatsu, Shizuoka 4328561, Japan
[3] Kogakuin Univ, Dept Informat & Commun Engn, Tokyo 1920015, Japan
[4] Asahi Univ, Dept Management Informat, Mizuho 5010296, Japan
[5] Waseda Univ, Global Informat & Telecommun Inst, Tokyo 1698555, Japan
[6] Waseda Univ, Dept Commun & Comp Engn, Tokyo 1698555, Japan
Keywords
wireless LAN; multiple-flow; mobile data offloading; reinforcement learning; deep Q-network; DQN
DOI
10.1587/transcom.2017CQP0014
Chinese Library Classification codes
TM [electrical engineering]; TN [electronic and communication technology]
Discipline classification codes
0808; 0809
Abstract
With the rapid increase in demand for mobile data, mobile network operators are trying to expand wireless network capacity by deploying wireless local area network (LAN) hotspots onto which they can offload their mobile traffic. However, these network-centric methods usually do not serve the interests of mobile users (MUs). Taking into account issues such as application deadlines, monetary cost, and energy consumption, deciding whether to offload traffic to a complementary wireless LAN is an important problem for the MU. Previous studies assume that the MU's mobility pattern is known in advance, which is not always true. In this paper, we study the MU's policy for minimizing monetary cost and energy consumption when the mobility pattern is unknown. We propose using a reinforcement learning technique called deep Q-network (DQN) to let the MU learn the optimal offloading policy from past experience. In the proposed DQN-based offloading algorithm, the MU's mobility pattern is no longer needed. Furthermore, the MU's state of remaining data is fed directly into the convolutional neural network in the DQN without discretization. As a result, not only does the discretization error present in previous work disappear, but the proposed algorithm also gains the ability to generalize from past experience, which is especially effective when the number of states is large. Extensive simulations are conducted to validate the proposed offloading algorithms.
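The abstract above describes a DQN agent that maps a continuous state (e.g., remaining data per flow) directly to offloading actions without discretization. The following is a minimal illustrative sketch of such an agent; the network architecture, state encoding, reward, class and method names, and all hyperparameters are hypothetical assumptions and do not reproduce the paper's actual design.

```python
# Illustrative DQN-style offloading agent (a sketch, not the paper's method).
# State: a continuous vector such as [remaining data, time, WLAN availability];
# actions: 0 = transmit over cellular, 1 = offload to wireless LAN.
import random
from collections import deque

import numpy as np


class DQNOffloadingAgent:
    def __init__(self, state_dim, n_actions=2, hidden=16, lr=0.01,
                 gamma=0.95, epsilon=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Tiny two-layer Q-network; continuous states (e.g. remaining data)
        # are fed in directly, with no discretization step.
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon
        self.n_actions = n_actions
        self.replay = deque(maxlen=1000)  # experience replay buffer

    def q_values(self, state):
        h = np.maximum(0.0, state @ self.w1)  # ReLU hidden layer
        return h @ self.w2

    def act(self, state):
        # Epsilon-greedy exploration over {cellular, offload}.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return int(np.argmax(self.q_values(state)))

    def remember(self, s, a, r, s_next, done):
        self.replay.append((s, a, r, s_next, done))

    def train_step(self, batch_size=8):
        # One stochastic-gradient step on the squared TD error,
        # sampled from the replay buffer.
        if len(self.replay) < batch_size:
            return
        for s, a, r, s_next, done in random.sample(self.replay, batch_size):
            target = r if done else r + self.gamma * np.max(self.q_values(s_next))
            h = np.maximum(0.0, s @ self.w1)
            td = (h @ self.w2)[a] - target
            self.w2[:, a] -= self.lr * td * h
            grad_h = td * self.w2[:, a] * (h > 0)
            self.w1 -= self.lr * np.outer(s, grad_h)
```

In a simulation loop, the agent would observe the current state, pick an action with `act`, store the resulting transition with `remember`, and periodically call `train_step`; because the Q-function is a neural network over continuous inputs, experience generalizes across nearby states, as the abstract notes.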
Pages: 1625-1634
Page count: 10