TTL-Based Cache Utility Maximization Using Deep Reinforcement Learning

Citations: 0
Authors
Cho, Chunglae [1 ]
Shin, Seungjae [1 ]
Jeon, Hongseok [1 ]
Yoon, Seunghyun [1 ]
Affiliations
[1] Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
Keywords
caching; utility maximization; deep reinforcement learning; non-stationary traffic;
DOI
10.1109/GLOBECOM46510.2021.9685845
CLC Number
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Utility-driven caching opened up a new design opportunity for caching algorithms by modeling admission and eviction control as a utility maximization process with essential support for service differentiation. Nevertheless, there is still room for improvement in adaptability to changing environments. Slow convergence to an optimal state may degrade the actual user-experienced utility, and this degradation worsens in non-stationary scenarios where cache control must adapt to time-varying content request traffic. This paper proposes exploiting deep reinforcement learning (DRL) to enhance the adaptability of utility-driven time-to-live (TTL)-based caching. Employing DRL with long short-term memory helps a caching agent learn how to adapt to the temporal correlation of content popularities, shortening the transient state before reaching the optimal steady state. In addition, we elaborately design the state and action spaces of the DRL formulation to overcome the curse of dimensionality, one of the most frequently raised issues in machine learning-based approaches. Experimental results show that policies trained by DRL can outperform the conventional utility-driven caching algorithm under some non-stationary environments where content request traffic changes rapidly.
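To make the TTL-based caching mechanism in the abstract concrete, the following is a minimal illustrative sketch, not the paper's algorithm: each admitted object carries a per-object time-to-live, and in the paper's setting a learned (DRL) policy would choose those TTLs per content as its action. The class name `TTLCache` and the lazy-eviction-on-access design are assumptions for illustration only.

```python
import time

class TTLCache:
    """Minimal TTL-based cache: each admitted object stays until its
    timer expires. A controller (e.g. a learned DRL policy, as in the
    paper's setting) would choose the per-object TTL values."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, ttl):
        # Admit `key` with a per-object TTL in seconds; here the TTL is
        # supplied by the caller, standing in for the policy's action.
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        # Return the cached value on a hit, or None on a miss/expiry.
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # lazy eviction: expired entry removed on access
            return None
        return value

cache = TTLCache()
cache.put("video-1", b"chunk", ttl=10.0)
hit = cache.get("video-1")    # requested within its TTL -> hit
cache.put("video-2", b"chunk", ttl=0.0)
miss = cache.get("video-2")   # TTL already expired -> miss
```

Under this abstraction, "adapting to non-stationary traffic" amounts to the controller raising TTLs for contents whose popularity is rising and lowering them as popularity decays, which is the control problem the paper hands to the DRL agent.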
Pages: 6
Related Papers (50 records)
  • [21] DeepWalk Based Influence Maximization (DWIM): Influence Maximization Using Deep Learning
    Sonia
    Sharma, Kapil
    Bajaj, Monika
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2023, 35 (01): : 1087 - 1101
  • [22] Throughput Maximization for Polar Coded IR-HARQ Using Deep Reinforcement Learning
    Qiu, Gengxin
    Zhao, Ming-Min
    Lei, Ming
    Zhao, Min-jian
    2020 IEEE 31ST ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (IEEE PIMRC), 2020,
  • [23] Using expectation-maximization for reinforcement learning
    Dayan, P
    Hinton, GE
    NEURAL COMPUTATION, 1997, 9 (02) : 271 - 278
  • [24] Federation-Based Deep Reinforcement Learning Cooperative Cache in Vehicular Edge Networks
    Wu, Honghai
    Jin, Jichong
    Ma, Huahong
    Xing, Ling
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (02) : 2550 - 2560
  • [25] Deep Reinforcement Learning in Cache-Aided MEC Networks
    Yang, Zhong
    Liu, Yuanwei
    Chen, Yue
    Tyson, Gareth
    ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [26] Deep Reinforcement Learning Based Beamforming for Throughput Maximization in Ultra-Dense Networks
    Yu, Huihan
    Xiao, Yang
    Wu, Jiawei
    He, Zilong
    Liu, Fang
    Liu, Jun
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 1021 - 1026
  • [27] Deep Reinforcement Learning Based Admission Control for Throughput Maximization in Mobile Edge Computing
    Zhou, Yitong
    Ye, Qiang
    Huang, Hui
    Du, Hongwei
    2021 IEEE 94TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2021-FALL), 2021,
  • [28] Deep Reinforcement Learning-Based Social Welfare Maximization for Collaborative Edge Computing
    He, Xingqiu
    You, Chaoqun
    Shen, Yuhang
    Zhu, Hongxi
    Dai, Yueyue
    Lu, Yunlong
    2024 IEEE INTERNATIONAL WORKSHOP ON RADIO FREQUENCY AND ANTENNA TECHNOLOGIES, IWRF&AT 2024, 2024, : 162 - 167
  • [29] Enhanced Multi-hop Operation Using Hybrid Optoelectronic Router with TTL-based Selective FEC
    Nakahara, Tatsushi
    Suzaki, Yasumasa
    Urata, Ryohei
    Segawa, Toru
    Ishikawa, Hiroshi
    Takahashi, Ryo
    2011 37TH EUROPEAN CONFERENCE AND EXHIBITION ON OPTICAL COMMUNICATIONS (ECOC 2011), 2011,
  • [30] Cognitive Radio Network Throughput Maximization with Deep Reinforcement Learning
    Ong, Kevin Shen Hoong
    Zhang, Yang
    Niyato, Dusit
    2019 IEEE 90TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2019-FALL), 2019,