TTL-Based Cache Utility Maximization Using Deep Reinforcement Learning

Cited by: 0
Authors:
Cho, Chunglae [1 ]
Shin, Seungjae [1 ]
Jeon, Hongseok [1 ]
Yoon, Seunghyun [1 ]
Affiliations:
[1] Electronics & Telecommunications Research Institute, Daejeon, South Korea
Keywords:
caching; utility maximization; deep reinforcement learning; non-stationary traffic
DOI:
10.1109/GLOBECOM46510.2021.9685845
CLC classification:
TP [Automation Technology, Computer Technology]
Discipline code:
0812
Abstract:
Utility-driven caching opened up a new design opportunity for caching algorithms by modeling admission and eviction control as a utility maximization process with built-in support for service differentiation. Nevertheless, there is still a long way to go in terms of adaptability to changing environments. Slow convergence to an optimal state may degrade the utility actually experienced by users, and this gets even worse in non-stationary scenarios where cache control must adapt to time-varying content request traffic. This paper proposes to exploit deep reinforcement learning (DRL) to enhance the adaptability of utility-driven time-to-live (TTL)-based caching. Employing DRL with long short-term memory helps a caching agent learn to adapt to the temporal correlation of content popularities, shortening the transient state before reaching the optimal steady state. In addition, we elaborately design the state and action spaces of DRL to overcome the curse of dimensionality, one of the most frequently raised issues in machine learning-based approaches. Experimental results show that policies trained by DRL can outperform the conventional utility-driven caching algorithm in non-stationary environments where content request traffic changes rapidly.
Pages: 6
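To make the core mechanism in the abstract concrete, the following is a minimal, illustrative sketch of a TTL cache in which every admitted item carries its own expiry time. It is not the authors' algorithm: in the paper, a DRL agent would tune each content's TTL to maximize aggregate utility, whereas here the TTLs are simply supplied by the caller, and the injectable `clock` parameter is a hypothetical convenience for testing.

```python
import time


class TTLCache:
    """Minimal TTL cache: each admitted item expires after its own TTL.

    In utility-driven TTL-based caching, a controller (e.g. a DRL agent)
    would choose each content's TTL to maximize utility; here TTLs are
    provided by the caller as a deliberate simplification.
    """

    def __init__(self, clock=time.monotonic):
        self._clock = clock   # injectable clock, useful for testing
        self._store = {}      # key -> (value, expiry_time)

    def put(self, key, value, ttl):
        # Admission: store the item and record when it should expire.
        self._store[key] = (value, self._clock() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None       # miss: never cached
        value, expiry = entry
        if self._clock() >= expiry:
            del self._store[key]  # lazy eviction once the TTL elapses
            return None       # miss: item expired
        return value          # hit
```

A per-content TTL is what gives this scheme its control knob: raising a popular item's TTL keeps it resident longer, which is exactly the quantity a utility-maximizing controller would adjust.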