Reinforcement Learning-Assisted Garbage Collection to Mitigate Long-Tail Latency in SSD

Cited by: 42
Authors
Kang, Wonkyung [1 ]
Shin, Dongkun [2 ]
Yoo, Sungjoo [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Comp Sci & Engn, 1 Gwanak Ro, Seoul 08826, South Korea
[2] Sungkyunkwan Univ, Dept Software, 2066 Seobu Ro, Suwon 16419, Gyeonggi Do, South Korea
Funding
National Research Foundation, Singapore
Keywords
Flash storage system; SSD; garbage collection; long-tail latency; reinforcement learning;
DOI
10.1145/3126537
CLC number
TP3 [Computing technology and computer technology]
Discipline code
0812
Abstract
NAND flash memory is widely used in systems ranging from real-time embedded devices to enterprise servers. Because flash memory has an erase-before-write characteristic, flash-management methods, i.e., address translation and garbage collection, are required. In particular, garbage collection (GC) incurs long-tail latency; for example, the latency at the 99th percentile can be 100 times the average latency. As a result, real-time and quality-critical systems fail to meet requirements such as deadlines and QoS constraints. In this study, we propose a novel GC method based on reinforcement learning. The objective is to reduce the long-tail latency by exploiting idle time in the storage system. To improve the efficiency of the reinforcement learning-assisted GC scheme, we present new optimization methods that exploit fine-grained GC to further reduce the long-tail latency. Experimental results with real workloads show that our technique reduces the long-tail latency at the 99.99th percentile by 29-36% compared to state-of-the-art schemes.
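The abstract describes the approach only at a high level. As a rough, hypothetical illustration of the kind of agent it implies, the Python sketch below uses tabular Q-learning: the state buckets the observed idle-interval length and the free-block count, the action is whether to run one fine-grained GC step during the idle interval, and the reward favors reclaiming space without blocking host requests. The state encoding, bucket sizes, hyperparameters, and reward shaping are all assumptions for illustration, not the authors' actual design.

```python
# Hypothetical sketch of an RL-assisted GC scheduler (not the paper's design):
# a tabular Q-learning agent decides, per idle interval, whether to run one
# fine-grained GC step.
import random
from collections import defaultdict

ACTIONS = (0, 1)          # 0 = stay idle, 1 = run one fine-grained GC step
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # assumed learning rate, discount, exploration

Q = defaultdict(float)    # Q[(state, action)] -> value estimate, defaults to 0.0

def discretize(idle_len_us, free_blocks):
    """Map raw observations to a small discrete state (assumed bucketing)."""
    idle_bucket = min(idle_len_us // 100, 9)   # 10 buckets of idle-interval length
    free_bucket = min(free_blocks // 16, 9)    # 10 buckets of free-block count
    return (idle_bucket, free_bucket)

def choose_action(state):
    """Epsilon-greedy policy over the two GC actions."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical decision step: observe, act, receive reward, learn.
state = discretize(idle_len_us=450, free_blocks=40)
action = choose_action(state)
reward = 0.1 if action == 1 else 0.0        # assumed shaping: bonus for idle-time GC
next_state = discretize(idle_len_us=120, free_blocks=41)
update(state, action, reward, next_state)
```

In a full SSD simulator, this loop would run once per idle interval, and the reward would also carry a penalty tied to observed host-request latencies, so the learned policy schedules GC where it does not inflate the 99.99th-percentile tail.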
Pages: 20