SCART: Predicting STT-RAM Cache Retention Times Using Machine Learning

Cited: 0
Authors
Gajaria, Dhruv [1 ]
Kuan, Kyle [1 ]
Adegbija, Tosiron [1 ]
Affiliations
[1] Univ Arizona, Dept Elect & Comp Engn, Tucson, AZ 85721 USA
Funding
National Science Foundation (USA);
Keywords
Spin-Transfer Torque RAM (STT-RAM) cache; configurable memory; low-power embedded systems; adaptable hardware; retention time; PERFORMANCE;
DOI
10.1109/igsc48788.2019.8957182
CLC number
TP301 [Theory, Methods];
Subject classification code
081202 ;
Abstract
Prior studies have shown that the retention time of the non-volatile spin-transfer torque RAM (STT-RAM) can be relaxed in order to reduce STT-RAM's write energy and latency. However, since different applications may require different retention times, STT-RAM retention times must be carefully explored to satisfy various applications' needs. This process can be challenging due to the exploration overhead, and is exacerbated by the fact that STT-RAM caches are emerging and are not readily available for design-time exploration. This paper explores using known and easily obtainable statistics (e.g., SRAM statistics) to predict the appropriate STT-RAM retention times, in order to minimize exploration overhead. We propose an STT-RAM Cache Retention Time (SCART) model, which utilizes machine learning to enable design-time or runtime prediction of right-provisioned STT-RAM retention times for latency or energy optimization. Experimental results show that, on average, SCART can reduce the latency and energy by 20.34% and 29.12%, respectively, compared to a homogeneous retention time, while reducing the exploration overheads by 52.58% compared to prior work.
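The abstract's core idea, predicting a suitable retention time from easily gathered SRAM statistics via a learned model, can be illustrated with a minimal sketch. Note this is not SCART's actual model: the features (cache miss rate, writes per kilo-instruction), the training data, and the use of an ordinary least-squares fit are all hypothetical placeholders chosen for illustration.

```python
# Illustrative sketch only: a least-squares linear model mapping
# hypothetical SRAM profiling statistics to a retention-time target.
# Feature choices and all numbers below are made up for this example.
import numpy as np

# Hypothetical training set: each row is [cache miss rate,
# writes per kilo-instruction] from SRAM-based profiling runs.
X = np.array([
    [0.02, 1.5],
    [0.10, 8.0],
    [0.05, 3.2],
    [0.15, 12.0],
])
# Hypothetical targets: the retention time (in microseconds) found
# best for each profiled application during design-time exploration.
y = np.array([100.0, 10.0, 50.0, 5.0])

# Fit weights (with a bias term) by ordinary least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_retention(miss_rate: float, wpki: float) -> float:
    """Predict a retention time (us) for a new application's stats."""
    return float(np.array([miss_rate, wpki, 1.0]) @ w)
```

With a model like this, a write-heavy application (high WPKI) is steered toward a shorter retention time, trading retention for cheaper writes, which mirrors the relaxation trade-off the abstract describes.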
Pages: 7
Related Papers
50 records
  • [1] LARS: Logically Adaptable Retention Time STT-RAM Cache for Embedded Systems
    Kuan, Kyle
    Adegbija, Tosiron
    [J]. PROCEEDINGS OF THE 2018 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2018, : 461 - 466
  • [2] Multi Retention Level STT-RAM Cache Designs with a Dynamic Refresh Scheme
    Sun, Zhenyu
    Bi, Xiuyuan
    Li, Hai
    Wong, Weng-Fai
    Ong, Zhong-Liang
    Zhu, Xiaochun
    Wu, Wenqing
    [J]. PROCEEDINGS OF THE 2011 44TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO 44), 2011, : 329 - 338
  • [3] Reinforcement Learning Based Refresh Optimized Volatile STT-RAM Cache
    Suman, Shashank
    Kapoor, Hemangee K.
    [J]. 2020 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2020), 2020, : 222 - 227
  • [4] STT-RAM Cache Hierarchy With Multiretention MTJ Designs
    Sun, Zhenyu
    Bi, Xiuyuan
    Li, Hai
    Wong, Weng-Fai
    Zhu, Xiaochun
    [J]. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2014, 22 (06) : 1281 - 1293
  • [5] A Novel Hybrid Last Level Cache Based on Multi-retention STT-RAM Cells
    Zhang, Hongguang
    Zhang, Minxuan
    Zhao, Zhenyu
    Tian, Shuo
    [J]. ADVANCED COMPUTER ARCHITECTURE, ACA 2016, 2016, 626 : 28 - 39
  • [6] TEEMO: Temperature Aware Energy Efficient Multi-Retention STT-RAM Cache Architecture
    Agarwal, Sukarn
    Chakraborty, Shounak
    Sjalander, Magnus
    [J]. PROCEEDINGS 2024 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM, IPDPS 2024, 2024, : 852 - 864
  • [7] Architecting the Last-Level Cache for GPUs using STT-RAM Technology
    Samavatian, Mohammad Hossein
    Arjomand, Mohammad
    Bashizade, Ramin
    Sarbazi-Azad, Hamid
    [J]. ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2015, 20 (04)
  • [8] Cache Coherence Enabled Adaptive Refresh for Volatile STT-RAM
    Li, Jianhua
    Shi, Liang
    Li, Qing'an
    Xue, Chun Jason
    Chen, Yiran
    Xu, Yinlong
    [J]. DESIGN, AUTOMATION & TEST IN EUROPE, 2013, : 1247 - 1250
  • [9] Prediction Hybrid Cache: An Energy-Efficient STT-RAM Cache Architecture
    Ahn, Junwhan
    Yoo, Sungjoo
    Choi, Kiyoung
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2016, 65 (03) : 940 - 951
  • [10] An Efficient STT-RAM Last Level Cache Architecture for GPUs
    Samavatian, Mohammad Hossein
    Abbasitabar, Hamed
    Arjomand, Mohammad
    Sarbazi-Azad, Hamid
    [J]. 2014 51ST ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2014,