Exploiting the Fine Grain SSD Internal Parallelism for OLTP and Scientific Workloads

Cited by: 0
Authors
Zertal, Soraya [1 ]
Affiliation
[1] Univ Versailles, PRiSM, 45 Av Etats Unis, F-78000 Versailles, France
Keywords
SSD; Parallel IO; OLTP and scientific workloads; Simulation; Performance evaluation
DOI
10.1109/HPCC.2014.163
CLC classification number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Solid State Disks (SSDs) are promising data storage devices in terms of performance and energy consumption compared to Hard Disk Drives (HDDs). They are increasingly used, even in On-Line Transaction Processing (OLTP) systems and for scientific data with hard constraints on response time. Consequently, parallel execution and parallel access to data are crucial to fulfil this performance requirement. The SSD internal structure offers potential for parallel access at different levels, which can be exploited to match the concurrency naturally present in both OLTP and scientific applications. In this paper, the SSD behaviour is analysed considering two degrees of internal parallelism, associated with inter-die (degree 1) and inter-plane (degree 2) parallelism, and compared to a sequential scheme (degree 0) as a reference. The study is conducted using representative workloads for both OLTP and scientific applications. The results show an important performance gain from exploiting the internal SSD parallelism (up to x44 for OLTP). The gain is smaller for scientific applications due to their request size distributions and the interleaving of read/write streams. In conjunction with priority and preemption scheduling strategies, an additional impact is observed, which can be very modest or a factor of x10 depending on the context, with a significant impact only when priority is combined with preemption.
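To make the notion of parallelism degrees concrete, the following minimal sketch (not the paper's simulator; all parameters such as `NUM_DIES`, `PLANES_PER_DIE`, and `PAGE_LATENCY` are assumptions) estimates how the service time of a multi-page request could shrink when its pages are striped across dies (degree 1) or across dies and planes (degree 2), versus a purely sequential mapping (degree 0).

```python
# Hypothetical illustration of SSD internal parallelism degrees (assumed values,
# not the configuration or model used in the paper).

NUM_DIES = 4        # dies available for concurrent operations (assumption)
PLANES_PER_DIE = 2  # planes per die (assumption)
PAGE_LATENCY = 1.0  # service time of one page operation, arbitrary units

def service_time(num_pages: int, degree: int) -> float:
    """Estimate completion time of a request spanning `num_pages` pages.

    degree 0: sequential reference scheme, one page at a time
    degree 1: pages striped across dies, one plane active per die
    degree 2: pages striped across dies and across planes within each die
    """
    if degree == 0:
        parallel_units = 1
    elif degree == 1:
        parallel_units = NUM_DIES
    else:
        parallel_units = NUM_DIES * PLANES_PER_DIE
    # Pages beyond the available parallel units are serialised in extra batches.
    batches = -(-num_pages // parallel_units)  # ceiling division
    return batches * PAGE_LATENCY

if __name__ == "__main__":
    for pages in (2, 8, 64):
        times = [service_time(pages, d) for d in (0, 1, 2)]
        print(f"{pages:3d} pages -> degree 0/1/2 service times: {times}")
```

Under these assumptions the gain saturates once a request spans fewer pages than the available parallel units, which is one simple way to see why request size distributions shape the benefit reported for the two workload classes.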
Pages: 990-997
Number of pages: 8
Related Papers
50 records in total
  • [41] He, Bingsheng; Yu, Jeffrey Xu; Zhou, Amelie Chi. Improving Update-Intensive Workloads on Flash Disks through Exploiting Multi-Chip Parallelism. IEEE Transactions on Parallel and Distributed Systems, 2015, 26(1): 152-162.
  • [42] Shen, Guan; Zhao, Jieru; Wang, Zeke; Lin, Zhe; Ding, Wenchao; Wu, Chentao; Chen, Quan; Guo, Minyi. MARS: Exploiting Multi-Level Parallelism for DNN Workloads on Adaptive Multi-Accelerator Systems. 2023 60th ACM/IEEE Design Automation Conference (DAC), 2023.
  • [43] Kang, Mincheol; Lee, Wonyoung; Kim, Jinkwon; Kim, Soontae. PR-SSD: Maximizing Partial Read Potential by Exploiting Compression and Channel-Level Parallelism. IEEE Transactions on Computers, 2023, 72(3): 772-785.
  • [44] Chen, Xuhao; Chen, Wei; Li, Jiawen; Zheng, Zhong; Shen, Li; Wang, Zhiying. Characterizing Fine-Grain Parallelism on Modern Multicore Platform. 2011 IEEE 17th International Conference on Parallel and Distributed Systems (ICPADS), 2011: 941-946.
  • [45] Nicolau, A.; Potasman, R.; Wang, H. Register Allocation, Renaming and Their Impact on Fine-Grain Parallelism. Lecture Notes in Computer Science, 1992, 589: 218-235.
  • [46] Arvind; Culler, D. E.; Maa, G. K. Assessing the Benefits of Fine-Grain Parallelism in Dataflow Programs. International Journal of Supercomputer Applications and High Performance Computing, 1988, 2(3): 10-36.
  • [47] Bellucci, D.; Tasso, S.; Laganà, A. Fine grain parallelism for discrete variable approaches to wavepacket calculations. Computational Science - ICCS 2002, Pt III, Proceedings, 2002, 2331: 918-925.
  • [48] Xie, Wei; Chen, Yong; Roth, Philip C. Exploiting Internal Parallelism for Address Translation in Solid-State Drives. ACM Transactions on Storage, 2018, 14(4).
  • [49] Geschiere, J. P.; Wijshoff, H. A. G. Exploiting Large Grain Parallelism in a Sparse Direct Linear-System Solver. Parallel Computing, 1995, 21(8): 1339-1364.
  • [50] Hordijk, J.; Corporaal, H. The potential of exploiting coarse-grain task parallelism from sequential programs. High-Performance Computing and Networking, 1997, 1225: 664-673.