Continuous Performance Benchmarking Framework for ROOT

Cited by: 0
Authors
Shadura, Oksana [1 ]
Vassilev, Vassil [2 ]
Bockelman, Brian Paul [1 ]
Institutions
[1] Univ Nebraska, 1400 R St, Lincoln, NE 68588 USA
[2] Princeton Univ, Princeton, NJ 08544 USA
Funding
U.S. National Science Foundation
DOI
10.1051/epjconf/201921405003
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Foundational software libraries such as ROOT are under intense pressure to avoid software regressions, including performance regressions. Continuous performance benchmarking, as part of continuous integration and other code-quality testing, is an industry best practice for understanding how the performance of a software product evolves. We present a framework, built from industry best practices and tools, to help understand ROOT code performance and monitor code efficiency across several processor architectures. It additionally enables historical performance measurements of the ROOT I/O, vectorization, and parallelization sub-systems.
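As an illustration of the general technique the abstract describes (not of the paper's actual framework or tooling), a continuous-benchmark step in a CI job reduces to three parts: time a routine, compare the timing against a stored baseline, and fail the build when the slowdown exceeds a tolerance. The sketch below is hypothetical; the function names, the 10% tolerance, and the use of the minimum over repeats are all assumptions, not details from the paper.

```python
import time

def benchmark(func, *, repeats=5, number=1000):
    """Time `number` calls of `func` per repeat; return the best repeat in seconds."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(number):
            func()
        timings.append(time.perf_counter() - start)
    # The minimum over repeats is the least noisy estimate under system jitter.
    return min(timings)

def check_regression(measured, baseline, tolerance=0.10):
    """Flag a regression when the new timing exceeds the baseline by more than `tolerance`."""
    return measured > baseline * (1.0 + tolerance)

if __name__ == "__main__":
    # A CI job would load `baseline` from a stored history and fail on regression.
    elapsed = benchmark(lambda: sum(range(1000)))
    baseline = elapsed  # first run: record as the new baseline
    print("regressed:", check_regression(elapsed, baseline))
```

In a real CI setup the baseline would come from a database of historical runs keyed by commit and processor architecture, which is what makes the measurements comparable over time.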
Pages: 8