Benchmarking the ATLAS software through the Kit Validation engine

Cited by: 5
Authors
De Salvo, Alessandro [1 ]
Brasolin, Franco [2 ]
Affiliations
[1] Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
[2] Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bologna, Bologna, Italy
DOI: 10.1088/1742-6596/219/4/042037
Chinese Library Classification: O57 [Nuclear physics; high-energy physics]
Discipline code: 070202
Abstract
The performance of the experiment software is a key metric for choosing the most effective computing resources and for discovering bottlenecks in the code implementation. In this work we present the benchmark techniques used to measure ATLAS software performance through the ATLAS offline testing engine, Kit Validation, and the online portal, Global Kit Validation. The performance measurements, the data collection, and the online analysis and display of the results are described. Results of the measurements on different platforms and architectures are shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of multi-core computing on ATLAS software performance is also presented, comparing the behavior of different architectures as the number of concurrent processes increases. The benchmark techniques described in this paper have been used within the HEPiX group since the beginning of 2008 to help define performance metrics for High Energy Physics applications based on real experiment software.
Pages: 8
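The paper's own figures and tables are not reproduced in this record. As a rough, hypothetical sketch of the kind of multi-core measurement the abstract describes, the Python snippet below runs an increasing number of concurrent copies of a CPU-bound stand-in job and reports wall time, worker CPU time, and peak worker memory. The workload function, iteration counts, and output format are illustrative assumptions: the real Kit Validation engine executes actual ATLAS transforms (generation, simulation, digitization, reconstruction) rather than a toy loop.

```python
# Hypothetical illustration only -- NOT the actual Kit Validation code.
# It mimics the multi-core study described above: run increasing numbers of
# concurrent copies of a CPU-bound stand-in job and record wall time,
# accumulated worker CPU time, and peak worker memory. Unix-only ('resource').
import multiprocessing as mp
import resource
import time

def workload(n_iter: int) -> int:
    """Stand-in for one ATLAS job; KV would run a real transform instead."""
    acc = 1
    for _ in range(n_iter):
        acc = (acc * 1103515245 + 12345) % (2 ** 31)  # simple LCG step
    return acc

def benchmark(n_procs: int, n_iter: int = 3_000_000) -> None:
    """Run n_procs concurrent copies of the workload and report the scaling."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    wall_start = time.perf_counter()
    pool = mp.Pool(processes=n_procs)
    pool.map(workload, [n_iter] * n_procs)
    pool.close()
    pool.join()  # make sure all workers are reaped before reading rusage
    wall = time.perf_counter() - wall_start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = after.ru_utime - before.ru_utime  # user CPU time of the workers
    rss_mb = after.ru_maxrss / 1024         # ru_maxrss is in KB on Linux
    print(f"{n_procs:2d} procs: wall {wall:6.2f} s, "
          f"throughput {n_procs / wall:5.2f} jobs/s, "
          f"worker CPU {cpu:6.2f} s, peak worker RSS {rss_mb:7.1f} MB")

if __name__ == "__main__":
    # Increase the concurrency to see how throughput scales on this machine.
    for n in (1, 2, 4, 8):
        benchmark(n)
```

On a machine with, say, four physical cores, the reported throughput would be expected to flatten beyond four concurrent processes; that saturation behavior is what the paper's cross-architecture comparison examines with the real ATLAS workloads.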