The upgrades foreseen for the HL-LHC runs set ambitious computational requirements: ATLAS expects to record data at a rate of 10 kHz, roughly ten times the 1 kHz achieved so far.
The increase in the data recording rate requires a matching increase in the production of simulated data, which already accounts for almost 40% of the CPU hours consumed by the ATLAS experiment.
An intensive research and development effort is ongoing to speed up the ATLAS simulation infrastructure, and a large number of different paths are currently being investigated.
Optimizations applied when building the simulation code have proved to have a significant impact on execution times: a speed-up of up to 7% of the full Geant4 simulation has been obtained by combining different build types with several compiler optimization options.
Attention has also been devoted to evaluating the impact of newer and simpler shape definitions in the detector geometry construction: a reduction of 5-6% in execution time has been observed.
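To illustrate the kind of simplification involved, the following is a minimal, generic Geant4 sketch, not the actual ATLAS geometry code: the solid names, dimensions, and choice of primitives are invented for illustration. Replacing a Boolean solid with a single primitive reduces the work the navigator performs for every track crossing the volume.

#include "G4Box.hh"
#include "G4Tubs.hh"
#include "G4SubtractionSolid.hh"
#include "G4VSolid.hh"
#include "G4SystemOfUnits.hh"

// Boolean solids are flexible, but every distance-to-surface query during
// tracking has to interrogate all of the constituent solids.
G4VSolid* BuildDetailedShape() {
  auto* envelope = new G4Tubs("envelope", 0., 50.*cm, 100.*cm, 0., 360.*deg);
  auto* cutout   = new G4Box("cutout", 10.*cm, 10.*cm, 110.*cm);
  return new G4SubtractionSolid("envelope_minus_cutout", envelope, cutout);
}

// Where the removed detail is irrelevant for the physics response, a single
// primitive with analytic intersection formulas is cheaper to track through.
G4VSolid* BuildSimplifiedShape() {
  return new G4Tubs("simplified_envelope", 10.*cm, 50.*cm, 100.*cm, 0., 360.*deg);
}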
Several other research lines are currently active, focusing, for instance, on the intrinsic performance improvements brought by newer Geant4 releases, on the optimization of Geant4 physics parameters, statistical decision algorithms and range cuts, and on the exploitation of modern CPU vector registers.
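As a reminder of what tuning range cuts looks like at the Geant4 level, here is a minimal sketch assuming a standalone user physics list; the class name and cut values are hypothetical and do not reflect the ATLAS production configuration. Production thresholds are set per particle type and translated by Geant4 into per-material energy thresholds.

#include "G4VUserPhysicsList.hh"
#include "G4SystemOfUnits.hh"

// Hypothetical physics list: only the cut handling is sketched here.
class DemoPhysicsList : public G4VUserPhysicsList {
 public:
  void ConstructParticle() override { /* define particles as usual */ }
  void ConstructProcess() override  { AddTransportation(); /* plus physics */ }

  // Larger range cuts mean fewer low-energy secondaries are generated and
  // tracked, trading some accuracy for CPU time.
  void SetCuts() override {
    SetDefaultCutValue(0.7*mm);        // baseline production threshold
    SetCutValue(0.1*mm, "gamma");      // tighter cut where precision matters
    SetCutValue(1.0*mm, "e-");         // looser cut where it does not
    SetCutValue(1.0*mm, "e+");
    DumpCutValuesTable();              // print the resulting energy thresholds
  }
};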
The ongoing optimization efforts have already achieved a combined reduction of the order of 30% in execution time.
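For orientation, fractional reductions r_i in execution time combine multiplicatively rather than additively, assuming the individual optimizations do not interfere with each other; the numbers below simply reuse the figures quoted above:

T_{\mathrm{new}} = T_{\mathrm{old}} \prod_i (1 - r_i), \qquad r_{\mathrm{tot}} = 1 - \prod_i (1 - r_i)

With the build-level gain of about 7% and the geometry gain of about 6%, these two items alone give r_tot ≈ 1 - 0.93 × 0.94 ≈ 12.6%; the rest of the quoted ~30% would then come from the other research lines.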
In this talk, the most recent achievements of the ATLAS Geant4 Optimization Task Force, the ongoing studies, and the projections for the HL-LHC phase are reviewed.