Description
Detector simulation is a crucial component of the LHC experiments' analysis activities and consumes roughly 60% of their computing resources. The most demanding part is the simulation of electromagnetic and hadronic showers in the calorimeters, which alone accounts for about 80% of the resources used by detector simulation. This load is expected to grow further with the start of the High-Luminosity LHC (HL-LHC), so various solutions are being developed to alleviate it.
The first is fast simulation: systems that reproduce the detector response faster and with fewer resources than the full Geant4 simulation while preserving good accuracy. The ATLAS Collaboration has developed AtlFast3, a fast simulation system that combines a classical parametric approach with a Machine Learning component based on Generative Adversarial Networks (GANs). AtlFast3 is already in production for the current LHC data taking (Run 3).
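To illustrate the adversarial idea behind the GAN component, the following is a minimal, self-contained sketch in plain NumPy: a one-parameter-family generator learns to match a single scalar "shower observable" (here a Gaussian stand-in for, say, total deposited energy). The 1-D setup, the linear models, and all names are illustrative assumptions, not the AtlFast3 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b maps noise z ~ N(0,1) to a fake observable;
# discriminator D(x) = sigmoid(w*x + c) scores "real vs generated".
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # Discriminator update: ascend log D(real) + log(1 - D(fake))
    real = rng.normal(3.0, 1.0, batch)          # "true" showers ~ N(3, 1)
    fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: ascend log D(fake) (non-saturating GAN loss)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    coeff = (1 - sigmoid(w * fake + c)) * w     # d log D / d fake
    a += lr * np.mean(coeff * z)
    b += lr * np.mean(coeff)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print("generated mean:", round(float(np.mean(samples)), 2))
```

In the real systems the generator is a deep network producing voxelised calorimeter showers, but the alternating update loop above is the same basic training scheme.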
Another strategy is to deploy the most demanding phase of the fast simulation, the training of AtlFast3's GANs, on computing infrastructures beyond those typically used (CERN's batch system and the Worldwide LHC Computing Grid). To this end, FastCaloGANtainer has been developed: a container-based system that allows these neural networks to be trained on High Performance Computing (HPC) facilities such as Leonardo (CINECA, Bologna) while remaining suitably independent of the host system.
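A containerised training job of this kind might be submitted to an HPC batch scheduler roughly as follows. This is a hypothetical SLURM/Apptainer fragment for illustration only: the image name, bind paths, partition, and training entrypoint are assumptions, not the actual FastCaloGANtainer configuration.

```shell
#!/bin/bash
#SBATCH --job-name=fcg-train          # hypothetical job name
#SBATCH --partition=boost_usr_prod    # Leonardo GPU (Booster) partition
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00

# Run the training inside the container; --nv exposes the host GPUs.
apptainer exec --nv \
    --bind /scratch/$USER/vox_data:/data \
    fastcalogantainer.sif \
    python train_gan.py --input /data --epochs 1000
```

Packaging the full software stack in the container image is what gives the independence from the host system mentioned above: the same image can run on CERN resources, the Grid, or an HPC farm.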
This contribution presents AtlFast3 and FastCaloGANtainer, discussing their technical details, the benefits they bring to simulation activities in terms of physics and computational performance, and ideas for future developments. For FastCaloGANtainer, performance is also compared across different hardware and software configurations (with and without GPUs) on various clusters, including the Leonardo supercomputer at CINECA, for which its first ATLAS use case is presented.