Description
The ATLAS experiment ran smoothly last year, with up to 800k reconstruction, simulation and analysis jobs running simultaneously to finalize Run 2 analyses and prepare for the upcoming Run 3. Several important changes have been introduced in the computing model, on both the hardware and software sides. One of the most important achievements is the deployment of the multithreaded version of the event reconstruction code (AthenaMT), which allows a drastic reduction in memory usage, a mandatory feature to run efficiently on modern multi-core machines. In addition, a very detailed review of possible tunings in the simulation code has been carried out, identifying several changes that are expected to speed up the simulation by up to 30% without degrading its quality. A deep review of the analysis model has also been performed: a unified analysis format has been defined, which is expected to be used by more than 80% of the analyses, in order to optimize the available disk capacity. On the operations side, the ATLAS experiment has seen several changes: the first tests of pre-exascale HPC resources (VEGA), and the full deployment of the data carousel mode, a new way of operating with data stored on tape that has now become a crucial part of both the reconstruction and analysis models. This talk will give an overview of last year's ATLAS operations performance and improvements, together with some projections for the HL-LHC phase.
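The memory argument for event-level multithreading can be illustrated with a minimal sketch. This is not ATLAS/Athena code; the `Conditions` struct, payload size, and thread count below are hypothetical stand-ins. The point it shows is that worker threads reconstructing different events can all read a single shared conditions/geometry payload, whereas a one-process-per-core model would hold one copy of that payload per process.

```cpp
// Minimal sketch (assumed, not ATLAS code): why event-level multithreading
// reduces memory. A large read-only "conditions" payload is allocated once
// and shared by all worker threads; a multi-process model would duplicate
// it once per process.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for detector geometry/conditions data (~100 MB here).
struct Conditions {
    std::vector<double> payload;
    Conditions() : payload(100 * 1024 * 1024 / sizeof(double), 1.0) {}
};

std::atomic<long> eventsProcessed{0};

// Each worker processes its own events but only reads the shared conditions.
void worker(const Conditions& cond, int nEvents) {
    for (int i = 0; i < nEvents; ++i) {
        volatile double x = cond.payload[i % cond.payload.size()];  // read-only access
        (void)x;
        ++eventsProcessed;
    }
}

int main() {
    const int nThreads = 8;           // e.g. one worker thread per core
    const int nEventsPerThread = 1000;
    Conditions cond;                  // allocated once, shared by all threads

    std::vector<std::thread> pool;
    for (int t = 0; t < nThreads; ++t)
        pool.emplace_back(worker, std::cref(cond), nEventsPerThread);
    for (auto& th : pool) th.join();

    std::printf("processed %ld events with one shared conditions copy\n",
                eventsProcessed.load());
    return 0;
}
```

In this toy model the resident memory for conditions data stays roughly constant as the number of worker threads grows, while a multi-process setup would scale it with the core count, which is the motivation given for AthenaMT on modern multi-core machines.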