The CMS experiment has been designed with a two-level trigger system: the hardware-based Level-1 (L1) Trigger, and the High Level Trigger (HLT), a streamlined version of the CMS event reconstruction software running on a computer farm. During its “Phase 2” the LHC will reach a luminosity of $7\times 10^{34}\,\mathrm{cm^{-2}\,s^{-1}}$, with an average of 200 simultaneous collisions (pile-up), and the L1 output rate will increase to 750 kHz. All of this will present an unprecedented challenge to the HLT, requiring a processing power a factor of 20 larger than today's.
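As an illustrative back-of-envelope for where the factor of 20 comes from (the 750 kHz figure is quoted above; the 100 kHz value is today's L1 accept rate, and the per-event cost growth at pile-up 200 is an assumption used here to make the arithmetic close, not a number from this abstract):

$$\frac{P_{\mathrm{HLT}}^{\mathrm{Phase\,2}}}{P_{\mathrm{HLT}}^{\mathrm{today}}}\;\approx\;\underbrace{\frac{750\ \mathrm{kHz}}{100\ \mathrm{kHz}}}_{\text{L1 output rate}}\;\times\;\underbrace{\frac{t_{\mathrm{evt}}(\mathrm{PU}=200)}{t_{\mathrm{evt}}(\mathrm{PU\ today})}}_{\approx\,2.5\text{--}3}\;\approx\;20.$$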
This far exceeds the expected speed-up of conventional CPUs, demanding an alternative approach to both algorithm design and hardware selection. On the one hand, industry and the HPC community have turned to heterogeneous computing platforms, achieving higher throughput and better energy efficiency by matching each job to the most appropriate architecture. On the other hand, deep-learning-based techniques are becoming widespread in event reconstruction as well, and may ease the handling of specific data-oriented tasks, such as seed and track selection.
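As a minimal sketch of the offloading pattern behind such heterogeneous platforms (illustrative CUDA, not CMSSW code: the "calibration" step and all names are invented for the example), the same data-parallel task runs on a GPU when one is present and falls back to the CPU otherwise:

```cuda
// Minimal sketch (not CMS code): offload a data-parallel step to a GPU
// when available, with a CPU fallback -- the basic pattern behind
// matching each job to the most appropriate architecture.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Toy "reconstruction" step: scale every energy deposit by a calibration.
__global__ void calibrate(float* e, float c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) e[i] *= c;
}

int main() {
    const int n = 1 << 20;
    const float calib = 1.02f;
    std::vector<float> hits(n, 1.0f);

    int devices = 0;
    if (cudaGetDeviceCount(&devices) == cudaSuccess && devices > 0) {
        // GPU path: copy in, run the kernel over all hits, copy back.
        float* d = nullptr;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, hits.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        calibrate<<<(n + 255) / 256, 256>>>(d, calib, n);
        cudaMemcpy(hits.data(), d, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d);
    } else {
        // CPU fallback: same result, serial loop.
        for (int i = 0; i < n; ++i) hits[i] *= calib;
    }
    std::printf("hits[0] = %f\n", hits[0]);
    return 0;
}
```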
The reliable use of a heterogeneous platform at the CMS HLT, including the integration of machine-learning-based solutions, requires an assessment of its performance and characteristics, which can be obtained by running a prototype in production during Run 3. Its integration in the CMS reconstruction software depends upon improvements to the framework and its scheduling, tailoring the algorithms to fit the different architectures. This presentation will describe the results of this development and the characteristics of the system, along with its future prospects.