After its first long shutdown, the LHC will provide pp collisions at increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to cope with the increased event rates. The updated system differs radically from the previous implementation, both in architecture and in expected performance. The main architecture has been reshaped to profit from technological progress and to maximize the flexibility and efficiency of the data-selection process.
The trigger system in ATLAS consists of a hardware-based first level (Level-1, L1) and a software-based high-level trigger (HLT), which together reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz.
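As a rough illustration of this two-stage rate reduction, the following minimal C++ sketch chains two accept/reject stages with rejection factors chosen to match the quoted rates; the event type, the accept functions and the random decisions are hypothetical stand-ins, not the ATLAS selection code.

    // Toy two-stage trigger chain: 40 MHz -> ~100 kHz -> a few hundred Hz.
    #include <cstdint>
    #include <iostream>
    #include <random>

    struct Event { std::uint64_t id; };

    // L1 keeps roughly 1 event in 400: 40 MHz -> ~100 kHz.
    bool l1Accept(const Event&, std::mt19937& rng) {
        return std::uniform_int_distribution<>(1, 400)(rng) == 1;
    }

    // The HLT keeps roughly 1 in 250 of those: ~100 kHz -> a few hundred Hz.
    bool hltAccept(const Event&, std::mt19937& rng) {
        return std::uniform_int_distribution<>(1, 250)(rng) == 1;
    }

    int main() {
        std::mt19937 rng(42);
        const std::uint64_t nEvents = 4000000;   // ~0.1 s of beam at 40 MHz
        std::uint64_t l1Passed = 0, recorded = 0;
        for (std::uint64_t i = 0; i < nEvents; ++i) {
            Event ev{i};
            if (!l1Accept(ev, rng)) continue;    // hardware-level rejection
            ++l1Passed;
            if (!hltAccept(ev, rng)) continue;   // software-level rejection
            ++recorded;
        }
        std::cout << "seen " << nEvents << ", after L1 " << l1Passed
                  << ", recorded " << recorded << '\n';
    }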
The pre-existing two levels of software filtering, known as Level-2 (L2) and the Event Filter, have been merged into a single process that performs incremental data collection and analysis.
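The incremental approach can be sketched as follows: each selection step requests only the readout fragments it needs, so an event rejected at an early step never incurs the cost of full event building. The C++ sketch below illustrates this under simple assumptions; names such as fetchFragment, Step and processEvent are hypothetical and do not correspond to the actual ATLAS interfaces.

    // Sketch of incremental data collection with early rejection.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    using Fragment  = std::vector<char>;
    using EventData = std::map<std::string, Fragment>;

    // Stand-in for a request to the readout system.
    Fragment fetchFragment(const std::string& detector) {
        return Fragment(1024);  // dummy payload
    }

    struct Step {
        std::set<std::string> needed;             // fragments this step reads
        std::function<bool(const EventData&)> decide;
    };

    // Fragments are fetched on demand; a rejection at any step means
    // the remaining fragments are never collected.
    bool processEvent(const std::vector<Step>& chain) {
        EventData cache;
        for (const auto& step : chain) {
            for (const auto& det : step.needed)
                if (!cache.count(det))
                    cache.emplace(det, fetchFragment(det));
            if (!step.decide(cache))
                return false;                     // early reject: partial readout only
        }
        return true;                              // accept: event fully collected
    }

    int main() {
        std::vector<Step> chain = {
            {{"calorimeter"},
             [](const EventData& d) { return !d.at("calorimeter").empty(); }},
            {{"muon", "tracker"},
             [](const EventData& d) { return d.size() == 3; }},
        };
        std::cout << (processEvent(chain) ? "accepted" : "rejected") << '\n';
    }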
This design has several advantages: it radically simplifies the architecture, allows a flexible and automatically balanced distribution of the computing resources, and enables the sharing of code and services among nodes. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. The resulting merged network, which connects the HLT processing nodes to the Readout and storage systems, has evolved to provide the connectivity required by the new data-flow architecture, with aggregate throughput and port density increased by an order of magnitude.
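The global farm management can be illustrated with a minimal sketch of a single master assigning L1-accepted events to the least-loaded processing node; the least-loaded policy and all names here are assumptions for illustration, not the actual ATLAS assignment logic.

    // Toy single farm master balancing events over HLT nodes.
    #include <cstdint>
    #include <iostream>
    #include <queue>
    #include <vector>

    struct Node { int id; std::uint64_t inFlight; };

    // Order the farm by current load so the least-loaded node is on top.
    struct ByLoad {
        bool operator()(const Node& a, const Node& b) const {
            return a.inFlight > b.inFlight;
        }
    };

    int main() {
        std::priority_queue<Node, std::vector<Node>, ByLoad> farm;
        for (int i = 0; i < 4; ++i) farm.push({i, 0});

        // Assign each L1-accepted event to the least-loaded node.
        // (Completion handling, which would decrement inFlight, is omitted.)
        for (std::uint64_t event = 0; event < 12; ++event) {
            Node n = farm.top();
            farm.pop();
            ++n.inFlight;
            std::cout << "event " << event << " -> node " << n.id << '\n';
            farm.push(n);
        }
    }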
Moreover, many upgrades to the trigger components have been implemented during this two-year shutdown in order to cope with the increased trigger rates while maintaining, or even improving, the selectivity for relevant physics processes. These upgrades include changes to the L1 muon and calorimeter triggers, the introduction of a new L1 topological trigger, and significant performance improvements in the HLT algorithms used to identify leptons, hadrons and global event quantities such as missing transverse energy.
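As an example of such a global event quantity, missing transverse energy is the magnitude of the negative vector sum of the transverse energy deposits in the event. A minimal C++ sketch of this standard calculation (illustrative only, not the ATLAS implementation):

    // Missing ET as the magnitude of the negative vector sum of deposits.
    #include <cmath>
    #include <iostream>
    #include <vector>

    struct Deposit { double et, phi; };  // transverse energy and azimuthal angle

    double missingEt(const std::vector<Deposit>& deposits) {
        double sumX = 0.0, sumY = 0.0;
        for (const auto& d : deposits) {
            sumX += d.et * std::cos(d.phi);
            sumY += d.et * std::sin(d.phi);
        }
        // The missing transverse energy balances the visible deposits.
        return std::hypot(-sumX, -sumY);
    }

    int main() {
        std::vector<Deposit> calo = {{50.0, 0.3}, {45.0, 2.9}, {20.0, -1.2}};
        std::cout << "ETmiss = " << missingEt(calo) << " GeV\n";
    }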
We will discuss the design choices and the strategies employed to minimize the data-collection and selection latency. Finally, we will show the results of tests performed during the commissioning phase and the operational performance observed after the first months of data taking.