GPU in High Level Trigger (3/3)
Convener: Andrea Messina (ROMA1)
Mr Daniel Hugo Campora Perez (CERN)
9/11/14, 4:30 PM
Talk
The LHCb trigger is a real-time system with high computation requirements, in which incoming data from the LHCb detector are analyzed and selected by applying a chain of algorithms. The infrastructure that sustains the current trigger consists of Intel Xeon based servers and is designed for sequential execution. We have extended the current software infrastructure to include support for offloaded...
Stefano Gallorini (PD)
9/11/14, 5:00 PM
Talk
The LHCb experiment is entering its upgrade phase, with its detector and read-out system re-designed to cope with the increased LHC energy after the long shutdown of 2018. In this upgrade, a trigger-less data acquisition system is being developed to read out the full detector at the bunch-crossing rate of 40 MHz. In particular, the High Level Trigger (HLT) system, where the bulk of the trigger...
Mr Felice Pantaleo (CERN)
9/11/14, 5:30 PM
Talk
The Large Hadron Collider is presently undergoing work to increase the centre-of-mass energy to 13 TeV and to reach much higher beam luminosity. It is scheduled to return to operation in early 2015.
With the increasing amount of data delivered by the LHC, the experiments face enormous challenges in adapting their computing resources, not least in terms of CPU usage. This trend will continue...
Dr Denis Oliveira Damazio (Brookhaven National Laboratory), Mr Jacob Howard (University of Oxford)
9/11/14, 6:00 PM
Talk
The potential of GPUs has been evaluated as a possible way to accelerate trigger algorithms for the ATLAS experiment at the Large Hadron Collider (LHC). During LHC Run-1, ATLAS employed a three-level trigger system to progressively reduce the LHC collision rate of 20 MHz to a storage rate of about 600 Hz for offline processing. Reconstruction of charged-particle trajectories through...
Mr Maik Dankel (CERN), Dr Sami Kama (Southern Methodist University Dallas/US)
9/11/14, 6:30 PM
Talk
Modern HEP experiments produce tremendous amounts of data. These data are processed by software frameworks built in-house, which have lifetimes longer than the detectors themselves. Such frameworks were traditionally based on serial code and relied on advances in CPU technology, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs...