At CERN’s Large Hadron Collider (LHC), hardware trigger systems are crucial in the first stages of data processing: they select a tiny fraction of the 40 million collision events per second for further analysis, within a few microseconds.
Machine Learning (ML) techniques are increasingly used to enable the efficient selection of extremely rare events.
These ML algorithms are deployed on custom computing platforms equipped with Field-Programmable Gate Arrays (FPGAs) to satisfy the extreme throughput and latency constraints.
Moreover, the loss of valuable data can be further reduced by decorrelating these algorithms from selected features of the data, ensuring that their performance remains robust across varying conditions.
For example, rare event searches at the LHC require methods that can effectively leverage both simulated and real-world data to train robust and accurate models.
Additionally, anomaly detection methods are highly susceptible to biases in the data, such as pile-up.
Standard invariant representation techniques face significant challenges under these stringent throughput and latency constraints.
In this work, we propose novel methods for learning invariant representations designed for ultra-fast inference and high-throughput edge applications.
Our contributions are fourfold:
i) we introduce mutual information-based measures to learn invariant representations for both supervised and unsupervised domains;
ii) we develop a new dataset to benchmark and evaluate these techniques;
iii) we implement a stochastic Bernoulli-layer in hardware description language (HDL) to enable seamless integration into FPGAs; and
iv) we demonstrate these techniques on two important physics applications: precision measurements of the $\tau \to 3 \mu$ process in the LHCb experiment, and real-time anomaly detection in the CMS experiment.
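To illustrate contribution (i) without implying this is the authors' exact measure, one simple mutual-information-based decorrelation penalty is a histogram estimate of $I(Z; S)$ between a network output $Z$ and a nuisance feature $S$ (e.g. pile-up), which a training loop could add to the task loss. The sketch below assumes 1-D arrays and a fixed binning; the function name and bin count are illustrative choices, not from the abstract.

```python
import numpy as np

def mutual_information(z, s, bins=16):
    """Histogram estimate of I(Z; S) in nats.

    z : 1-D array of network outputs.
    s : 1-D array of a nuisance feature (e.g. pile-up).
    A hypothetical invariance-regularised objective would minimise
    task_loss + lam * mutual_information(z, s).
    """
    joint, _, _ = np.histogram2d(z, s, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution p(z, s)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(z)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(s)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Driving this estimate toward zero makes the representation carry no histogram-level information about the nuisance, which is the sense of "invariant" used above.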
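For contribution (iii), the behaviour of a stochastic Bernoulli layer can be sketched in a few lines, assuming the common formulation in which binary activations are sampled from sigmoid probabilities (the HDL implementation mentioned above would replace the software random draw with an on-chip pseudo-random comparison; the function name below is illustrative).

```python
import numpy as np

def stochastic_bernoulli(x, rng):
    """Sample binary activations z ~ Bernoulli(sigmoid(x)).

    During training, a straight-through estimator typically passes
    gradients through the probabilities p; at inference on an FPGA,
    sampling reduces to comparing p against a pseudo-random threshold.
    """
    p = 1.0 / (1.0 + np.exp(-x))                       # sigmoid probabilities
    z = (rng.random(p.shape) < p).astype(np.float32)   # binary sample
    return z, p
```

Because the forward pass is only a sigmoid lookup and a comparator, the layer maps naturally onto FPGA logic, which is what makes an HDL implementation attractive for the latency budgets quoted above.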
Keywords: Real-Time ML, anomaly detection, FPGAs