The LHCb experiment is currently undergoing its Upgrade I, which will allow it to collect data at a five-times-larger instantaneous luminosity. A decade from now, the Upgrade II of LHCb will prepare the experiment to face another ten-fold increase in instantaneous luminosity. Such an increase in event complexity will pose unprecedented challenges to the online trigger system, for which a solution has yet to be found. On the one hand, the current algorithms would be too slow to cope with the high level of particle combinatorics. On the other hand, the event size will become too large to afford persisting all the objects in each event for offline processing. This will make it necessary to select, very accurately, the interesting parts of each event for all possible channels, a gargantuan task. Beyond the trigger, the new conditions will also bring a large increase in background levels for many offline data analyses, owing to the increased particle combinatorics.
We propose a combined solution to these problems, never before attempted at LHCb due to its complexity: replacing the current signal-based trigger approach with a Deep-learning based Full Event Interpretation (DFEI) method. Specifically, we propose a new algorithm that processes the final-state particles of each event in real time, identifying which of them come from the decay of a beauty or charm hadron and reconstructing the hierarchical decay chain through which they were produced. This high-level reconstruction would make it possible to automatically and accurately identify the part of the event that is interesting for physics analysis, allowing the rest of the event to be safely discarded. In addition, it would provide an automated and powerful way to suppress the background in many future LHCb analyses. All in all, a DFEI approach could revolutionise the way event reconstruction is performed at LHCb and pave the way for a step change in its physics reach.
In this talk, we present the conceptualisation, construction, training and performance of the first prototype of the DFEI algorithm, specialised for charged particles produced in beauty-hadron decays. The algorithm is based on a composition of Graph Neural Network (GNN) models, designed to handle the complexity of high-multiplicity events in a computationally efficient way. Each collision event is first transformed into a graph, in which the final-state particles are represented as nodes and the relations between them as edges. A first GNN model removes a fraction of the nodes corresponding to particles not produced in the decay of any beauty hadron. Its output is passed to a second model, which removes a fraction of the edges connecting particles that do not share the same beauty-hadron ancestor. Finally, a third GNN model takes the output of the previous stage and infers the so-called “lowest common ancestor” (LCA) of each edge (a technique similar to the recently proposed LCA-matrix reconstruction for the Belle II experiment). The output of the DFEI processing chain translates directly into a set of filtered final-state particles and their inferred ancestors, with the predicted hierarchical relations amongst them.
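For intuition on the LCA target of the third stage: in a decay such as B → D(→ K π) π, the lowest common ancestor of the K and the π from the D decay is the D, while the LCA of either of them with the other π is the B itself. As a rough illustration of the event-graph construction and the first (node-pruning) GNN stage, the sketch below uses PyTorch Geometric; all class names, feature choices and architecture details are assumptions made for illustration, not the actual DFEI implementation.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv


def event_to_graph(features: torch.Tensor) -> Data:
    """Build a fully connected graph over one event's final-state particles.

    `features` is an (n_particles, n_features) tensor of, e.g., kinematic
    quantities (a feature choice assumed here for illustration).
    """
    n = features.size(0)
    src, dst = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
    mask = src != dst  # drop self-loops
    edge_index = torch.stack([src[mask], dst[mask]], dim=0)
    return Data(x=features, edge_index=edge_index)


class NodePruner(torch.nn.Module):
    """Stage 1 (sketch): score each particle; low-scoring particles are
    discarded as unlikely to come from a beauty-hadron decay."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, data: Data) -> torch.Tensor:
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        return torch.sigmoid(self.head(h)).squeeze(-1)  # per-node keep probability
```

The second and third stages would follow the same pattern, with the second scoring edges instead of nodes and the third classifying each surviving edge into an LCA category.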
The algorithm has been trained on simulated events containing at least one beauty hadron (inclusive decays), obtained with a PYTHIA-based simulation that replicates the particle-collision conditions expected for LHC Run 3, with an approximate emulation of the LHCb detection and reconstruction effects applied on top. Only particles in the LHCb geometric acceptance are considered, which leads to an average of around 150 charged particles per event, of which typically fewer than 10 are produced in the decay of (up to several) beauty hadrons. Graphics Processing Units (GPUs) are used as hardware accelerators to reduce training times; a schematic sketch of such a training loop is given below. The final algorithm shows very good performance when evaluated on the described simulation dataset, with negligible signs of overtraining. These first results offer promising prospects for an eventual use of the algorithm in LHCb and open the door to future developments and extensions.
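A minimal GPU-accelerated training-loop sketch in the spirit of the setup described above, assuming a dataset of PyTorch Geometric `Data` objects carrying per-node binary labels `y` and the hypothetical `NodePruner` from the previous sketch (the batch size, optimiser and learning rate are arbitrary choices, not those of the actual DFEI training):

```python
import torch
from torch_geometric.loader import DataLoader


def train(model, dataset, epochs: int = 20, lr: float = 1e-3):
    # Use a GPU as hardware accelerator when available, as in the text.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCELoss()
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for epoch in range(epochs):
        running_loss = 0.0
        for batch in loader:  # PyG merges the event graphs of a batch into one graph
            batch = batch.to(device)
            optimiser.zero_grad()
            keep_prob = model(batch)                 # per-node scores
            loss = loss_fn(keep_prob, batch.y.float())
            loss.backward()
            optimiser.step()
            running_loss += loss.item()
        print(f"epoch {epoch}: mean loss {running_loss / len(loader):.4f}")
```

Monitoring the same loss on a held-out validation set alongside the training loss is the natural way to check for the overtraining mentioned above.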
In-person participation: Yes