Description
In experimental particle physics, the development of analyses depends heavily on the accurate simulation of background processes, including both the particle collisions and decays and their subsequent interactions with the detector. However, for any specific analysis, a large fraction of these simulated events is discarded by a selection tailored to the events of interest. At Belle II, the simulation of the particle collision and subsequent particle decays is much more computationally efficient than the rest of the simulation and reconstruction chain. Thus, the computational cost of generating large simulated datasets for specific selections could be reduced by predicting which events will pass the selection before running the costly part of the simulation. Deep neural networks have shown promise at this task even when there is no obvious correlation between the quantities available at this early stage and those available after the full simulation. These models, however, must be trained on large preexisting datasets, which, especially for low-efficiency selections, defeats the purpose of the approach. To solve this issue, we show how a model pre-trained on several selections with abundant training data can be fine-tuned on a small dataset, either for a new low-efficiency selection or for a known selection under different detector or software conditions.
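The following is a minimal sketch of the pre-train/fine-tune idea described above, written in PyTorch. All names and numbers (`EventClassifier`, `n_features`, layer sizes, learning rates) are illustrative assumptions; the abstract does not specify the architecture, framework, or hyperparameters.

```python
# Sketch: pre-train a shared backbone on many selections, then fine-tune it
# on a small dataset for a new low-efficiency selection. Hypothetical setup.
import torch
import torch.nn as nn

class EventClassifier(nn.Module):
    """Shared backbone with one binary pass/fail head per known selection."""
    def __init__(self, n_features: int, n_selections: int, d_hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        # One logit per selection for the multi-selection pre-training phase.
        self.heads = nn.Linear(d_hidden, n_selections)

    def forward(self, x):
        return self.heads(self.backbone(x))

# --- Pre-training on multiple selections with abundant simulated data ---
model = EventClassifier(n_features=32, n_selections=10)
x = torch.randn(256, 32)                    # generator-level event features (dummy)
y = torch.randint(0, 2, (256, 10)).float()  # pass/fail label per selection (dummy)
nn.BCEWithLogitsLoss()(model(x), y).backward()  # one illustrative update step

# --- Fine-tuning on a small dataset for a new low-efficiency selection ---
model.heads = nn.Linear(128, 1)             # fresh head for the new selection
optimizer = torch.optim.Adam([
    {"params": model.backbone.parameters(), "lr": 1e-5},  # gently adapt backbone
    {"params": model.heads.parameters(), "lr": 1e-3},     # train new head faster
])
x_small = torch.randn(64, 32)
y_small = torch.randint(0, 2, (64, 1)).float()
optimizer.zero_grad()
nn.BCEWithLogitsLoss()(model(x_small), y_small).backward()
optimizer.step()
```

The split learning rates reflect a common fine-tuning choice: the pre-trained backbone is adjusted slowly to preserve what it learned across selections, while the newly initialized head is trained at a higher rate.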
AI keywords: transformers; transfer learning; fine-tuning; classification