Meeting room: https://l.infn.it/mlinfn-room
Minutes: https://l.infn.it/mlinfn-minutes
We present a prototype version of a new “fast” simulation framework, named FlashSim, targeting analysis-level data tiers such as CMS NanoAOD. The simulation software is based on Machine Learning, in particular on the Normalizing Flows generative model, and achieves significant speed-ups over existing alternatives. Analyses in HEP experiments often rely on large MC simulated datasets. These datasets are usually produced with full-simulation approaches based on Geant4, or with parametric “fast” simulations that introduce approximations to reduce the computational cost.
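The abstract does not spell out the network architecture, so the following is only a minimal sketch of the general idea of a conditional normalizing flow for “flash” simulation: latent noise is pushed through invertible coupling blocks conditioned on generator-level inputs to produce reconstructed-level (NanoAOD-like) quantities. All class names, dimensions, and hyperparameters here are illustrative assumptions, not the FlashSim implementation.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One coupling block: predicts scale/shift for half of the features
    # from the other half concatenated with the conditioning context.
    def __init__(self, dim, context_dim, hidden=64, flip=False):
        super().__init__()
        assert dim % 2 == 0
        self.flip = flip
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.half),
        )

    def sample(self, z, context):
        # Latent -> data direction, the one used when simulating events.
        a, b = z[:, :self.half], z[:, self.half:]
        if self.flip:
            a, b = b, a
        s, t = self.net(torch.cat([a, context], dim=1)).chunk(2, dim=1)
        b = b * torch.exp(s) + t  # affine transform of the passive half
        return torch.cat([b, a], dim=1) if self.flip else torch.cat([a, b], dim=1)

class FlashFlow(nn.Module):
    # Stack of coupling blocks, conditioned on generator-level features.
    def __init__(self, dim, context_dim, n_blocks=4):
        super().__init__()
        self.dim = dim
        self.blocks = nn.ModuleList(
            AffineCoupling(dim, context_dim, flip=(i % 2 == 1))
            for i in range(n_blocks)
        )

    @torch.no_grad()
    def simulate(self, gen_features):
        # Draw one latent vector per generated object and map it to
        # analysis-level observables.
        z = torch.randn(gen_features.shape[0], self.dim)
        for block in self.blocks:
            z = block.sample(z, gen_features)
        return z

# Hypothetical usage: 6 reco-level jet features from 4 gen-level inputs.
flow = FlashFlow(dim=6, context_dim=4)
reco = flow.simulate(torch.randn(1000, 4))  # shape (1000, 6)
```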
We will present the physics results achieved with our prototype, which currently simulates only a few physics object collections, in terms of: 1) the accuracy of object properties, 2) the correlations among pairs of observables, and 3) comparisons of analysis-level derived quantities and discriminators between full simulation and flash simulation of the very same events. The speed-up obtained with this approach is of several orders of magnitude, so that with FlashSim the simulation bottleneck becomes the “generator” step (e.g. Pythia). We further investigated upsampling techniques, reusing the same generated event by passing it multiple times through the detector simulation, in order to understand the increase in statistical precision that can ultimately be achieved (see the toy sketch below). The results obtained with the current prototype show higher physics accuracy and lower computing cost than other fast-simulation approaches such as the standard CMS FastSim and Delphes-based simulations.
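As a rough illustration of the upsampling study (not FlashSim code), the toy below reuses each “generated” truth value several times with independent detector-level smearing and measures how the statistical uncertainty of an observable shrinks; the numbers (spreads, sample sizes) are arbitrary assumptions chosen only to show the saturation effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_uncertainty(n_gen, oversample, gen_spread=1.0, det_spread=0.5, n_trials=2000):
    # Each trial: n_gen generator-level truth values, each simulated
    # `oversample` times with independent detector smearing; return the
    # spread of the per-trial mean (the statistical uncertainty).
    means = []
    for _ in range(n_trials):
        truth = rng.normal(0.0, gen_spread, size=n_gen)
        reco = truth[:, None] + rng.normal(0.0, det_spread, size=(n_gen, oversample))
        means.append(reco.mean())
    return np.std(means)

for k in (1, 2, 5, 10, 50):
    print(f"oversampling x{k:3d}: uncertainty = {toy_uncertainty(200, k):.4f}")
# The uncertainty decreases only until the shared generator-level fluctuations
# dominate, which bounds the statistical gain from reusing generated events.
```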