6–13 Jul 2022
Bologna, Italy
Europe/Rome timezone

Session

Computing and Data handling

Computing
8 Jul 2022, 09:00
Room 12 (Celeste)

Conveners

Computing and Data handling

  • James Letts (University of California San Diego (UCSD))
  • Daniele Bonacorsi (Istituto Nazionale di Fisica Nucleare)

Computing and Data handling

  • Daniele Bonacorsi (Istituto Nazionale di Fisica Nucleare)
  • Andrew McNab (University of Manchester)

Computing and Data handling

  • Daniele Bonacorsi (Istituto Nazionale di Fisica Nucleare)
  • Frank Gaede (DESY)

Computing and Data handling

  • Graeme A Stewart (CERN)
  • Daniele Bonacorsi (Istituto Nazionale di Fisica Nucleare)

Computing and Data handling

  • Andrew McNab (University of Manchester)
  • Daniele Bonacorsi (Istituto Nazionale di Fisica Nucleare)


  1. Thomas Carter
    08/07/2022, 09:00
    Computing and Data handling
    Parallel Talk

    AtlFast3 is the next generation of high-precision fast simulation in ATLAS; it is being deployed by the collaboration and will replace AtlFastII, the fast simulation tool successfully used until now. AtlFast3 combines a parametrization-based Fast Calorimeter Simulation with a new machine-learning-based Fast Calorimeter Simulation built on Generative Adversarial Networks (GANs). The...

  2. Badder Marzocchi (Northeastern University (US))
    08/07/2022, 09:15
    Computing and Data handling
    Parallel Talk

    The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material causes electrons and...

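The seed-and-aggregate idea behind topological clustering can be illustrated with a toy sketch. This is not the CMS ECAL algorithm: the thresholds, the square grid, and the greedy growth rule are all invented for illustration.

```python
# Toy sketch of topological clustering: aggregate neighbouring cells around
# local-maximum "seed" crystals above threshold. Illustrative only.

def cluster(grid, seed_thr=1.0, cell_thr=0.1):
    """Group neighbouring cells above cell_thr around seeds above seed_thr.

    grid: dict mapping (ix, iy) -> energy deposit.
    Returns a list of clusters, each a list of (ix, iy) cells.
    """
    def neighbours(c):
        x, y = c
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

    # Seeds: cells above seed_thr that are local maxima among their 8 neighbours.
    seeds = [c for c, e in grid.items()
             if e > seed_thr and all(e >= grid.get(n, 0.0) for n in neighbours(c))]

    clusters, taken = [], set()
    for s in sorted(seeds, key=lambda c: -grid[c]):
        if s in taken:
            continue
        # Grow the cluster by absorbing neighbouring cells above cell_thr.
        clus, frontier = [], [s]
        while frontier:
            c = frontier.pop()
            if c in taken or grid.get(c, 0.0) < cell_thr:
                continue
            taken.add(c)
            clus.append(c)
            frontier.extend(neighbours(c))
        clusters.append(clus)
    return clusters
```

Two well-separated energy bumps then yield two clusters, each owning its neighbouring above-threshold cells.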
  3. Peter McKeown (Deutsches Elektronen-Synchrotron (DE))
    08/07/2022, 09:30
    Computing and Data handling
    Parallel Talk

    While simulation is a crucial cornerstone of modern high energy physics, it places a heavy burden on the available computing resources. These computing pressures are expected to become a major bottleneck for the upcoming high luminosity phase of the LHC and for future colliders, motivating a concerted effort to develop computationally efficient solutions. Methods based on generative machine...

  4. Lucio Anderlini (Istituto Nazionale di Fisica Nucleare)
    08/07/2022, 09:45
    Computing and Data handling
    Parallel Talk

    During Run 2 of the Large Hadron Collider at CERN, the LHCb experiment spent more than 80% of its pledged CPU time producing simulated data samples. The upcoming upgraded version of the experiment will be able to collect larger data samples, requiring many more simulated events to analyze the data to be collected in Run 3. Simulation is a key necessity of analysis to interpret signal vs...

  5. Konstantin Androsov (EPFL)
    08/07/2022, 10:00
    Computing and Data handling
    Parallel Talk

    Tau leptons are a key ingredient to perform many Standard Model measurements and searches for new physics at LHC. The CMS experiment has released a new algorithm to discriminate hadronic tau lepton decays against jets, electrons, and muons. The algorithm is based on a deep neural network and combines fully connected and convolutional layers. It combines information from all individual...

  6. Julián García Pardiñas (University of Milano-Bicocca)
    08/07/2022, 10:15
    Computing and Data handling
    Parallel Talk

    The LHCb experiment is currently undergoing its Upgrade I, which will allow it to collect data at a five-times-larger instantaneous luminosity. A decade from now, the Upgrade II of LHCb will prepare the experiment to face another ten-fold increase in instantaneous luminosity. Such an increase in event complexity will pose unprecedented challenges to the online trigger system, for which a...

  7. Ionela Lavinia Raluca Cruceru (CERN)
    08/07/2022, 11:15
    Computing and Data handling
    Parallel Talk

    The ALICE Collaboration has just finished a major detector upgrade which increases the data-taking rate capability by two orders of magnitude and will allow the collection of unprecedented data samples. For example, the analysis input for 1 month of Pb-Pb collisions amounts to about 5 PB. In order to enable analysis of such large data samples, the ALICE distributed infrastructure was revised and...

  8. Ross Corliss (Stony Brook University)
    08/07/2022, 11:30
    Computing and Data handling
    Parallel Talk

    The sPHENIX detector is a next generation experiment being constructed at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. Starting next year it will collect high statistics data sets from ultra relativistic Au+Au, p+p and p+Au collisions. The readout is a combination of triggered readout for calorimeters and streaming readout for the silicon pixel/strip detectors...

  9. Wen Guan
    08/07/2022, 11:45
    Computing and Data handling
    Parallel Talk

    The intelligent Data Delivery Service (iDDS) has been developed to cope with the huge increase of computing and storage resource usage in the coming LHC data taking. It has been designed to intelligently orchestrate workflow and data management systems, decoupling data pre-processing, delivery, and primary processing in large-scale workflows. It is an experiment-agnostic service that has...

  10. Paolo Girotti (Pi)
    08/07/2022, 12:00
    Computing and Data handling
    Parallel Talk

    The Muon $g-2$ Experiment at Fermilab aims to measure the muon anomalous magnetic moment with the unprecedented precision of 140 parts-per-billion (ppb). In April 2021 the collaboration published the first measurement, relative to the first year of data taking. The result confirmed the previous experiment at Brookhaven National Laboratory (BNL), and increased the long-standing tension with the...

  11. Davide Fazzini (Istituto Nazionale di Fisica Nucleare)
    08/07/2022, 12:15
    Computing and Data handling
    Parallel Talk

    The LHCb experiment has undergone a comprehensive upgrade in preparation for data taking in 2022 and beyond. The offline computing model has been completely redesigned in order to process the much higher data volumes originating from the detector and the associated demands of simulated samples of ever-increasing size. This contribution presents the evolution of the data processing model with a...

  12. František Voldřich (student)
    08/07/2022, 12:30
    Computing and Data handling
    Parallel Talk

    Cerenkov Differential counters with Achromatic Ring focus (CEDARs) in the COMPASS experiment beamline were designed to identify particles in limited-intensity beams with divergence below 65 μrad. However, in the 2018 data taking, a beam with a 15 times higher intensity and a beam divergence of up to 300 μrad was used, hence the standard data analysis method could not be used. A machine learning...

  13. Marco Lorusso (Istituto Nazionale di Fisica Nucleare)
    08/07/2022, 14:30
    Computing and Data handling
    Parallel Talk

    In the past few years, using Machine and Deep Learning techniques has become more and more viable, thanks to the availability of tools which allow people without specific knowledge in the realm of data science and complex networks to build AIs for a variety of research fields. This process has encouraged the adoption of such techniques: in the context of High Energy Physics, new algorithms...

  14. Nick Fritzsche (IKTP, TU Dresden)
    08/07/2022, 14:45
    Computing and Data handling
    Parallel Talk

    After LS3 the LHC will increase its instantaneous luminosity by a factor of 7, leading to the High Luminosity LHC (HL-LHC). At the HL-LHC, the number of proton-proton collisions in one bunch crossing (called pileup) will increase significantly, putting more stringent requirements on the LHC detectors' electronics and real-time data processing capabilities.

    The ATLAS Liquid Argon (LAr)...

  15. Kazuki Todome (Istituto Nazionale di Fisica Nucleare)
    08/07/2022, 15:00
    Computing and Data handling
    Parallel Talk

    The ATLAS experiment plans to upgrade its Trigger/DAQ system for the HL-LHC. Due to the expected large amount of data, one of the key upgrades is how to filter the events in a short time. Part of the filtering is performed based on calorimeter and muon spectrometer information, and then further event filtering is done in the Event Filter (EF) system with data including ones from the inner...

  16. Dr Simranjit Singh Chhibra (CERN)
    08/07/2022, 15:15
    Computing and Data handling
    Parallel Talk

    We propose a signal-agnostic strategy to reject QCD jets and identify anomalous signatures in a High Level Trigger (HLT) system at the LHC. Soft unclustered energy patterns (SUEP), predicted in models with strongly-coupled hidden valleys, could be such a signal, primarily characterized by a nearly spherically-symmetric signature of an anomalously large number of soft charged particles, in...

  17. Viviana Cavaliere (Brookhaven National Laboratory)
    08/07/2022, 15:30
    Computing and Data handling
    Parallel Talk

    This submission describes revised plans for Event Filter Tracking in the upgrade of the ATLAS Trigger and Data Acquisition system for the high-pileup environment of the High-Luminosity Large Hadron Collider (HL-LHC). The new Event Filter Tracking system is a flexible, heterogeneous commercial system consisting of CPU cores and possibly accelerators (e.g., FPGAs or GPUs) to perform the...

  18. Thiago Rafael Tomei Fernandez (SPRACE-Unesp)
    08/07/2022, 15:45
    Computing and Data handling
    Parallel Talk

    The High-Luminosity LHC (HL-LHC) will usher a new era in high-energy physics. The HL-LHC experimental conditions entail an instantaneous luminosity of up to 75 Hz/nb and up to 200 simultaneous collisions per bunch crossing (pileup). To cope with those conditions, the CMS detector will undergo a series of improvements, in what is known as the Phase-2 upgrade. In particular, the upgrade of the...

  19. Martin Zemko (Czech Technical University in Prague)
    08/07/2022, 16:00
    Computing and Data handling
    Parallel Talk

    We developed a novel free-running data acquisition system for the AMBER experiment. The system is based on a hybrid architecture containing scalable FPGA cards for data collection and conventional distributed computing. The current implementation is capable of collecting a sustained data rate of up to 10 GB/s. The data reduction is performed by the filtration farm that decreases the incoming data rate...

  20. Enrico Guiraud (EP-SFT, CERN)
    08/07/2022, 17:00
    Computing and Data handling
    Parallel Talk

    Several recent advancements in ROOT's analysis interfaces enable the development of high-performance, highly parallel analyses in C++ and Python, without requiring expert knowledge of multi-thread parallelization or ROOT I/O. ROOT's RDataFrame is a modern interface for data processing that provides a natural entry point to many of these advancements. Power users can extend existing...

  21. Enrico Guiraud (EP-SFT, CERN)
    08/07/2022, 17:15
    Computing and Data handling
    Parallel Talk

    Deep neural networks are rapidly gaining popularity in physics research. While Python-based deep learning frameworks for training models in GPU environments continue to develop and mature, a good solution that allows easy integration of inference of trained models into conventional C++- and CPU-based scientific computing workflows seems to be lacking.

    We report the latest development in ROOT/TMVA that aims to...

  22. Zef Wolffs (NIKHEF)
    08/07/2022, 17:30
    Computing and Data handling
    Parallel Talk

    RooFit is a toolkit for statistical modeling and fitting, and together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analysis becomes more computationally demanding. Therefore, the focus of RooFit developments in recent years was performance...

  23. Matthew Feickert (University of Illinois at Urbana-Champaign)
    08/07/2022, 17:45
    Computing and Data handling
    Parallel Talk

    The HistFactory p.d.f. template is per se independent of its implementation in ROOT, and it is useful to be able to run statistical analysis outside of the ROOT, RooFit, RooStats framework. pyhf is a pure-Python implementation of that statistical model for multi-bin histogram-based analysis, and its interval estimation is based on the asymptotic formulas of "Asymptotic formulae for...

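As a back-of-the-envelope companion to the HistFactory description above, here is a minimal single-bin Poisson likelihood with one signal-strength parameter, in pure Python. The real pyhf model is far more general (multi-bin, with constrained nuisance parameters), and the event counts in the usage note are invented.

```python
import math

# Minimal sketch of a single-bin HistFactory-style likelihood:
# Poisson(n | mu*s + b), with mu the signal strength. Illustrative only.

def nll(mu, n_obs, s, b):
    """Negative log-likelihood of observing n_obs with signal strength mu."""
    lam = mu * s + b
    return lam - n_obs * math.log(lam) + math.lgamma(n_obs + 1)

def fit_mu(n_obs, s, b):
    """Scan mu on a grid for the best-fit signal strength (toy minimiser)."""
    grid = [i / 1000.0 for i in range(0, 5000)]
    return min(grid, key=lambda mu: nll(mu, n_obs, s, b))
```

For example, with s = 10 expected signal events, b = 50 background, and 60 observed, the scan returns a best-fit signal strength of 1.0, since the Poisson NLL is minimised where mu*s + b equals the observed count.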
  24. Andrea Valassi
    08/07/2022, 18:00
    Computing and Data handling
    Parallel Talk

    Event Generators simulate particle interactions using Monte Carlo processes providing the primary connection between experiment and theory in experimental high energy physics. These make up the first step in the simulation workflow of collider experiments, representing 10-20% of the annual WLCG usage for the ATLAS and CMS experiments. With computing architectures becoming more heterogeneous,...

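A toy illustration of the Monte Carlo principle behind event generation mentioned above: sample phase-space points uniformly and average an event weight to estimate an integral. This is purely pedagogical; the one-dimensional "phase space" and the weight function are stand-ins, not a real generator.

```python
import random

# Pedagogical Monte Carlo integration: draw uniform "phase-space" points and
# average a weight function to estimate its integral over [0, 1].

def mc_estimate(weight, n_events, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_events):
        x = rng.random()      # one-dimensional stand-in for a phase-space point
        total += weight(x)
    return total / n_events   # converges to the integral as n_events grows
```

With weight(x) = 2x, whose integral over [0, 1] is exactly 1, the estimate approaches 1.0 with statistical error shrinking like 1/sqrt(n_events).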
  25. Mr Benjamin Huth (University of Regensburg)
    08/07/2022, 18:15
    Computing and Data handling
    Parallel Talk

    Machine learning is a promising field to augment and potentially replace part of the event reconstruction of high energy physics experiments. This is partly due to the fact that many machine learning algorithms offer relatively easy portability to heterogeneous hardware, and thus could play an important role in controlling the computing budget of future experiments. In addition, the capability...

  26. Savannah Thais (Princeton University)
    08/07/2022, 18:30
    Computing and Data handling
    Parallel Talk

    The determination of charged particle trajectories in collisions at the CERN Large Hadron Collider (LHC) is an important but challenging problem, especially in the high interaction density conditions expected during the future high-luminosity phase of the LHC (HL-LHC). Graph neural networks (GNNs) have been successfully applied to this problem by representing tracker hits as nodes in a graph,...

  27. Alexander Held
    09/07/2022, 09:00
    Computing and Data handling
    Parallel Talk

    Analysis workflows commonly used at the LHC experiments do not scale to the requirements of the HL-LHC. To address this challenge, a rich research and development program is ongoing, proposing new tools, techniques, and approaches. The IRIS-HEP software institute and its partners are bringing together many of these developments and putting them to the test in a project called the "Analysis...

  28. Frank Gaede (DESY)
    09/07/2022, 09:15
    Computing and Data handling
    Parallel Talk

    A shared, common event data model, EDM4hep, is an integral part of the Key4hep project. EDM4hep aims to be usable by all future collider projects, despite their different collision environments and the different detector technologies that are under discussion. This constitutes a major challenge that EDM4hep addresses by using podio, a C++ toolkit for the creation and handling of event data...

  29. Valentin Volkl (CERN)
    09/07/2022, 09:30
    Computing and Data handling
    Parallel Talk

    Detector studies for future experiments rely on advanced software tools to estimate performance and optimize their design and technology choices. The Key4hep project provides a turnkey solution for the full experiment life-cycle based on established community tools such as ROOT, Geant4, DD4hep, Gaudi, podio and spack. Members of the CEPC, CLIC, EIC, FCC, and ILC communities have joined to...

  30. Wouter Deconinck (University of Manitoba)
    09/07/2022, 09:45
    Computing and Data handling
    Parallel Talk

    Modern HEP experiments invest heavily in software. The success of physics discoveries hinges on software quality for data collection, processing, and analysis, and on the ability of users to learn and utilize it quickly. While each experiment has its own flavor of software, it is mostly derived from tools in the common domain. However, most users learn software skills only after joining a...

  31. Avik Roy (University of Illinois at Urbana-Champaign)
    09/07/2022, 10:00
    Computing and Data handling
    Parallel Talk

    In recent years, digital object management practices to support findability, accessibility, interoperability, and reusability (FAIR) have begun to be adopted across a number of data-intensive scientific disciplines. These digital objects include datasets, AI models, software, notebooks, workflows, documentation, etc. With the collective dataset at the Large Hadron Collider scheduled to reach...

  32. Michał Mazurek (CERN)
    09/07/2022, 10:15
    Computing and Data handling
    Parallel Talk

    The LHCb experiment is resuming operation in Run3 after a major upgrade. New software exploiting modern technologies for all data processing and in the underlying LHCb core software framework is part of the upgrade. The LHCb simulation framework, Gauss, had to be adapted accordingly, with the additional constraint that it also relies on external simulation libraries. At the same time a...

  33. Simon Rothman (Massachusetts Inst. of Technology (US))
    09/07/2022, 11:15
    Computing and Data handling
    Parallel Talk

    Nearly all physics analyses at CMS rely on precise reconstruction of particles from their signatures in the experiment's calorimeters. This requires both assignment of energy deposits to particles and recovery of various properties across the detector. These tasks have traditionally been performed by classical algorithms and BDT regressions, both of which rely on human-engineered high level...

  34. Dr Juan Manuel Cruz Martinez (Università degli Studi di Milano)
    09/07/2022, 11:30
    Computing and Data handling
    Parallel Talk

    We present MadFlow, a Python-based software package for the evaluation of cross sections utilizing hardware accelerators.

    The pipeline includes a first stage where the analytic expressions for matrix elements are generated by the MG5_aMC@NLO framework (taking advantage of its great flexibility) and exported in a vectorized, device-agnostic format using the TensorFlow library or a device-specific...

  35. Federica Legger (Istituto Nazionale di Fisica Nucleare)
    09/07/2022, 11:45
    Computing and Data handling
    Parallel Talk

    In recent years, compute performances of GPUs (Graphics Processing Units) dramatically increased, especially in comparison to those of CPUs (Central Processing Units). GPUs are nowadays the hardware of choice for scientific applications involving massive parallel operations, such as deep learning (DL) and Artificial Intelligence (AI) workflows. Large-scale computing infrastructures such as...

  36. Enrico Bothmann (University of Goettingen)
    09/07/2022, 12:00
    Computing and Data handling
    Parallel Talk

    For more than a decade the current generation of CPU-based matrix element generators has provided hard scattering events with excellent flexibility and good efficiency. However, they are a bottleneck of current Monte Carlo event generator toolchains, and with the advent of the HL-LHC and more demanding precision requirements, faster matrix elements are needed, especially at intermediate to...

  37. Mark Neubauer (University of Illinois at Urbana-Champaign)
    09/07/2022, 12:15
    Computing and Data handling
    Parallel Talk

    Extracting scientific results from high-energy collider data involves the comparison of data collected from the experiments with "synthetic" data produced from computationally-intensive simulations. Comparisons of experimental data and predictions from simulations increasingly utilize machine learning (ML) methods to try to overcome these computational challenges and enhance the data analysis....

  38. Davide Zuliani (Istituto Nazionale di Fisica Nucleare)
    09/07/2022, 14:30
    Computing and Data handling
    Parallel Talk

    Machine Learning algorithms are playing a fundamental role in solving High Energy Physics tasks. In particular, the classification of hadronic jets at the Large Hadron Collider is suited for such types of algorithms, and despite the great effort that has been put in place to tackle such a classification task, there is room for improvement. In this context, Quantum Machine Learning is a new...

  39. Jorge Martínez de Lejarza (IFIC-Universitat de València)
    09/07/2022, 14:45
    Computing and Data handling
    Parallel Talk

    Clustering is one of the most frequent problems in many domains, in particular in particle physics, where jet reconstruction is central to experimental analyses. Jet clustering at CERN's Large Hadron Collider is computationally expensive, and the difficulty of this task is expected to increase with the upcoming High-Luminosity LHC (HL-LHC). In this work, we study the case in which quantum...

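For reference, the classical pairwise distance underlying sequential-recombination jet clustering (the measure such quantum approaches target) fits in a few lines. The parameter defaults (p = -1, R = 0.4) are conventional anti-kT choices, assumed here for illustration.

```python
import math

# Generalised-kT pairwise distance used in sequential jet clustering:
# p = -1 gives anti-kT, p = 1 kT, p = 0 Cambridge/Aachen. Toy sketch of the
# classical measure only; a full clusterer would iterate merges with it.

def kt_distance(pt_i, y_i, phi_i, pt_j, y_j, phi_j, p=-1, R=0.4):
    dphi = abs(phi_i - phi_j)
    if dphi > math.pi:                       # wrap the azimuthal angle
        dphi = 2 * math.pi - dphi
    dr2 = (y_i - y_j) ** 2 + dphi ** 2       # rapidity-azimuth separation
    return min(pt_i ** (2 * p), pt_j ** (2 * p)) * dr2 / R ** 2

def beam_distance(pt_i, p=-1):
    """Distance to the beam; when smallest, the particle becomes a jet."""
    return pt_i ** (2 * p)
```

With p = -1, the min() makes the hardest particle dominate pair distances, which is why anti-kT grows cone-like jets around hard cores.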
  40. Stefano Carrazza (Istituto Nazionale di Fisica Nucleare)
    09/07/2022, 15:00
    Computing and Data handling
    Parallel Talk

    We present Qibo, a new open-source framework for fast evaluation of quantum circuits and adiabatic evolution which takes full advantage of hardware accelerators, quantum hardware calibration and control, and large codebase of algorithms for applications in HEP and beyond. The growing interest in quantum computing and the recent developments of quantum hardware devices motivates the development...

  41. Rosa María Sandá Seoane (Instituto de Física Teórica UAM-CSIC)
    09/07/2022, 15:15
    Computing and Data handling
    Parallel Talk

    Machine-Learned Likelihood (MLL) is a method that combines the power of current machine-learning techniques to face high-dimensional data with the likelihood-based inference tests used in traditional analyses. MLL allows estimating the experimental sensitivity in terms of the statistical signal significance through a single parameter of interest, the signal strength. Here we extend the MLL...

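For context on "statistical signal significance through a single parameter of interest", the standard asymptotic (Asimov) median discovery significance for a single counting experiment is a common baseline. This sketch is the textbook formula, not the MLL method itself.

```python
import math

# Asimov approximation for the median discovery significance of a counting
# experiment with expected signal s and background b (asymptotic formulae).

def asimov_significance(s, b):
    """Median significance Z; reduces to s/sqrt(b) when s << b."""
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))
```

In the small-signal limit the expression collapses to the familiar s/sqrt(b), while for larger s it stays well-defined where the naive ratio overestimates.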
  42. Avik Roy (University of Illinois at Urbana-Champaign)
    09/07/2022, 15:30
    Computing and Data handling
    Parallel Talk

    Multivariate techniques and machine learning models have found numerous applications in High Energy Physics (HEP) research over many years. In recent times, AI models based on deep neural networks are becoming increasingly popular for many of these applications. However, neural networks are regarded as black boxes: because of their high degree of complexity it is often quite difficult to...

  43. Alaettin Serhan Mete (Argonne National Laboratory (US))
    09/07/2022, 15:45
    Computing and Data handling
    Parallel Talk

    The ATLAS experiment extensively uses multi-process (MP) parallelism to maximize data-throughput especially in I/O intensive workflows, such as the production of Derived Analysis Object Data (DAOD). In this mode, worker processes are spawned at the end of job initialization, thereby sharing memory allocated thus far. Each worker then loops over a unique set of events and produces its own...

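The fork-after-initialization pattern described above can be sketched with Python's multiprocessing using the Unix fork start method: state allocated before the workers are spawned is inherited copy-on-write, and each worker loops over its own slice of events. The names and the toy workload are invented for illustration, not ATLAS code.

```python
import multiprocessing as mp

CONFIG = None  # large read-only state, initialised before the workers fork

def process_events(events):
    # Each worker processes its own unique set of events, reading the
    # CONFIG it inherited from the parent at fork time (copy-on-write).
    return [e * CONFIG["scale"] for e in events]

def run(all_events, n_workers=2):
    global CONFIG
    CONFIG = {"scale": 10}          # allocated before fork, so shared
    ctx = mp.get_context("fork")    # fork start method (Unix only)
    # Deal events round-robin so every worker gets a disjoint slice.
    chunks = [all_events[i::n_workers] for i in range(n_workers)]
    with ctx.Pool(n_workers) as pool:
        results = pool.map(process_events, chunks)
    return sorted(x for chunk in results for x in chunk)
```

The key design point is ordering: because CONFIG is filled in before the Pool forks, no serialization of the shared state is needed, mirroring the memory savings the abstract describes.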
  44. Maciej Szymański
    09/07/2022, 16:00
    Computing and Data handling
    Parallel Talk

    In the context of the LHCb upgrade for LHC Run 3, the experiment's software build and release infrastructure is being improved. In particular, we present the LHCb nightly build pipelines, which have been modernized to provide a faster turnaround of the produced builds. The revamped system organizes the tasks of checking out the sources, building, and testing the projects in LHCb software stacks on...

  45. Lucia Morganti (Istituto Nazionale di Fisica Nucleare)
    09/07/2022, 17:00
    Computing and Data handling
    Parallel Talk

    INFN CNAF is the National Center of INFN (National Institute for Nuclear Physics) for research and development in the field of information technologies applied to high-energy physics experiments. CNAF hosts the largest INFN data center, which also includes the WLCG Tier-1 site (one of 13 around the world), providing resources, support and services needed for computing and data handling in the...

  46. Chieh Lin
    09/07/2022, 17:15
    Computing and Data handling
    Parallel Talk

    KOTO is a dedicated experiment to search for New Physics through the ultra-rare decay $K_L^0 \rightarrow \pi^0 \nu \bar{\nu}$. In 2023, the $K_L^0$ beam intensity will be increased to collect $K_L^0$ decays faster. An upgrade of the data-acquisition system is therefore being introduced, including an expansion of the data throughput and a third-level trigger decision at the PC farm. The University...

  47. Giles Chatham Strong (Istituto Nazionale di Fisica Nucleare)
    09/07/2022, 17:30
    Computing and Data handling
    Parallel Talk

    The recent MODE whitepaper* proposes an end-to-end differentiable pipeline for the optimisation of detector designs directly with respect to the end goal of the experiment, rather than intermediate proxy targets. The TomOpt python package is the first concrete endeavour in attempting to realise such a pipeline, and aims to allow the optimisation of detectors for the purpose of muon tomography...

  48. Paolo Andreetto (Istituto Nazionale di Fisica Nucleare)
    09/07/2022, 17:45
    Computing and Data handling
    Parallel Talk

    Studies of physics and detector performance of a possible experiment at a Muon Collider are attracting a lot of interest in the High Energy Physics community. Projections show that high precision measurements are possible, as well as large new physics discovery potential. However, the presence of a large beam-induced background (BIB), generated by the decay of the muon beams, poses new computing and...

  49. Julia Manuela Silva (University of Birmingham)
    09/07/2022, 18:00
    Computing and Data handling
    Parallel Talk

    Background modelling is one of the main challenges of particle physics analyses at hadron colliders. Commonly employed strategies are the use of simulations based on Monte Carlo event generators or the use of parametric methods. However, sufficiently accurate simulations are not always available or may be computationally costly to produce in high statistics, leading to uncertainties that can...
