The speed and fidelity of detector simulations in particle physics pose compelling questions for LHC analyses and future colliders. The sparse, high-dimensional data, combined with the required precision, make this a challenging task for modern generative networks. We present solutions with different tradeoffs, including accurate and precise Conditional Flow Matching and faster coupling-based...
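The abstract does not spell out the Conditional Flow Matching setup, but the core CFM objective is simple to illustrate. Below is a minimal sketch, assuming a toy 2-D Gaussian target in place of calorimeter showers and a linear model in place of a neural network; all names and constants are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for detector data: target samples x1 from a shifted
# Gaussian, noise samples x0 from a standard Gaussian (2-D for brevity).
MU = np.array([2.0, -1.0])

def sample_pairs(n):
    x0 = rng.standard_normal((n, 2))
    x1 = rng.standard_normal((n, 2)) + MU
    return x0, x1

# Linear velocity model v(x, t) = W @ [x, t, 1], a stand-in for a network.
W = np.zeros((2, 4))

def velocity(x, t):
    feats = np.concatenate([x, t, np.ones_like(t)], axis=1)  # (n, 4)
    return feats @ W.T

lr = 0.05
for step in range(2000):
    x0, x1 = sample_pairs(256)
    t = rng.uniform(size=(256, 1))
    xt = (1.0 - t) * x0 + t * x1      # point on the straight-line path
    target = x1 - x0                  # CFM regression target
    err = velocity(xt, t) - target
    feats = np.concatenate([xt, t, np.ones_like(t)], axis=1)
    W -= lr * err.T @ feats / len(x0)  # SGD step on the mean squared error

# Sampling: integrate dx/dt = v(x, t) from t = 0 to 1 with Euler steps.
x = rng.standard_normal((5000, 2))
for k in range(50):
    t = np.full((5000, 1), k / 50)
    x += velocity(x, t) / 50

print("generated mean:", np.round(x.mean(axis=0), 2))
```

The regression-to-a-velocity-field structure is what makes CFM training cheap compared with simulating full diffusion trajectories; only the integration at sampling time is sequential.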
With the upcoming High-Luminosity Large Hadron Collider (HL-LHC) and the corresponding increase in collision rates and pile-up, a significant surge in data quantity and complexity is expected. In response, substantial R&D efforts in artificial intelligence (AI) and machine learning (ML) have been initiated by the community in recent years to develop faster and more efficient algorithms capable...
The DArk Matter Particle Explorer (DAMPE) is a pioneering instrument launched into space in 2015, designed for precise cosmic-ray measurements reaching unprecedented energies of hundreds of TeV. One of the key challenges with DAMPE lies in cosmic-ray data analysis at such high energies. It has been shown recently that deep learning can boost the experiment's precision in regression (particle...
Model misspecification analysis strategies, such as anomaly detection, model validation, and model comparison, are a key component of scientific model development. Over the last few years, there has been a rapid rise in the use of simulation-based inference (SBI) techniques for Bayesian parameter estimation, applied to increasingly complex forward models. To move towards fully simulation-based...
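The SBI methods this abstract refers to are far more sophisticated than the example below, but the core idea — infer parameters purely from simulator draws, without an explicit likelihood — is captured by rejection ABC, its simplest instance. A minimal sketch, with an illustrative toy forward model (noisy measurements of an unknown mean):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: five noisy measurements of an unknown mean theta,
# summarised by their average. All names here are illustrative.
def simulate(theta):
    return rng.normal(theta, 1.0, size=(5,) + np.shape(theta)).mean(axis=0)

theta_true = 1.5
x_obs = simulate(theta_true)

# Rejection ABC: draw parameters from the prior, simulate, and keep the
# draws whose summary lands within eps of the observed one.
prior_draws = rng.uniform(-5.0, 5.0, size=200_000)
sims = simulate(prior_draws)
eps = 0.1
posterior = prior_draws[np.abs(sims - x_obs) < eps]

print(f"posterior mean {posterior.mean():.2f} +/- {posterior.std():.2f}")
```

Modern SBI replaces the hard rejection window with learned density or ratio estimators, which is what makes misspecification checks subtle: the estimator is only trained on data the simulator can produce.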
The increasing volume of gamma-ray data from space-borne telescopes, like Fermi-LAT, and the upcoming ground-based telescopes, like the Cherenkov Telescope Array Observatory (CTAO), presents us with both opportunities and challenges. Traditional likelihood-based analysis methods are often used for gamma-ray source detection and further characterization tasks. A key challenge to...
Projects such as the imminent Vera C. Rubin Observatory are critical tools for understanding cosmological questions like the nature of dark energy. By observing huge numbers of galaxies, they enable us to map the large scale structure of the Universe. However, this is only possible if we are able to accurately model our photometric observations of the galaxies, and thus infer their redshifts...
Upcoming galaxy surveys promise to greatly inform our models of the Universe’s composition and history. Leveraging this wealth of data requires simulations that are accurate and computationally efficient. While N-body simulations set the standard for precision, their computational cost makes them impractical for large-scale data analysis. In this talk, I will present a neural network-based...
OmniJet-alpha, released in 2024, is the first cross-task foundation model for particle physics, demonstrating transfer learning between an unsupervised problem (jet generation) and a classic supervised task (jet tagging). This talk will present current developments and expansions of the model. We will, for example, show how we are able to utilize real, unlabeled CMS data to pretrain the model....
This work explores the application of Reinforcement Learning (RL) to the control of a Fabry-Perot (FP) optical cavity, a key component in interferometric gravitational-wave detectors. By leveraging RL’s inherent ability to handle high-dimensional non-linear systems, the project aims to achieve robust and autonomous cavity locking—a process typically hindered by elevated finesse values, mirror...
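The abstract does not detail the RL formulation, so as a hedged illustration of the general idea (not the authors' method): a tabular Q-learning agent on a toy 1-D detuning model, where the reward is a Lorentzian transmission peaked at resonance — a crude stand-in for locking a Fabry-Perot cavity. Everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy cavity: state is a discretized mirror detuning; transmitted power
# (the reward) follows a Lorentzian peaked at resonance.
N_STATES = 21            # detuning bins, resonance at the centre bin
CENTER = N_STATES // 2
ACTIONS = [-1, 0, 1]     # move mirror one bin left, hold, one bin right

def reward(state):
    detuning = state - CENTER
    return 1.0 / (1.0 + (detuning / 2.0) ** 2)  # Lorentzian transmission

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    s = int(rng.integers(N_STATES))          # random initial detuning
    for step in range(50):
        # epsilon-greedy action selection
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = reward(s2)
        # standard Q-learning update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Greedy policy from a far-detuned start should walk to resonance and hold.
s = 0
for _ in range(30):
    s = int(np.clip(s + ACTIONS[int(np.argmax(Q[s]))], 0, N_STATES - 1))
print("final detuning bin:", s - CENTER)
```

A real high-finesse cavity is continuous, fast, and only partially observable, which is why the abstract's approach has to go well beyond a tabular toy; this sketch only shows the reward-driven locking loop in miniature.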