BOOST 2024 is the 16th conference in a series of successful joint theory/experiment workshops that bring together the world's leading experts from theory and the LHC/RHIC experiments to discuss the latest progress and to develop new approaches to the reconstruction and use of jet substructure, both to study Quantum Chromodynamics (QCD) and to search for physics beyond the Standard Model.
This year's edition is jointly organised by Dipartimento di Fisica, Università di Genova, and Istituto Nazionale di Fisica Nucleare (INFN) - Sezione di Genova. The conference will cover the following topics:
Previous editions:
Experimental uncertainties related to hadronic object reconstruction can limit the precision of physics analyses at the LHC, and so improvements in performance have the potential to broadly increase the impact of results. Recent refinements to reconstruction and calibration procedures for ATLAS jets and MET result in reduced uncertainties, improved pileup stability and other performance gains. In this contribution, highlights of these developments will be presented.
We present new developments in jet reconstruction and calibration for LHC Run 3. A new regression approach to jet calibration is explored, and pileup-mitigation techniques are developed for the joint reconstruction of hadronic taus and jets.
Hadronic object reconstruction is one of the most promising settings for cutting-edge machine learning and artificial intelligence algorithms at the LHC. In this contribution, highlights of ML/AI applications by ATLAS to particle and boosted-object identification, MET reconstruction and other tasks will be presented.
A fundamental aspect of CMS research concerns the identification and characterisation of jets originating from quarks and gluons produced in high-energy proton-proton collisions. Electroweak-scale resonances (W/Z bosons), Higgs bosons and top quarks are often produced with high Lorentz boosts, so that their decay products become highly collimated and are usually reconstructed as single large, massive AK8 jets. The identification of the particle initiating the jet therefore plays a crucial role in distinguishing boosted top quarks and bosons from the QCD background. In this talk, an overview of the boosted-jet taggers used within CMS will be given. It will highlight the most recent AK8 tagging algorithms, which make use of sophisticated machine-learning techniques optimised for performance and efficiency. Furthermore, the presentation will show the validation of ML-based taggers, developed for AK8 jets originating from boosted resonances decaying to $\mathrm{b\bar{b}}$, comparing CMS data and MC simulation.
Flavour-tagging is a critical component of the ATLAS experiment physics programme. Existing flavour-tagging algorithms rely on several low-level taggers, which are a combination of physically informed algorithms and machine learning models. A novel approach presented here instead uses a single machine learning model based on reconstructed tracks, avoiding the need for low-level taggers based on secondary vertexing algorithms. This new approach reduces complexity and improves tagging performance. The model employs a transformer architecture to process information from a variable number of tracks and other objects in the jet in order to simultaneously predict the jet's flavour, the partitioning of tracks into vertices, and the physical origin of each track. The inclusion of auxiliary tasks aids the model's interpretability. The new approach significantly improves jet flavour identification performance compared to existing methods in both Monte Carlo simulation and collision data. Notably, the versatility of the approach is demonstrated by its successful application to boosted Higgs tagging using large-R jets.
New searches with exotic jet substructure techniques from CMS are presented. Signatures with challenging reconstruction requirements include displaced jets, closely merged photon pairs and soft unclustered energy patterns. New reconstruction techniques making use of machine learning are presented, together with the resulting physics results.
The ATLAS Level-1 Calorimeter (L1Calo) trigger is a custom-built hardware system that identifies events containing calorimeter-based physics objects, including electrons, photons, taus, jets, and missing transverse energy. The L1Calo system has been upgraded for Run 3 to respond to the challenging environment characterized by increasingly high luminosity and pileup conditions. As part of this upgrade, a new FPGA-based component called the global feature extractor (gFEX) has been introduced in the L1Calo trigger system. Its purpose is to identify patterns of energy related to the hadronic decays of high momentum Higgs, W & Z bosons, top quarks, and exotic particles in real-time at the LHC crossing rate. Specifically, gFEX provides the ATLAS trigger system with the ability to detect events containing large-radius jets for the first time at Level-1. The design and capabilities of the gFEX system will be discussed, along with a review of its physics performance in Run 3.
Reconstructing heavy particles from their observed decay products becomes complex when decay chains with many intermediate and final state particles are involved and requires solving ambiguities. Modern machine learning techniques offer new solutions to this task. We discuss new applications of machine-learning techniques for event-level particle reconstruction in CMS.
Machine learning has become an essential tool in jet physics. Due to their complex, high-dimensional nature, jets can be explored holistically by neural networks in ways that are not possible manually. However, innovations in all areas of jet physics are proceeding in parallel. We show that large machine learning models trained for a jet classification task can improve the accuracy, precision, or speed of all other jet physics tasks. This is demonstrated by training a large model on a particular multiclass classification task and then using the learned representation for a different classification task, for a dataset with a different (full) detector simulation, for jets from a different collision system ($pp$ versus $ep$), for generative models, for likelihood ratio estimation, and for anomaly detection. Our OmniLearn approach is thus a foundation model and is made publicly available for use in any area where state-of-the-art precision is required for analyses involving jets and their substructure.
Multi-head attention based Transformers have taken the world by storm, given their outstanding capacity of learning accurate representations of diverse types of data. Famous examples include Large Language Models, such as ChatGPT, and Vision Transformers, like BEiT, for image generation. In this talk, we take these major technological advancements to the realm of jet physics. By creating a discrete version of jet constituents, we let an Auto-regressive Transformer network learn the ‘language’ of jet substructures. We demonstrate that our Transformer model learns highly accurate representations of different types of jets, including precise predictions of their multiplicity, while providing explicit density estimation. Moreover, we show that the Transformer model can be used for a variety of tasks that involve both jet tagging and generation. Finally, we discuss how a pre-trained Transformer can be used as a baseline for fine-tuned models created for specific tasks for which data may be scarce.
The production of W/Z bosons in association with light or heavy flavor jets or hadrons at the LHC provides an important test of perturbative QCD. In this talk, measurements by the ATLAS experiment probing the charm and beauty content of the proton are presented. Inclusive and differential cross-sections of Z boson production with at least one c-jet, or one or two b-jets are measured for events in which the Z boson decays into a pair of electrons or muons. Predictions from several Monte Carlo generators based on next-to-leading-order (NLO) matrix elements interfaced with a parton-shower simulation, with different choices of flavour schemes for initial-state partons, are compared with the measured cross sections. Moreover, measurements of inclusive, differential cross sections for the production of missing transverse momentum plus jets are presented. Auxiliary measurements of the hadronic system recoiling against isolated leptons, and photons, are also made in the same phase space, and ratios are formed. The measurements are designed both to allow comparison to Standard Model predictions, and to be sensitive to potential extensions to the Standard Model, particularly those involving the production of Dark Matter particles.
LHCb is a spectrometer targeting the forward region of proton-proton collisions, covering a pseudorapidity range between 2 and 5. Due to its excellent reconstruction performance and its clean environment, LHCb is well suited to study jets and their substructure. In this contribution, the latest jet physics measurements at LHCb are presented, with a focus on the latest techniques used to reconstruct and calibrate jets.
Various measurements related to the study of jet substructure in proton-proton collisions at 13 TeV with the CMS experiment are presented. These include observables sensitive to the strong coupling, namely the primary Lund jet plane and energy-energy correlators, which also exhibit interesting experimental properties. Further measurements characterise the phase space and radiation patterns in boosted hadronic W boson and top quark decays.
Jets, the collimated streams of hadrons resulting from the fragmentation of highly energetic quarks and gluons, are some of the most commonly observed radiation patterns in hadron collider experiments. The distribution of quantum chromodynamic (QCD) radiation within jets is determined by complex processes, the production of showers of quarks and gluons and their subsequent recombination into hadrons. Presented are measurements of non-perturbative track functions, as well as differential cross-section of Lund sub-jet multiplicities and measurements of the Lund Jet Plane in top quark pair production. Finally, the substructure of top-quark jets, using top quarks reconstructed with the anti-kt algorithm is highlighted. The results are compared to a large variety of parton shower models and tunes.
The Lund jet plane (LJP) is an observable introduced to better understand the radiation pattern of jets in terms of the jets-within-the-jets found with iterative Cambridge/Aachen declustering. The LJP is a two-dimensional representation of the phase space of $1\to 2$ branchings, where the logarithm of the relative transverse momentum ($k_t$) and the logarithm of the rapidity-azimuth distance ($\Delta$) of emissions with respect to their emitter are used for the vertical and horizontal axes. The primary LJP, the first triangular leaf of Lund diagrams, is well understood analytically, and measurements at the LHC show how it can be used to constrain parton showers and hadronization models in a factorized way. One can extend the exploration of the Lund jet tree by turning to the LJP of a primary emission, the secondary LJPs. Quark jet showers are strongly constrained in $e^+e^-$ collisions at LEP, whereas gluon jet showers are understood less well. If the primary emission is chosen judiciously, such that it corresponds to the first branching in the jet shower, one can constrain the modeling of gluon-initiated jet showers independently of the quark/gluon jet composition of the jet sample. In this talk, we discuss how one can use such a sample of gluon-rich jet radiation to constrain gluon-initiated parton showers in the secondary LJP. Because of the resilience to the quark/gluon jet fraction, other substructure observables calculated on the secondary LJP could be used for precision physics. These possibilities will also be discussed.
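To make the LJP coordinates concrete, the following is a minimal, self-contained sketch (toy code, not any experiment's implementation) of Cambridge/Aachen clustering followed by declustering of the primary branch; the `(pt, y, phi)` particle format, scalar-pt recombination, and naive azimuth averaging are all simplifying assumptions:

```python
import math

def delta_r(a, b):
    """Rapidity-azimuth distance between two pseudoparticles (pt, y, phi)."""
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(a[1] - b[1], dphi)

def ca_cluster(particles):
    """Cambridge/Aachen: repeatedly merge the closest pair in (y, phi).
    Nodes are (kinematics, child1, child2); leaves have child = None."""
    nodes = [((pt, y, phi), None, None) for pt, y, phi in particles]
    while len(nodes) > 1:
        i, j = min(((i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))),
                   key=lambda ij: delta_r(nodes[ij[0]][0], nodes[ij[1]][0]))
        a, b = nodes[i], nodes[j]
        pt = a[0][0] + b[0][0]                              # toy recombination scheme
        y = (a[0][0] * a[0][1] + b[0][0] * b[0][1]) / pt    # pt-weighted rapidity
        phi = (a[0][0] * a[0][2] + b[0][0] * b[0][2]) / pt  # naive near phi = +-pi
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)]
        nodes.append(((pt, y, phi), a, b))
    return nodes[0]

def primary_lund_plane(jet):
    """Decluster along the harder branch, recording (ln 1/Delta, ln kt) per emission."""
    points = []
    node = jet
    while node[1] is not None:
        hard, soft = sorted([node[1], node[2]], key=lambda n: -n[0][0])
        delta = delta_r(hard[0], soft[0])
        points.append((math.log(1.0 / delta), math.log(soft[0][0] * delta)))
        node = hard
    return points
```

Each recorded point is one primary emission; scatter-plotting many jets' points fills the triangular primary Lund plane.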
Particle physics has entered an era where high-precision calculations are required to compare theoretical predictions with experimental data. In this talk, I will describe a new method to compute the virtual contributions in $k_T$-factorization [1,2], called the auxiliary parton method. This method, which was already successfully applied at LO [3] to describe the forward-forward dijet correlations measured by the ATLAS collaboration for proton-proton and proton-lead collision [4], has been extended to the NLO to calculate the virtual [1] and the real corrections [2].
As I will explain, the formalism developed in [1] and [2] is a fundamental step to bridge the gap between the lowest order calculations and the NLO corrections in hybrid $k_T$-factorization, thus being relevant for a more precise description of the experimental data in the rich field of the so-called small-x physics, such as gluon luminosity saturation and forward jet production.
Affiliation: Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, 46071 Valencia, Spain
E-mail: alessandro.giachino@ific.uv.es
[1] E. Blanco et al., "One-loop gauge invariant amplitudes with a space-like gluon", Nucl. Phys. B 995 (2023) 116322
[2] A. Giachino et al., "A new subtraction scheme at NLO exploiting the privilege of kT-factorization", e-Print: 2312.02808 (submitted to JHEP)
[3] A. van Hameren et al., Phys. Lett. B 795 (2019) 511-515
[4] M. Aaboud et al. (ATLAS Collaboration), Phys. Rev. C 100 (2019) 034903
Parton showers are immensely flexible tools that are currently undergoing significant development in terms of their logarithmic accuracy, first to next-to-leading logarithmic (NLL) and more recently towards next-to-NLL (NNLL) accuracy. These improvements should make them significantly more powerful tools for precision collider physics, including jet substructure studies. I will present recent developments within the PanGlobal family of parton showers which result in them achieving NNLL accuracy for event shapes in final state showers. I will then discuss progress towards including triple-collinear corrections, which are part of the path to general NNLL accuracy.
In this talk, we introduce energy-weighted observable correlations (EWOCs): generalizations of the energy-energy correlator (EEC) which use subjets to characterize a wide variety of correlations between collective degrees of freedom in high-energy particle collisions. EWOCs use subjets to produce a manifestly infrared and collinear safe extension of the EEC, which probes energy-weighted angular correlations between particles, to arbitrary energy-weighted correlations between subjets. For concreteness, we focus on the specific example of the mass EWOC -- an energy-weighted probe of the mass of subjet pairs. Motivated by recent proposals for the use of the EEC in determining the mass of the top quark, we show that the mass EWOC is an intuitive proxy for the masses of resonances which decay into pairs of energetic subjets produced in electron-positron and proton-proton collisions. As a proof of concept, we show that the mass EWOC outperforms the EEC in the extraction of the $W$ boson mass in samples of $W$ boson pair-production produced in $\texttt{Pythia 8.244}$, and is robust to non-perturbative effects.
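A minimal sketch of the idea (toy code, not the authors' implementation): for each pair of subjets, record the pair's invariant mass together with an energy weight; histogramming the weights versus mass yields the mass EWOC, and the EEC is recovered by recording the pair angle instead. The `(E, px, py, pz)` subjet format and total-energy normalisation are assumptions:

```python
import math

def mass_ewoc_entries(subjets):
    """For each subjet pair, return (pair_mass, weight), where
    weight = 2 * E_i * E_j / Q^2 and Q is the total energy (toy normalisation)."""
    q = sum(s[0] for s in subjets)
    entries = []
    for i in range(len(subjets)):
        for j in range(i + 1, len(subjets)):
            ei, pi = subjets[i][0], subjets[i][1:]
            ej, pj = subjets[j][0], subjets[j][1:]
            e = ei + ej
            p = [a + b for a, b in zip(pi, pj)]
            m2 = max(e * e - sum(c * c for c in p), 0.0)  # guard against rounding
            entries.append((math.sqrt(m2), 2.0 * ei * ej / q ** 2))
    return entries
```

For a resonance decaying to two energetic subjets, the entries peak at the resonance mass, which is the intuition behind using the mass EWOC for W and top mass extraction.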
In this talk, we will present a theoretical framework for studying heavy-flavor jet substructure in a dense QGP medium, built on a factorised picture of vacuum-like and medium-induced radiation, following arXiv:2312.15560 and ongoing work. We studied the $z_g$ distribution for heavy-flavor (bottom and charm quark) jets propagating through the dense QCD medium. Unlike the previous study in the BDMPS-Z framework, which takes the $\omega\ll\omega_{c}$ limit and leads to a simplified, factorised formula for the spectrum, we use the full expression. Finally, the extension to an expanding medium and some preliminary results will be introduced briefly.
Fragmentation of heavy quarks into heavy-flavoured hadrons receives both perturbative and non-perturbative contributions. We consider perturbative QCD corrections to heavy quark production in $e^+e^-$ collisions to next-to-next-to-leading order accuracy in QCD with next-to-next-to-leading-logarithmic resummation of quasi-collinear and soft emissions.
We study multiple matching schemes, and multiple regularisations of the soft resummation, and observe a significant dependence of the perturbative results on these ingredients, suggesting that NNLO+NNLL perturbative accuracy may not lead to real gains unless the interface with non-perturbative physics is properly analysed.
We confirm previous evidence that $D^{*+}$ experimental data from CLEO/BELLE and from LEP are not reconcilable with perturbative predictions employing standard DGLAP evolution.
We extract non-perturbative contributions from $e^+e^-$ experimental data for both $D$ and $B$ meson fragmentation. Such contributions can be used to predict heavy-quark fragmentation in other processes, e.g. DIS and proton-proton collisions.
We report progress on the Heavy-Flavor Non-Relativistic Evolution (HF-NRevo) setup, a novel methodology to address the quarkonium formation within the fragmentation approximation. Our analysis addresses the moderate to large transverse-momentum regime, where the production mechanism based on the leading-twist collinear fragmentation from a single parton is expected to prevail over the higher-twist emission, directly from the hard-scattering subprocess, of the constituent heavy-quark pair. We rely upon Non-Relativistic-QCD (NRQCD) next-to-leading calculations for all the parton fragmentation channels to vector ($J/\psi$ and $\Upsilon$) and pseudoscalar ($\eta_c$ and $\eta_b$) quarkonia, which we take as proxies for initial-scale inputs. Thus, a complete set of variable-flavor number-scheme fragmentation functions, named NRFF1.0, are built through standard DGLAP evolution. Statistical errors are assessed via a Monte Carlo, replica-like approach that also accounts for Missing Higher-Order Uncertainties (MHOUs). The link between the NRFF1.0 approach and the MCscales one will be discussed. As a prospect, the use of HF-NRevo to address the quarkonium-in-jet fragmentation will be highlighted.
We present a study on single heavy baryons' spectra and strong decay widths. The masses of single heavy baryons up to the D-wave are calculated within a constituent quark model, employing the three-quark and quark-diquark schemes. In this contribution, we discuss the possible assignment of the recently discovered $\Omega_c^0(3327)$, $\Lambda_b(6146)^0$, $\Lambda_b(6152)^0$, $\Xi_b(6327)^{0}$, and $\Xi_b(6333)^{0}$ as D-wave excited states in the charm and bottom sectors, respectively. Additionally, we discuss the $\Lambda_b (6070)^0$ assignment and why the presence or absence of the $\rho$-mode excitations in the experimental spectrum is the key to distinguishing between the quark-diquark and three-quark behaviors.
We present the first theoretical calculation of nonfactorizable charm-quark loop contributions to the $B_s\to \gamma\, l^+l^-$ amplitude. This contribution involves the $B$-meson three-particle Bethe-Salpeter amplitude, $\langle 0|\bar s(y)G_{\mu\nu}(x)b(0)|\bar B_s(p)\rangle$, for which we take into account constraints from analyticity and continuity. We calculate the relevant form factors, $H_{A,V}^{\rm NF}(k'^2,k^2)$, and provide convenient parametrizations of our results, applicable in the region below hadron resonances, $k'^2 < M_{J/\psi}^2$ and $k^2 < M_{\phi}^2$. We report that factorizable and nonfactorizable charm contributions to the $B_s\to\gamma\, l^+l^-$ amplitude have opposite signs. To compare the charm and the top contributions, the nonfactorizable charming loop contribution is expressed as a non-universal (i.e., dependent on the reaction) $q^2$-dependent correction $\Delta^{\rm NF}C_7(q^2)$ to the Wilson coefficient $C_7$. For the $B_s\to\gamma\, l^+l^-$ amplitude the correction is found to be positive, $\Delta^{\rm NF} C_7(q^2)/C_7 > 0$. Our numerical results for the form factors $H^{\rm NF}_i(k'^2,k^2)$ depend sizeably on the precise value of the parameter $\lambda_{B_s}$, and for a fixed value of $\lambda_{B_s}$, $H^{\rm NF}_i(k'^2,k^2)$ may be calculated with about 10% accuracy.
Identifying particles that form jets in the CMS detector is a crucial part of many physics analyses. It has generally proven quite difficult to infer the charge of the originating particles. In this poster, we demonstrate a novel method to discriminate between Lorentz-boosted W+, W-, and Z boson jets. In order to do so, we have designed a specialized Dynamic Graph Convolutional Neural Network (DGCNN), based on the ParticleNet framework, which is trained on dedicated Monte Carlo simulation samples. It treats the jet as a "particle cloud", and learns the intrinsic differences between different types of jets by exploiting low-level features of the particle constituents inside the jet. Utilizing this jet charge tagger, we were able to significantly enhance the discrimination power compared to traditional variable-based methods. The poster also explains a possible use case of such a tagger within a physics analysis at the CMS experiment: isolating same-sign WW events from opposite-sign WW, and WZ, events in vector boson scattering.
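For context, the traditional variable-based baseline referred to above is the pt-weighted jet charge. A minimal sketch, where the list-of-`(pt, charge)` input format and the value of the weighting exponent $\kappa$ are illustrative choices rather than the CMS configuration:

```python
def jet_charge(constituents, kappa=0.5):
    """pt-weighted jet charge: Q_kappa = sum_i q_i * pt_i^kappa / (sum_i pt_i)^kappa.
    constituents: list of (pt, electric_charge) for the jet's particles."""
    pt_jet = sum(pt for pt, _ in constituents)
    return sum(q * pt ** kappa for pt, q in constituents) / pt_jet ** kappa
```

On average this is positive for W+ jets, negative for W- jets and near zero for Z jets, but the per-jet distributions overlap strongly, which is what motivates the ML-based tagger.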
Most research in high-energy physics nowadays begins with data. In recent years, effectively managing an increasing volume of data has become crucial for most publications. This holds true not only for high-energy physics but also for a wide range of activities, including healthcare, economics, computing, and business.
Traditionally, researchers analyze data by writing extensive code in various programming languages, from Fortran to Python, consuming valuable time that could be better spent developing meaningful interpretations of the data.
Enter Rulex Platform: a self-coding platform capable of visualizing and analyzing terabytes of data on a standard laptop, and of streamlining preprocessing and machine learning analysis through simple drag-and-drop operations in a workflow. This allows you to focus on the essence of your research while leaving the tedious coding tasks to software that automates the process for you.
The precise measurement of the jet energy and mass scales is a crucial input to many physics measurements that use the proton-proton collision data recorded by the ATLAS detector at the LHC. The energy determination of jets originating from bottom quarks is challenging because, for example, these jets can contain leptonic heavy-flavour decays into a charged lepton and an unobservable neutrino. This contribution reports on a novel calibration technique for b-quark jets that uses a transformer architecture to estimate their true energy. Separate algorithms have been developed to estimate the momentum of jets containing a single b-hadron and the energy and mass of jets that contain two b-quarks. This poster also includes a discussion of how to estimate the jet energy resolution in data.
In our search using CMS data for a low-mass, boosted pseudoscalar ($m_a < 15$ GeV) decaying to b-jet pairs and $\tau$ pairs, predicted by the Two-Higgs-Doublet Model plus a singlet (2HDM+S), we find that the b-jets tend to merge when run through the anti-$k_T$ jet reconstruction algorithm with distance parameter $R = 0.4$ (AK4). Standard CMS b-taggers are not optimised for this signal. We discriminate this topology by training a specialised Graph Neural Network based on the ParticleNet framework. We use the standard ParticleNet flow, stacking multiple blocks of the EdgeConv algorithm, but carefully tailor the network size and input parameters, and train our signal against our dominant backgrounds ($t\bar{t}$ and QCD). In doing this, we find significant improvement over the DeepFlavour and DeepCSV algorithms. This innovative approach provides excellent opportunities to exploit B-meson-triggered data and expand the scope of new-physics searches.
Recently, the projected N-point energy correlators (ENCs) have seen a resurgence of interest for hadronic collisions at RHIC and the LHC as probes of vacuum QCD. In this talk, we will show that the full three-point energy-energy-energy correlation (EEEC) function can be useful for studying the shape of energy flow within jets. In vacuum, it has been shown that these correlators elucidate the collinear singularity of vacuum QCD. For the first time, we will show how the EEEC can uniquely characterize the energy flow originating from the jet-induced medium response in heavy-ion collisions. In heavy-ion collisions, jets formed from hard-scattered partons experience an overall energy loss and have a modified internal structure compared to vacuum jets. This is due to interactions between the energetic partons in a jet shower and the strongly coupled quark-gluon plasma (QGP). As the jet traverses the QGP, it loses momentum to the medium, which in turn responds to the presence of the jet. A quantitative description of this "medium response" is an area of active investigation. For this study, we utilize the Hybrid Model, which implements a hydrodynamical medium response via the wake. We will show that measuring three-point correlation functions offers a promising experimental avenue for imaging the wake of the jet, since when the three angles are well separated the three-point correlator is dominated by the medium response.
We present a generic approach based on jet constituents to derive the jet energy scale (JES) uncertainty. It uses single-particle E/p response measurements obtained from 13 TeV Run 2 LHC proton-proton collision data. The E/p method offers a higher level of precision than the traditional pT-balance method, while remaining in good agreement with it. Both methods are combined to derive the JES. The final output of this combination is a significant improvement in the JES uncertainty across a wide range of jet pT values. Join us as we unveil key insights and advancements in the precise determination of jet energy scales.
This research discusses topics in the field of Quantum Chromodynamics and high-energy physics. We consider an electron-positron scattering process and introduce a so-called superinclusive observable, suggested to us by Giorgio Parisi. This observable allows one to study the energy flow of an event due to QCD final-state radiation. The aim of the research is to give a theoretical prediction of its behaviour in the collinear limit, showing its connection with the multifractal laws of statistical physics.
The identification of top quark decays, known as top tagging, is a crucial component of many measurements and searches at the Large Hadron Collider (LHC). Recently, machine learning techniques have greatly improved the performance of top tagging algorithms. This poster presents the performance of several machine-learning-based jet tagging methods. In particular, the performance of a Lund jet plane based tagger is compared to existing baselines. The systematic uncertainties in network performance are then estimated through an approximate procedure that allows the size of the produced uncertainties to be quantified alongside the raw performance. The most performant algorithms are found to produce the largest uncertainties, motivating the development of methods to reduce these uncertainties without compromising performance.
This poster presents the reconstruction of missing transverse momentum (pTmiss) in proton-proton collisions during Run-2 and Run-3 data-taking at the ATLAS experiment. This is a challenging task involving many detector inputs, combining fully calibrated electrons, muons, photons, hadronically decaying τ-leptons, hadronic jets, and soft activity from remaining tracks. Several pTmiss 'working points' are defined with varying stringency of selections, which balance resolution against bias for both Run-2 and Run-3. The pTmiss performance is evaluated using data and Monte Carlo simulation, primarily in events consistent with leptonic Z-decays. Finally, methods used to calculate systematic uncertainties on the soft pTmiss component are presented, including recent progress on a novel approach to fully calibrate the soft term.
Identifying boosted hadronic top quarks is a major challenge in the CMS physics program, both in Standard Model measurements and in searches for new phenomena. Many excellent tools are available to identify wide-angle jets with top quark flavor. However, the intermediate regime between resolved and highly boosted jets is poorly covered. In recent years, CMS has introduced HOTVR, a variable distance parameter jet clustering algorithm suited to top quark production at intermediate boosts. So far, top identification on HOTVR jets has used a cut-based approach with jet substructure variables. In this poster, the development and performance of a BDT for top quark tagging on HOTVR jets is showcased on data and simulation from the 2016-2018 and 2022 data-taking periods with the CMS experiment.
Pileup, or the presence of multiple independent proton-proton collisions within the same bunch-crossing, has been critical to the success of the LHC, allowing for the production of enormous proton-proton collision datasets. However, the typical LHC physics analysis only considers a single proton-proton collision in each bunch crossing; the remaining pileup collisions are viewed as an annoyance, adding noise to the physics process under study. By independently reconstructing these pileup collisions, it is possible to access an enormous dataset of lower-energy hadronic physics processes, which we demonstrate using data recorded by the ATLAS Detector during Run 2 of the LHC. Comparisons to triggered alternatives confirm the ability to use pileup as an unbiased dataset. The potential benefits of using pileup for physics are shown through the evaluation of the jet energy resolution, derived from dijet asymmetry measurements, comparing single-jet-trigger-based and pileup-based datasets.
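The dijet-asymmetry method mentioned at the end can be sketched as follows; this is a toy illustration, not the ATLAS procedure, and the 10% Gaussian resolution and perfectly balanced true jets are simplifying assumptions:

```python
import math
import random
import statistics

def jer_from_asymmetry(asymmetries):
    """Dijet-asymmetry JER estimate: sigma(pt)/pt ~= sqrt(2) * sigma_A,
    valid for two jets with equal true pt and equal, Gaussian resolutions,
    where A = (pt1 - pt2) / (pt1 + pt2)."""
    return math.sqrt(2.0) * statistics.stdev(asymmetries)

# Toy check: smear two balanced 100 GeV jets with a 10% resolution.
random.seed(1)
asym = []
for _ in range(20000):
    pt1 = 100.0 * random.gauss(1.0, 0.10)
    pt2 = 100.0 * random.gauss(1.0, 0.10)
    asym.append((pt1 - pt2) / (pt1 + pt2))

jer = jer_from_asymmetry(asym)  # should come out close to 0.10
```

Applying the same estimator to pileup-selected and single-jet-trigger-selected dijet samples is what allows the two datasets to be compared.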
The substructure of bottom quark jets is of substantial interest both for understanding radiation emitted from heavy quarks, where mass effects are important, and for studying decays of massive (known and sought) particles into heavy quarks. Unfortunately, the typically cascading decays of b hadrons obscure the parton-level branching by filling the radiative dead cone. To circumvent this, one may study exclusive b-hadron decays, but one then sacrifices the vast majority of the b-jet cross section. We have implemented a technique to partially reconstruct the b-hadrons by aggregating their charged hadron decay products. We show that for common substructure variables, such as the groomed soft-drop radius, the sensitivity to the underlying parton splitting is vastly improved.
In this talk, we discuss hadronic jets that are tagged as heavy-flavoured, i.e. that contain either beauty or charm. In particular, we consider heavy-flavour jets that have been groomed with the Soft Drop algorithm. In order to achieve a deeper understanding of these objects, we apply resummed perturbation theory to jets initiated by a massive quark and perform analytic calculations for two variables that characterise Soft Drop jets, namely the opening angle and the momentum fraction of the splitting that passes Soft Drop. We compare our findings to Monte Carlo simulations. Furthermore, we investigate the correlation between the Soft Drop energy fraction and alternative observables that aim to probe heavy-quark fragmentation functions. Finally, we discuss recent fixed-order calculations with fragmentation functions for the $Z+h^{\pm}$ and $W+D$ processes within the NNLOJET framework.
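The Soft Drop condition on a $1\to 2$ splitting, expressed in terms of the two variables discussed above (the momentum fraction $z$ and the opening angle $\Delta$), can be sketched as follows; the default $z_{\rm cut}$, $\beta$ and jet radius values are illustrative choices:

```python
def soft_drop_passes(pt1, pt2, delta, z_cut=0.1, beta=0.0, radius=0.8):
    """Soft Drop keeps a splitting if z > z_cut * (delta / R)^beta,
    where z = min(pt1, pt2) / (pt1 + pt2) is the splitting momentum fraction
    and delta is the opening angle of the splitting."""
    z = min(pt1, pt2) / (pt1 + pt2)
    return z > z_cut * (delta / radius) ** beta
```

Grooming then proceeds by declustering the Cambridge/Aachen tree along the harder branch, discarding splittings until one passes this condition; that surviving splitting defines the groomed opening angle and momentum fraction.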
In this talk I will present our research activity on jet substructure with heavy flavour. Our primary goal is to obtain a more profound understanding of these objects. To this end, we employ resummed perturbation theory tailored specifically for jets initiated by heavy quarks. Furthermore, we will present analytical calculations targeting various observables that characterise jets, including angularities, energy correlation functions and Soft Drop variables. I will highlight the main differences between our findings and the massless computations. Finally, I will compare our analytical results with Monte Carlo simulations.
Understanding the behaviour of heavy quarks is important for painting a coherent picture of QCD, both formally and phenomenologically, and the upcoming runs at the LHC will provide unprecedented statistics for precision measurements related to heavy flavor. Natural objects for initiating these studies are Energy Correlators, which measure correlations of energy flow at collider experiments. These observables fall into the broader class of so-called "jet substructure" observables, which have been successful in broadening our understanding of fundamental physics and QCD. The aforementioned correlators are distinguished by their ability to resolve the scales associated with heavy quarks along with those of confinement. In this talk, I will introduce a variety of new correlator-based observables, specifically the two- and three-point heavy energy correlators. These observables provide new insights into jet substructure, in particular direct access to hadronization and intrinsic mass effects before confinement. This opens the door to a new class of precision heavy-flavor measurements at the LHC and beyond.
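As a concrete reference for the two-point correlator mentioned above: the EEC histograms pairwise angular separations within a jet, weighted by the product of the particle energies. A minimal toy implementation of our own (not the heavy-flavor correlators of the talk):

```python
import numpy as np

def two_point_eec(E, eta, phi, bins):
    """Histogram of pairwise angular distances weighted by 2*E_i*E_j / E_tot^2
    (a minimal two-point energy correlator; toy version, no self-pairs)."""
    E, eta, phi = map(np.asarray, (E, eta, phi))
    i, j = np.triu_indices(len(E), k=1)
    dphi = np.abs(phi[i] - phi[j])
    dphi = np.minimum(dphi, 2 * np.pi - dphi)       # wrap azimuth
    dr = np.hypot(eta[i] - eta[j], dphi)
    w = 2.0 * E[i] * E[j] / E.sum() ** 2            # factor 2 for (i,j)+(j,i)
    hist, _ = np.histogram(dr, bins=bins, weights=w)
    return hist

# Three toy constituents (energies in GeV, positions in eta-phi).
E = [50.0, 30.0, 20.0]
eta = [0.0, 0.1, 0.4]
phi = [0.0, 0.05, -0.2]
print(two_point_eec(E, eta, phi, bins=np.linspace(0, 0.5, 6)))
```

The heavy-flavor versions discussed in the talk restrict or weight these sums according to the flavor of the correlated particles, which is what resolves the heavy-quark mass scale.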
The development of iterative declustering techniques has brought the ability to reconstruct the jet tree and access the building blocks of the QCD parton shower. The iterative declustering of an angular-ordered jet gives access to the kinematic properties and mass effects of each individual emission. In order to expose mass effects in heavy-flavor-tagged jets, we study the splittings selected by two different grooming algorithms. We measure the splittings selected by the newly proposed late-kt algorithm, which is designed to select collinear and perturbative splittings. In addition, we study the splittings given by a modified version of the SoftDrop algorithm that additionally requires the selected splitting to satisfy a perturbative kt cut. The splittings selected by these two algorithms populate different regions of the Lund jet plane, and we show how their sensitivity differs to the charm quark mass, gluon splitting to charm quark-antiquark pairs, and hadronization effects. The comparison of the results for the two algorithms exposes a modification of the splitting structure in D-jets relative to inclusive jets due to the charm quark mass, in a regime of high jet pT. The measurement of the substructure of D-jets and inclusive jets is performed using data collected with the CMS experiment in proton-proton collisions at a center-of-mass energy of 5.02 TeV.
Several physics scenarios beyond the Standard Model predict the existence of new particles that can subsequently decay into a pair of Higgs bosons. These include pairs of SM-like Higgs bosons (HH) as well as asymmetric decays into two scalars of different masses (SH). For sufficiently high masses, the scalar S and the Higgs boson are Lorentz-boosted, so their decay products are collimated. When the Higgs bosons (or the scalar S) decay into a pair of bottom quarks, they can be reconstructed and identified inside a large-radius jet. In this talk, the latest boosted resonant HH/SH-->4b searches by the ATLAS experiment are reported, focusing on results using LHC Run 2 data. The experimental techniques used for boosted H-->bb tagging, and their impact on the analyses' sensitivity, are also discussed.
Search channels including at least one Higgs boson plus another particle have formed an important part of the program of new physics searches. In this talk, the status of these searches by the CMS Collaboration is reviewed. Searches are discussed for resonances decaying to two Higgs bosons, a Higgs and a vector boson, or a Higgs boson and another new resonance, with proton-proton collision data collected at sqrt(s) = 13 TeV in the years 2016-2018. A combination of the results of these searches is presented together with constraints on different beyond-the-standard-model scenarios, including scenarios with extended Higgs sectors, heavy vector bosons and extra dimensions. Studies are shown for the first time by CMS on the validity of the narrow-width approximation in searches for the resonant production of a pair of Higgs bosons. The potential for a discovery at the High Luminosity LHC is also discussed.
We present a phenomenology study probing the Supersymmetric Standard Model (SSM) at the Large Hadron Collider for a previously unexplored region of the parameter space.
In particular, we consider proton-proton collisions at $\sqrt{s}=13$ and $\sqrt{s}=14$ TeV and investigate the production of GeV-scale neutralinos $\widetilde{\chi}_{1}^{0}$ and $\widetilde{\chi}_{2}^{0}$ and charginos $\widetilde{\chi}_1^{\pm}$. This is done by employing a novel $pp \to \text{ewkino}~\text{ewkino}~jj$ vector boson fusion (VBF) topology. The analysis is performed using machine learning algorithms (gradient boosting and deep learning methods) rather than traditional methods, to maximize the signal sensitivity with integrated luminosities of $150, 300$, and $3000$ fb$^{-1}$.
We expect our methodology to extend LHC constraints on the SSM with $\ge 5\sigma$ signal significance throughout this parameter space, which is traditionally considered difficult to probe due to SM backgrounds and small SSM cross sections.
We present results from recent searches for resonances with enhanced couplings to top quarks or W bosons, collected with the CMS detector at a center-of-mass energy of 13 TeV. The analyses presented rely on state-of-the-art boosted-object identification techniques to reconstruct hadronic and leptonic top quark and W boson decays, targeting various signatures from single and pair production of new heavy resonances motivated by different BSM models.
Various searches for new resonances using unsupervised machine learning for anomaly detection are presented. These searches look at two-body invariant masses including leptons, at a heavy resonance Y decaying into a Standard Model Higgs boson H and a new particle X in a fully hadronic final state, or at the masses of two jets.
A model-agnostic search for new physics in the dijet final state with the CMS experiment is presented. Other than the requirement of a narrow dijet resonance with a mass in the range of 1800-6000 GeV, minimal additional assumptions are placed on the signal hypothesis. Search regions are obtained by utilizing multivariate machine learning methods to select jets with anomalous substructure. A collection of complementary anomaly detection methods – based on unsupervised, weakly-supervised and semi-supervised algorithms – are used in order to maximize the sensitivity to unknown new physics signatures.
Many new-physics signatures at the LHC produce highly boosted particles, leading to close-by objects in the detector and necessitating jet substructure techniques to disentangle the hadronic decay products. This talk will illustrate the use of these techniques in recent ATLAS searches for heavy W' and Z' resonances in top-bottom and di-top final states, as well as in searches for vector-like quarks or dark matter. Additionally, an analysis searching for semi-visible jets, with a significant contribution to missing transverse momentum, is presented. Such topologies can arise in strongly interacting dark sectors.
The field of anomaly detection (AD) has been steadily gaining traction in high energy physics as a powerful tool in the search for physics beyond the standard model (BSM), reducing the reliance on exact modelling of specific signal hypotheses. Arguably the most commonly used architecture is some flavor of autoencoder (AE), a network trained to compress examples to a latent space and decompress them back to their original size. The use of AEs as anomaly detectors relies on the assumption that a model trained to efficiently compress and decompress the background will fail to do so on anomalous data, i.e., possible BSM signals. The reconstruction error of the AE will thus be higher for signals than for backgrounds, allowing for discrimination. In practice, this assumption does not always hold, and AEs exhibit a few important failure modes, such as complexity bias (only being able to tag events with a more involved correlation structure than the background) and out-of-distribution reconstruction (assigning low reconstruction error even to events far from the training data). Using the search for semivisible jets as a benchmark, we show how the normalized autoencoder (NAE) architecture addresses these shortcomings, drastically increasing the model's power to tag potential BSM signals. We further propose a modified version of the NAE, based on the Wasserstein distance, that further improves the robustness of the method.
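The reconstruction-error logic described above can be illustrated with a deliberately simple linear autoencoder, which is equivalent to PCA. This is our toy stand-in for intuition only, not the NAE of the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "background": data living near a 2D plane embedded in 5D.
n, d, k = 5000, 5, 2
W = rng.standard_normal((d, k))
bkg = rng.standard_normal((n, k)) @ W.T + 0.05 * rng.standard_normal((n, d))

# A linear autoencoder with tied weights is PCA: learn a k-dim
# bottleneck from background events only.
mu = bkg.mean(0)
_, _, Vt = np.linalg.svd(bkg - mu, full_matrices=False)
P = Vt[:k]                          # encoder/decoder projection

def recon_error(x):
    z = (x - mu) @ P.T              # compress to the latent space
    xhat = z @ P + mu               # decompress back to input space
    return np.square(x - xhat).sum(axis=-1)

# Off-manifold "signal" events get much larger reconstruction errors.
sig = 2.0 * rng.standard_normal((1000, d))
print(recon_error(bkg).mean(), recon_error(sig).mean())
```

The failure modes listed in the abstract are precisely the cases where this separation breaks down for a nonlinear AE, which is what the NAE is designed to repair.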
Approximately one-fourth of the energy density of the known Universe is attributed to Dark Matter (DM), the nature of which remains enigmatic. If DM is made of particles, it may be possible to produce and study them at the Large Hadron Collider. A promising way to do so is the monojet channel, in which at least one hard jet recoils against missing transverse momentum and there are no isolated leptons.
This study presents a novel approach to discovering Dark Matter at the Large Hadron Collider with Graph Neural Networks (GNNs). In contrast to traditional analyses relying on hand-picked high-level variables, GNNs can capture the underlying spatial and topological features of the event, leading to enhanced discrimination between signal and background processes.
We demonstrate the utility of our approach for a scenario in which the Dark Matter candidates are wino-like and higgsino-like neutralinos. We present the limits on DM particle masses that could be obtained by the end of the Run 3 and High-Luminosity phases, and we discuss the benefit of including different production processes. Finally, we interpret the neural network in an attempt to understand the connection between its output and the physical properties of the underlying events.
We present DarkCLR, a novel framework for detecting semi-visible jets at the LHC. DarkCLR uses a self-supervised contrastive-learning approach to create observables that are approximately invariant under relevant transformations. We use background-enhanced data to create a sensitive representation and evaluate the representations using a normalized autoencoder as a density estimator. Our results show a remarkable sensitivity for a wide range of semi-visible jets and are more robust than a supervised classifier trained on a specific signal.
The search for heavy resonances beyond the Standard Model (BSM) is a key objective at the LHC. While the recent use of advanced deep neural networks for boosted-jet tagging significantly enhances the sensitivity of dedicated searches, it is limited to specific final states, leaving vast potential BSM phase space underexplored. In this talk, we introduce a novel experimental method, Signature-Oriented Pre-training for Heavy-resonance ObservatioN (Sophon), which leverages deep learning to cover an extensive number of boosted final states. Pre-trained on the comprehensive JetClass-II dataset, the Sophon model learns intricate jet signatures, ensuring the optimal construction of various jet tagging discriminants and enabling high-performance transfer learning capabilities. We show that the method can not only push widespread model-specific searches to their sensitivity frontier, but also greatly improve model-agnostic approaches, accelerating LHC resonance searches in a broad sense.
This talk is based on arXiv:2405.12972.
Attention-based transformer models have become increasingly prevalent in collider analysis, offering enhanced performance for tasks such as jet tagging. However, they are computationally intensive and require substantial data for training. In this paper, we introduce a new jet classification network using an MLP mixer, where two subsequent MLP operations serve to transform particle and feature tokens over the jet constituents. The transformed particles are combined with subjet information using multi-head cross-attention so that the network is invariant under the permutation of the jet constituents.
We utilize two clustering algorithms to identify subjets: standard sequential recombination algorithms with fixed radius parameters, and a new IRC-safe, density-based algorithm with dynamic radii based on HDBSCAN. The proposed network demonstrates classification performance comparable to state-of-the-art models while drastically improving computational efficiency. Finally, we evaluate the network performance using various interpretability methods, including centred kernel alignment and attention maps, to highlight the network's efficacy in collider analysis tasks.
The likelihood-ratio test can be used to perform a goodness-of-fit test between a reference model and observations if the alternative hypothesis is selected from data by exploring a rich parametrised family of functions. The New Physics Learning Machine (NPLM) methodology has been developed as a concrete realisation of this idea, to perform model-independent searches at collider experiments. In this presentation, I will focus on a recent implementation based on kernel methods, which is extremely efficient and highly flexible (arXiv:2204.02317). I will present studies on new physics searches, data quality monitoring, and recent results on the evaluation of generative models.
The Energy Mover’s Distance (EMD) has seen use in collider physics as a metric between events and as a geometric method of defining infrared and collinear safe observables. Recently, the spectral Energy Mover’s Distance (SEMD) has been proposed as a more analytically tractable alternative to the EMD. In this work, we obtain a closed-form expression for the Riemannian-like p = 2 SEMD metric between events, eliminating the need to numerically solve an optimal transport problem. Additionally, we show how the SEMD can be used to define event and jet shape observables by minimizing the metric between event and parameterized energy flows (similar to the EMD), and we obtain closed-form expressions for several of these observables. We also present the SPECTER framework, an efficient and highly parallelized implementation of the SEMD metric and SEMD-derived shape observables. We demonstrate that the SEMD and SPECTER provide nearly thousand-fold compute time improvements over evaluation of the EMD.
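The analytic tractability referred to above ultimately rests on the fact that optimal transport in one dimension reduces to sorting, so no transport problem needs to be solved numerically. A minimal illustration with the p = 2 Wasserstein distance between equal-weight 1D point clouds (our own toy, not the closed-form SEMD expression of the paper):

```python
import numpy as np

def w2_1d(x, y):
    """p = 2 Wasserstein distance between two equal-weight 1D point clouds.
    In 1D the optimal transport plan is the sorted (monotone) matching,
    which is what makes spectral-type distances analytically tractable."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.size == y.size, "toy version assumes equal-weight, equal-size clouds"
    return np.sqrt(np.mean((x - y) ** 2))

a = [0.0, 1.0, 2.0]
b = [0.5, 1.5, 2.5]
print(w2_1d(a, b))   # 0.5: every point moves by 0.5
```

The SEMD applies this idea to the energy-weighted spectral representation of an event, which is why SPECTER can evaluate it orders of magnitude faster than a full 2D EMD computation.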
To control the scheme of the Monte Carlo (MC) top quark mass parameter, several ingredients are mandatory: knowledge of the IR dynamics of the top-mass-sensitive observable, of the MC parton shower, and of the MC hadronization evolution. I discuss these ingredients and their interplay for the simple case of 2-jettiness in boosted top production in electron-positron annihilation, where all these ingredients are now known for the HERWIG MC. Apart from an at least NLL-precise parton shower, which Herwig provides for event shapes, a crucial novel development is a hadronization model compatible with QCD factorization, which I describe in some detail. The outcome is that for 2-jettiness the HERWIG top mass parameter represents a well-defined, shower-cut-dependent renormalization scheme that can be determined at NLO. The approach I discuss represents a blueprint for controlling the scheme of the MC top mass parameter that may eventually also be applied to direct-type top quark mass measurements.
In this contribution we will present new results that relate the top quark mass parameter in Monte Carlo generators to a field-theoretical mass scheme. In our study, Pythia8 predictions for the groomed top jet mass distribution in pp -> ttbar production are compared with first-principles calculations at NNLL accuracy. The formal accuracy is improved (from NLL to NNLL) with respect to previous results in proton-proton collisions. Soft Drop grooming plays a key role in this analysis by reducing non-perturbative corrections; the grooming strategy is studied in detail and revised in comparison with previous results. A chi-squared minimization is used to determine the best-fit value of the pole mass and of two parameters of the shape function describing non-perturbative hadronization effects.
[This result is currently not yet published; the team aims for a publication on the time scale of the conference; a numerical result will be added to the abstract well before the conference].
Precision measurements of the top quark mass at hadron colliders have been notoriously difficult. Energy-energy correlators (EECs) provide clean access to angular correlations in the hadronic energy flux, but their application to precision mass measurements is less direct, since they measure a dimensionless angular scale.
Inspired by the use of standard candles in cosmology, I will show that a single EEC-based observable can be constructed that reflects the characteristic angular scales of both the $W$-boson and top quark masses. This gives direct access to the dimensionless quantity $m_t/m_W$, from which $m_t$ can be extracted in a well-defined short-distance scheme as a function of the well-known $m_W$ and a purely angular measurement. I will demonstrate several remarkable properties of this observable as well as its statistical feasibility and robustness for the LHC. This proposal provides a road map for a rich program for top mass determination at the LHC with record precision.
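The standard-candle logic above can be caricatured in a few lines. In this toy (ours, not the actual observable of the talk) a characteristic EEC angular scale is assumed to behave like $\theta \sim c\, m / p_T$ with a common constant $c$, so the ratio of the top and W peaks is the dimensionless quantity $m_t/m_W$ and $m_t$ follows from the precisely known $m_W$ without an absolute energy calibration:

```python
# Toy standard-candle extraction: theta_peak ~ c * m / pT (assumed scaling).
m_w = 80.4          # GeV, known input ("standard candle")
pt = 600.0          # GeV, common boost of the decaying system
c = 2.0             # arbitrary common proportionality constant

theta_w = c * m_w / pt          # W peak angle
theta_t = c * 172.5 / pt        # "measured" top peak angle in this toy

# The energy scale, boost, and constant c all cancel in the ratio.
m_t_extracted = m_w * theta_t / theta_w
print(f"m_t = {m_t_extracted:.1f} GeV")
```

The cancellation of the jet energy scale in the angular ratio is what makes the real observable attractive for a well-defined short-distance mass determination.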
Detailed measurements of Higgs boson properties can be performed using highly boosted objects, where the detector signatures of two or more decay products overlap. This talk presents several ATLAS analyses targeting these topologies, using collision data collected during Run 2 of the LHC. It covers studies of Higgs boson production at high transverse momentum, where the Higgs boson and associated states such as a weak vector boson or a top quark-antiquark pair are reconstructed as boosted jets, and highlights tests of the CP nature of Higgs boson interactions in these topologies. Finally, the talk presents searches for new high-mass Higgs-like resonances decaying into highly boosted Z bosons producing merged di-electron final states.
The large production cross section of top-quark pairs at the LHC allows for detailed studies of the substructure of jets arising from light quarks, b-quarks, and gluons. In this talk, recent measurements of the jet substructure in the decay products of top quarks performed by the ATLAS experiment are presented, using the reconstructed charged particles in the decay of W bosons and the fragmentation of b-quarks. One- and two-dimensional differential cross-sections for eight substructure variables, defined using only the charged components of the jets, as well as a measurement of the Lund plane are discussed. The observed substructure distributions are compared with several MC generator predictions using different phenomenological models for parton showering and hadronization.
Unfolded data can be used to measure the top mass, but also to search for unexpected kinematic correlations in top decay events. We show how generative unfolding can be used for both tasks and how the results benefit from unbinned, high-dimensional unfolding. Our method includes an unbiasing step with respect to the top mass used in the training data and promises significant advantages over standard methods in terms of flexibility and precision.
In the past decade, there have been significant developments in jet measurements. Initially, the emphasis was primarily on measuring the jet production cross-sections in vacuum and their modification in the Quark-Gluon Plasma (QGP) medium. The current investigations have shifted towards probing jet substructure, aiming to understand the intricate interplay of the perturbative and the non-perturbative regimes of QCD during jet evolution. The STAR experiment has been pivotal throughout in conducting these measurements across various collision systems, including $p+p$, $p+A$, and $A+A$, in an energy range complementary to the LHC. New results have explored the transition between the parton shower and hadronization in jet evolution using correlation measurements in $p+p$. Baseline measurements from STAR for several generalized angularities for jets in vacuum have allowed us to study the modifications to parton showering and fragmentation in the presence of the QGP medium. Extensions of these measurements using charm-meson tagged jets have explored the flavor dependence of such in-medium modifications. In this talk, we will delve into these recent findings on jets and their substructure derived using novel experimental techniques employed within STAR. We will also briefly discuss some proposed future jet measurements on the high luminosity datasets STAR will be collecting until 2025.
Measuring jet substructure in heavy-ion collisions provides an opportunity to study detailed aspects of the dynamics of jet quenching in the hot and dense QCD medium created in these collisions. This talk presents a set of complementary ATLAS measurements of jet suppression and substructure performed using various jet definitions, constituents, and grooming techniques in Pb+Pb collisions. These measurements include small-radius calorimeter jets, charged tracks, and objects combining information from the tracker and calorimeter. Jet suppression is characterized using a nuclear modification factor, RAA, which compares jet yields in Pb+Pb and pp collisions at 5.02 TeV. The RAA is evaluated as a function of collision centrality, jet transverse momentum, and various observables that characterize jet substructure.
A search for medium-induced jet transverse momentum broadening is performed with isolated photon-tagged jet events in proton-proton (pp) and lead-lead (PbPb) collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV. The difference between jet axes as determined via energy-weighted and winner-take-all clustering schemes, also known as the decorrelation of jet axes and denoted $\Delta j$, is measured for the first time in photon-tagged jet events. The pp and PbPb data samples were recorded with the CMS detector at the LHC and correspond to integrated luminosities of 302 pb$^{-1}$ and 1.69 nb$^{-1}$, respectively. Events are required to have a leading isolated photon with $60 < p_{T}^{\gamma} < 200$ GeV, which is correlated with anti-$k_{\mathrm{T}}$ $R = 0.3$ jets with $30 < p_{T}^{\text{jet}} < 100$ GeV opposite in azimuthal angle. The PbPb results are reported as a function of collision centrality and compared to pp reference data. Jets with $p_{T}^{\text{jet}} < 60$ GeV have a consistent shape in PbPb relative to pp, whereas jets with $p_{T}^{\text{jet}} > 60$ GeV in central PbPb show signs of narrowing relative to pp. The results are compared to the JEWEL and PYQUEN theoretical models, which include different methods of energy loss.
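The axis-decorrelation observable above compares two definitions of the jet direction. A toy sketch of ours (with the winner-take-all axis simplified to the leading-constituent direction):

```python
import numpy as np

# Toy jet constituents: (pT, eta, phi).
pt  = np.array([80.0, 15.0, 5.0])
eta = np.array([0.00, 0.30, -0.40])
phi = np.array([0.00, 0.25,  0.35])

# Energy-weighted (standard E-scheme-like) axis: pT-weighted centroid.
eta_e = np.average(eta, weights=pt)
phi_e = np.average(phi, weights=pt)

# Winner-take-all axis: follows the harder branch at each recombination;
# for this toy jet we approximate it by the leading constituent.
lead = int(np.argmax(pt))
eta_w, phi_w = eta[lead], phi[lead]

# Axis decorrelation: angular distance between the two axes.
dj = float(np.hypot(eta_e - eta_w, phi_e - phi_w))
print(f"Delta j = {dj:.3f}")
```

The energy-weighted axis is pulled by soft, wide-angle radiation while the WTA axis is not, so $\Delta j$ is sensitive to exactly the kind of soft broadening that medium interactions could induce.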
This talk presents the first measurements of the groomed jet radius $R_{\mathrm{g}}$ and the jet girth $g$ in events with an isolated photon recoiling against a jet in lead-lead (PbPb) and proton-proton (pp) collisions at the LHC at a nucleon-nucleon center-of-mass energy of 5.02 TeV. The observables $R_{\mathrm{g}}$ and $g$ provide a quantitative measure of how narrow or broad a jet is. The analysis uses PbPb and pp data samples with integrated luminosities of 1.7~nb$^{-1}$ and 301~pb$^{-1}$, respectively, collected with the CMS experiment in 2018 and 2017. Events are required to have a photon with transverse momentum $p_{\mathrm{T}}^{\gamma} > 100$~GeV and at least one jet back-to-back in azimuth with respect to the photon and with transverse momentum $p_{\mathrm{T}}^{\text{jet}}$ such that $p_{\mathrm{T}}^{\text{jet}}/p_{\mathrm{T}}^{\gamma} > 0.4$. The measured $R_{\mathrm{g}}$ and $g$ distributions are unfolded to the particle level, which facilitates the comparison between the PbPb and pp results and with theoretical predictions. It is found that jets with $p_{\mathrm{T}}^{\text{jet}}/p_{\mathrm{T}}^{\gamma} > 0.8$, i.e., those that closely balance the photon $p_{\mathrm{T}}^{\gamma}$, are narrower in PbPb than in pp collisions. Relaxing the selection to include jets with $p_{\mathrm{T}}^{\text{jet}}/p_{\mathrm{T}}^{\gamma} > 0.4$ reduces the narrowing of the angular structure of jets in PbPb relative to the pp reference. This shows that selection bias effects associated with jet energy loss play an important role in the interpretation of jet substructure measurements.
The modifications imprinted on jets due to their interaction with QGP are assessed by comparing samples of jets produced in AA collisions and pp collisions. The standard procedure for doing so, however, ignores the effect of bin migration, i.e., it compares specific observables for jet populations at the same reconstructed jet transverse momentum ($p_T$). Since jet $p_T$ is itself modified by interaction with QGP, all such comparisons confound QGP-induced modifications with changes that are simply a consequence of comparing jets that started out differently. Brewer et al. [1] introduced a quantile matching procedure that directly estimates average fractional jet energy loss ($Q_{AA}$) and can thus mitigate this $p_T$ migration effect.
In this work, we present an application of this procedure to establish that the difference between inclusive jet and $\gamma+$jet nuclear modification factors ($R_{AA}$) is dominated by differences in the spectral shape, leaving the colour charge of the jet initiating parton with a lesser role to play in this comparison. Furthermore, we study the evolution of $Q_{AA}$ with jet radius and conclude that fractional energy loss decreases with increasing jet radius when QGP response is accounted for.
We explore additional changes imprinted on the jet spectrum which are unrelated to the presence of QGP. Namely, we show that isospin and nuclear-PDF effects on the jet $p_T$ spectrum are quite sizeable for $\gamma+$jet events, further confounding conclusions about QGP-induced jet modifications. An attempt is made at suppressing such effects, thus maximizing the role of quenching as a differentiator between populations of AA and pp jets. The sensitivity of both $R_{AA}$ and $Q_{AA}$ to these effects is studied.
Finally, we show the size of the $p_T$ migration correction for a number of observables and we present a detailed protocol of how the quantile procedure can be reliably used experimentally to improve existing observable measurements.
[1] Brewer, J., Milhano, J. G., & Thaler, J. (2019). Sorting out quenched jets. Physical Review Letters, 122(22), 222301.
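The quantile matching procedure of Ref. [1] can be sketched in a few lines; the steeply falling toy spectrum and the fixed 10% fractional energy loss below are our assumptions, not measured values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy steeply falling pp jet pT spectrum (GeV), and an AA spectrum in
# which every jet loses a fixed 10% of its pT (real loss fluctuates).
pp = rng.pareto(5.0, 200_000) * 100.0 + 100.0
aa = 0.90 * pp

def quantile_ratio(pp, aa, q):
    """Quantile matching a la Brewer, Milhano & Thaler: compare the pT at
    the same quantile of each spectrum instead of at the same reconstructed
    pT, which removes the bin-migration confound."""
    return np.quantile(aa, q) / np.quantile(pp, q)

for q in (0.5, 0.9, 0.99):
    print(f"Q_AA at quantile {q}: {quantile_ratio(pp, aa, q):.3f}")
```

In this idealized case the quantile ratio recovers the injected 0.90 at every quantile, whereas an $R_{AA}$-style comparison at fixed $p_T$ would mix jets of different initial energies.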
It is known that perturbative simulations of high-multiplicity jets can differ quite substantially from data collected at high-energy colliders like the LHC. It is therefore important to understand what drives the discrepancy and whether other tools can simulate these events without relying on individual-particle simulation. We propose the global event shape as another observable for handling high-multiplicity jets, computed from the particle momenta with additional IRC safety. We consider Energy Mover's Distance observables as well as similar tools developed for quark-gluon plasma studies.
Energy correlators, jet-substructure observables that measure correlations between energy detectors (calorimeters) in a collider experiment, have received significant attention over the last few years in both the theory/phenomenology and experimental communities. This success has prompted investigations into how energy correlators can be further used, such as in the study of both hot and cold nuclear matter, as well as to gain access to particles with particular quantum numbers. This requires "building" new detectors which are sensitive to more than just particle energy. In this talk, we will discuss this larger space of detectors, including some specific examples such as detectors which are sensitive to arbitrary powers of energy, as well as ones that are sensitive to a global U(1) charge. Beyond their construction, we will also discuss the renormalization of these objects and highlight some ongoing experimental efforts which utilize these observables.
The current best-performing networks in many ML-for-particle-physics tasks are either custom-built Lorentz-equivariant architectures or more generic large transformer models. A major unanswered question is whether the high performance of equivariant architectures is in fact due to their equivariance. We design a study to isolate and investigate the effects of equivariance on network performance. A particular equivariant model, PELICAN, has its symmetry broken with no or minimal architectural changes via two methods. First, equivariance is broken "explicitly" by supplying model inputs that are equivariant under proper subgroups of the Lorentz group. In the second method, it is broken "implicitly" by adding spurious particles which encode laboratory-frame geometry. We compare its performance on common benchmark tasks in the equivariant and non-equivariant regimes.
We propose a new approach to learning powerful jet representations directly from unlabelled data. The method employs a Particle Transformer to predict masked particle representations in a latent space, overcoming the need for discrete tokenization and enabling it to extend to arbitrary input features beyond the Lorentz four-vectors. We demonstrate the effectiveness and flexibility of this method in several downstream tasks, including jet tagging and anomaly detection. Our approach provides a new path to a foundation model for particle physics.