ICHEP is a series of international conferences organized by the C11 commission of the International Union of Pure and Applied Physics (IUPAP). It has been held every two years for more than 50 years and is the reference conference of particle physics, where the most relevant results are presented. At ICHEP, physicists from around the world gather to share the latest advancements in particle physics, astrophysics/cosmology, and accelerator science, and to discuss plans for major future facilities.
Future high-precision studies of fundamental interactions at lepton colliders require high-intensity, low-emittance positron sources. Such sources are needed for e+e- facilities and also for μ+μ- facilities in which the muons are generated with positrons. The availability of powerful positron sources is, therefore, very important. In this context, positron sources providing higher yields, better emittance and better reliability than the SLC source are needed. Improvements in conventional positron sources, which use high-intensity incident electrons on thick metallic targets, are meeting limits due to the large energy deposited in the target and the high energy-deposition density associated with the required small beam sizes. Innovative solutions using the channeling radiation of electrons in axially aligned crystals provide high photon yields which, in turn, can provide a high positron production rate in an associated amorphous target. Such a system, composed of a crystal radiator and an amorphous converter, is known as a hybrid positron source. For linear colliders, which involve high incident electron intensities, a sweeping magnet placed between the two targets to sweep away the charged particles created in the crystal mitigates the energy deposited in the converter. For circular e+e- colliders, which use more moderate intensities, the sweeping magnet can be omitted. Both options will be presented together with simulations of the photon and positron production. In this framework, a study of the radiation emitted by a high-quality tungsten crystal at the DESY beam-test facility T21 will be presented and discussed.
Fermilab is considering several concepts for a future 2.4 MW upgrade for DUNE/LBNF, featuring extensions of the PIP-II linac and the construction of a new rapid-cycling synchrotron and/or accumulation rings. This talk will summarize the relationship between these scenarios, emphasizing the commonalities and tracing the differences to their original design questions. In addition to a high-level summary of the two 2.4 MW upgrade scenarios, there is a brief discussion of staging, beamline capabilities, subsequent upgrades, and needed R&D.
The Muon g-2 Experiment at Fermilab has recently measured the muon magnetic anomaly with 460 parts-per-billion precision. This result is consistent with the measurement from the previous BNL experiment, and the combined Fermilab-BNL value deviates from the most recent Standard Model calculation provided by the Muon g-2 Theory Initiative at the level of 4.2 standard deviations. The muon anomaly is determined by measuring muon spin precession relative to the muon momentum inside a storage ring of ~7 m radius with a very uniform and precisely measured magnetic field. The effects of storage-ring beam and spin dynamics must be quantified and corrected for in order to achieve the experimental precision required to probe the Standard Model. This talk will give an overview of the beam dynamics corrections that are required for the Fermilab measurement.
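For orientation, the anomalous precession frequency in the ring follows the standard spin-dynamics relation (written here for $\vec{\beta}\cdot\vec{B}=0$):
$$\vec{\omega}_a = -\frac{e}{m_\mu}\left[a_\mu\vec{B} - \left(a_\mu - \frac{1}{\gamma^2-1}\right)\frac{\vec{\beta}\times\vec{E}}{c}\right],$$
where the electric-field term vanishes at the "magic" momentum $p \simeq 3.09$ GeV/c, which is why beam- and spin-dynamics effects enter mainly as small corrections.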
Recently the Muon g-2 collaboration published the most precise measurement of the anomalous magnetic moment of the muon, $a_\mu$, with a 460 ppb uncertainty based on the Run 1 data. The measurement principle is based on a clock comparison between the anomalous spin precession frequency of spin-polarized muons and a high-precision measurement of the magnetic field environment using nuclear magnetic resonance (NMR) techniques, expressed by the (free) proton spin precession frequency. To achieve the ultimate goal of a 140 ppb uncertainty on $a_\mu$, the magnetic field in the storage region of the muons needs to be known with a total uncertainty of less than 70 ppb. Three devices are used to measure and calibrate the magnetic field in the Muon g-2 storage ring: (a) an absolute calibrated NMR probe, (b) a movable array of NMR probes that can be pulled through the storage region of the muons and (c) a set of NMR probes in the vicinity of the storage region. In this talk, we present the measurement and tracking principle of the magnetic field and point out improvements implemented for the analysis of the data recorded in Run 2 and Run 3.
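Schematically, the anomaly is extracted from the measured frequency ratio together with externally determined constants,
$$a_\mu = \frac{\omega_a}{\tilde{\omega}'_p}\,\frac{\mu'_p(T_r)}{\mu_e(H)}\,\frac{\mu_e(H)}{\mu_e}\,\frac{m_\mu}{m_e}\,\frac{g_e}{2},$$
where $\tilde{\omega}'_p$ is the muon-weighted magnetic field expressed as a shielded-proton precession frequency; this is why the field must be known to better than 70 ppb.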
The reduction of random motion in particle beams, known as beam cooling, has dramatically extended the science reach of many accelerator facilities, with applications ranging from high-energy colliders to the accumulation of antimatter for tests of CPT symmetry and gravity. One of the primary research frontiers in beam cooling is the realization of advanced cooling concepts that have system bandwidths of tens to hundreds of terahertz and achievable cooling rates that exceed the state of the art by up to four orders of magnitude. Here we describe the successful experimental validation of Optical Stochastic Cooling (OSC), which constitutes the first demonstration of any advanced cooling concept. This demonstration is part of a broader advanced beam-cooling research program at Fermilab that also includes high-energy electron cooling and future efforts in laser cooling of ions. The OSC method, first proposed nearly three decades ago, derives from S. van der Meer's stochastic cooling (SC), which was instrumental in the discovery of the W and Z bosons at CERN and the top quark at Fermilab. In SC, a circulating beam is sampled and corrected (cooled) using microwave pickups and kickers with a bandwidth of a few GHz. OSC replaces these microwave elements with optical-frequency analogs, such as magnetic undulators and optical amplifiers, and uses each particle's radiation to sense and correct its phase-space errors. The OSC experiment, which was carried out at Fermilab's Integrable Optics Test Accelerator (IOTA), used 100-MeV electrons and a radiation wavelength of 950 nm, and achieved a total damping rate approximately 8 times greater than the natural longitudinal damping rate due to synchrotron radiation. Coupling of the longitudinal and transverse planes enabled simultaneous cooling in all degrees of freedom. The integrated system demonstrated sub-femtosecond stability and a bandwidth of ~20 THz, a factor of ~2000 higher than conventional microwave SC systems. Additionally, detailed experiments were performed demonstrating and characterizing OSC with a single particle in IOTA. This first demonstration of SC at optical frequencies serves as a foundation for more advanced experiments with high-gain optical amplification and advances opportunities for future operational OSC systems at colliders and other accelerator facilities.
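As a rough orientation (a schematic scaling, not a precise design formula), the optimal stochastic-cooling rate for $N$ particles scales with the system bandwidth $W$ as
$$\lambda_{\rm opt} \sim \frac{W}{N}, \qquad \frac{W_{\rm OSC}}{W_{\rm SC}} \sim \frac{20\ {\rm THz}}{\mathcal{O}(10)\ {\rm GHz}} \sim 10^{3}\text{-}10^{4},$$
which is the origin of the quoted orders-of-magnitude gain in achievable cooling rates.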
The main goal of the Mu2e experiment at Fermilab is to search for indications of charged lepton flavor violation [1]. To achieve this goal, experimenters will search for the coherent neutrinoless conversion of a negative muon into an electron in the field of a nucleus by measuring the 105-MeV electrons emitted in conversions of negative muons in the nuclear field of an Al target. This will allow Mu2e to probe effective new-physics mass scales up to the $10^{3}$-$10^{4}$ TeV range. One of the central elements of the Mu2e experimental facility is its target station, where negative pions are generated in interactions of the 8 GeV primary proton beam with a rod-shaped tungsten target; the facility will be capable of producing around $3.6\cdot 10^{20}$ stopped negative muons in three years of running [2]. The Mu2e experiment is planned to be extended to a next-generation experiment, Mu2e-II, with a single-event sensitivity improved by a factor of 10 or more. Mu2e-II will probe new-physics mass scales up to $10^{5}$ TeV by utilizing an 800-MeV 100-kW proton beam. This greater sensitivity is within reach thanks to the PIP-II accelerator upgrade, a 250-meter-long linac capable of accelerating a 2-mA proton beam to a kinetic energy of 800 MeV, corresponding to 1.6 MW (the power not used by Mu2e-II will be directed to a neutrino experiment). The higher beam intensity requires a substantially more advanced target design. We are studying a novel conveyor target with tungsten or carbon spherical target elements moved through the beam path. The elements can be moved either purely mechanically or by a combination of mechanical motion and He-gas flow. In this talk, we will discuss our recent advances in conceptual design R&D for a Mu2e-II target station based on energy-deposition and radiation-damage simulations. Our study involves Monte Carlo codes (MARS15 [3], G4beamline [4], and FLUKA [5]) and thermal and mechanical ANSYS analyses to estimate the stability of the system. The concurrent use of these simulation codes is intended to allow us to determine and minimize the systematic uncertainty of the simulations. Our simulations allowed us to rule out some other designs (rotating and fixed granular targets) as less practical and supported our assessment of the new target station's required working parameters and constraints. The thermal and mechanical analyses enabled us to choose the cooling scheme and prospective materials for the conveyor's spherical elements. We will discuss the first prototype of the Mu2e-II target and the mechanical tests performed at Fermilab, which demonstrated the feasibility of the proposed design as well as its weaknesses, and we will suggest directions for further improvement.
References
[1] Bartoszek L, Barnes L, Miller JP, Mott A, Palladino A, Quirk J, et al. Mu2e Technical Design Report. FERMILAB-TM-2594, FERMILAB-DESIGN-2014-01. arXiv:1501.05241 (2014).
[2] Bernstein R. The Mu2e Experiment, Front. in Phys. 7 (2019) 1.
[3] Mokhov NV, James CC. The MARS Code System User's Guide, Version 15 (2016). Fermilab-FN-1058-APC [Preprint] 2017. Available from: https://mars.fnal.gov
[4] Roberts T. G4beamline User's Guide 3.06 (2018). Available from: http://www.muonsinternal.com/muons3/g4beamline/G4beamlineUsersGuide.pdf
[5] Böhlen TT, Cerutti F, Chin MPW, et al. The FLUKA Code: Developments and challenges for high energy and medical applications. Nucl Data Sheets 2014; 120: 211-214.
The use of bent crystals as compact elements capable of efficiently steering particle beams for extraction and collimation has been investigated at several high-energy hadron accelerators, such as the SPS and LHC (CERN, Geneva), the Tevatron (Batavia, USA), and U70 (Protvino, Russia). Due to technological limitations and an insufficiently deep understanding of the physics underlying the interactions between charged particle beams and crystals, this technique has never been applied to electron beams.
Recent innovative experiments carried out at SLAC (Stanford, USA) and MAMI (Mainz, Germany) have raised the technological readiness level and the understanding of the interaction between crystals and electron beams, highlighting the possibility of using bent crystals to extract electron beams from synchrotrons worldwide.
In this contribution we report the first design of a proof-of-principle experiment aiming to use bent crystals to extract the 6 GeV electrons circulating in the DESY II Booster Synchrotron. This is made possible by the phenomenon of "channeling": particles channeled in a crystal are trapped between atomic planes or axes and forced to follow them; mechanically bending the crystal therefore steers the beam, with an effect equivalent to that of a magnetic field of a few hundred tesla.
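The quoted equivalent field can be estimated from the usual beam-rigidity relation; as an illustration with assumed numbers (the actual bending radius is a design parameter of the experiment),
$$B_{\rm eq}\,[{\rm T}] \simeq \frac{pc\,[{\rm GeV}]}{0.3\,R\,[{\rm m}]}, \qquad p = 6\ {\rm GeV}/c,\ R \simeq 0.1\ {\rm m} \ \Rightarrow\ B_{\rm eq} \simeq 200\ {\rm T}.$$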
We investigated the experimental setup in detail, though in this report we will focus on its main aspects, such as the particle beam dynamics during the extraction process, the manufacturing and characterization of bent crystals and the detection of the extracted beam.
We conclude that, following a successful proof-of-principle experiment, this technique can be applied at many existing lepton accelerators worldwide for nuclear- and particle-physics detector studies and generic detector R&D, as well as in many high-energy physics projects requiring fixed-target experiments, including projects related to lepton colliders.
`Preheating' refers to non-perturbative particle production at the end of cosmic inflation. In many modern inflationary models, this process is predominantly or partly tachyonic, that is, proceeds through a tachyonic instability where the mass-squared of the inflaton field is negative. An example of such a model is Higgs inflation, where the Standard Model Higgs field is the inflaton, formulated in Palatini gravity. The violent dynamics of such a strong instability can lead to strong production of gravitational waves and supermassive dark matter. I discuss the phenomenology of such models and the related CMB predictions.
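Schematically, the instability can be seen from the mode equation for inflaton perturbations (neglecting cosmic expansion):
$$\ddot{\delta\phi}_k + \left(k^2 + m_{\rm eff}^2\right)\delta\phi_k = 0,$$
so that for $m_{\rm eff}^2 < 0$ all modes with $k^2 < |m_{\rm eff}^2|$ grow exponentially, rapidly transferring energy out of the inflaton condensate.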
According to current experimental data, the SM Higgs vacuum appears to be metastable due to the development of a second, lower ground state in the Higgs potential. Consequently, vacuum decay would induce the nucleation of true-vacuum bubbles with catastrophic consequences for our false-vacuum Universe. Since such an event would render our Universe incompatible with observations, we are motivated to study possible stabilising mechanisms in the early universe. In the present investigation, we study the experimentally motivated metastability of the electroweak vacuum in the context of the observationally favoured model of Starobinsky inflation. Following the motivation and techniques of our first study (2011.037633), we obtain similar constraints on the Higgs-curvature coupling $\xi$, while treating Starobinsky inflation more rigorously. Thus, we embed the SM in the modified-gravity scenario $R+R^2$, which introduces Starobinsky inflation naturally, with significant repercussions for the effective Higgs potential in the form of additional negative terms that destabilize the vacuum. Another important aspect lies in the definition of the end of inflation, since bubble nucleation is most prominent during its very last moments. Our results dictate stronger lower bounds on $\xi$ that are very sensitive to the final moments of inflation.
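The gravitational sector assumed here is the standard Starobinsky form (with $M$ the scalaron mass scale):
$$S = \frac{M_P^2}{2}\int d^4x\,\sqrt{-g}\left(R + \frac{R^2}{6M^2}\right).$$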
In this talk, I will present a short overview of the connection between particle physics and phase transitions in the early and very early universe. I will then focus on phase transitions during inflation and present recent results on how to use the stochastic spectral expansion to perform phenomenology calculations. I will also talk about the interplay between the electroweak phase transition, new physics at the TeV-scale and experimental constraints.
Bubble nucleation is a key ingredient in a cosmological first-order phase transition. The non-equilibrium bubble dynamics and the properties of the transition are controlled by the density perturbations in the hot plasma. We present, for the first time, the full solution of the linearized Boltzmann equation. Our approach, unlike the traditional one based on the fluid approximation, does not rely on any ansatz. We focus on the contributions arising from the top quark coupled to the Higgs field during a first-order electroweak phase transition. Our results differ significantly from those obtained in the fluid approximation, with sizeable differences in the friction acting on the bubble wall.
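Schematically, the equation solved is the Boltzmann equation for the top-quark distribution function $f$ in the wall background, linearized around equilibrium, $f = f_{\rm eq} + \delta f$:
$$\left(\partial_t + \vec{v}\cdot\nabla_{\vec{x}} - \frac{\nabla_{\vec{x}}\,m^2(x)}{2E}\cdot\nabla_{\vec{p}}\right) f = -\mathcal{C}[f],$$
whereas the fluid approximation truncates $\delta f$ to a few momentum moments (chemical potential, temperature and velocity perturbations).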
Extensions of the Higgs sector of the Standard Model allow for a rich cosmological history around the electroweak scale. We show that besides the possibility of strong first-order phase transitions, which have been thoroughly studied in the literature, also other important phenomena can occur, like the non-restoration of the electroweak symmetry or the existence of vacua in which the Universe becomes trapped, preventing a transition to the electroweak minimum. Focusing on the next-to-minimal two-Higgs-doublet model (N2HDM) of type II and taking into account the existing theoretical and experimental constraints, we identify the scenarios of electroweak symmetry non-restoration, vacuum trapping and first-order phase transition in the thermal history of the Universe. We analyze these phenomena and in particular their relation to each other, and discuss their connection to the predicted phenomenology of the N2HDM at the LHC. Our analysis demonstrates that the presence of a global electroweak minimum of the scalar potential at zero temperature does not guarantee that the corresponding N2HDM parameter space will be physically viable: the existence of a critical temperature at which the electroweak phase becomes the deepest minimum is not sufficient for a transition to take place, necessitating an analysis of the tunnelling probability to the electroweak minimum for a reliable prediction of the thermal history of the Universe.
We present a simple extension of the Standard Model with three right-handed neutrinos and an additional abelian flavor symmetry $U(1)_\text{F}$, with a non-standard leptonic charge $L_e-L_\mu-L_\tau$ for the lepton doublets and arbitrary right-handed charges. We present a see-saw realization of this scenario. The baryon asymmetry of the Universe is generated via thermal leptogenesis through CP-violating decays of the heavy sterile neutrinos. We present a detailed numerical solution of the relevant Boltzmann equations in two different scenarios: three quasi-degenerate heavy Majorana neutrino masses and a hierarchical mass spectrum.
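For a single dominant heavy state $N_1$ the rate equations take the familiar schematic form (with $z=M_1/T$, decay and scattering terms $D$, $S$, washout $W$, and CP asymmetry $\epsilon_1$):
$$\frac{dN_{N_1}}{dz} = -(D+S)\left(N_{N_1}-N_{N_1}^{\rm eq}\right), \qquad \frac{dN_{B-L}}{dz} = -\epsilon_1 D\left(N_{N_1}-N_{N_1}^{\rm eq}\right) - W\,N_{B-L};$$
in the quasi-degenerate scenario the corresponding equations are summed over the three heavy species.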
We study single field slow-roll inflation in the presence of $F(R)$ gravity in the Palatini formulation. In contrast to metric $F(R)$, when rewritten in terms of an auxiliary field and moved to the Einstein frame, Palatini $F(R)$ does not develop a new dynamical degree of freedom. However, it is not possible to solve analytically the constraint equation of the auxiliary field for a general $F(R)$. We propose a method that allows us to circumvent this issue and compute the inflationary observables. We apply this method to test scenarios of the form $F(R) = R + \alpha R^n$ and find that, as in the previously known $n=2$ case, a large $\alpha$ suppresses the tensor-to-scalar ratio $r$. We also find that models with $F(R)$ increasing faster than $R^2$ for large $R$ suffer from numerous problems, with possible implications on the theoretically allowed UV behaviour of such Palatini models. The talk is based on arXiv:2112.12149.
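The auxiliary-field rewriting referred to here is the standard one,
$$S = \frac{1}{2}\int d^4x\,\sqrt{-g}\left[F(\chi) + F'(\chi)\left(R-\chi\right)\right],$$
whose $\chi$ equation of motion enforces $\chi = R$ when $F''\neq 0$; after the Weyl rescaling to the Einstein frame, $\chi$ is fixed by an algebraic constraint involving the inflaton kinetic term and potential, and it is this constraint that cannot be solved analytically for general $F$.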
The $(g-2)_{\mu}$ anomaly is a longstanding problem in particle physics and many models are proposed to explain it. Leptoquark (LQ) models can be the solution to this anomaly because of the chiral enhancements. In this talk, we consider the models extended by the LQ and vector-like quark (VLQ) simultaneously. In the minimal LQ models, only the $R_2$ and $S_1$ representations can lead to the chiral enhancements. Here, we find one new $S_3$ solution to the anomaly in the presence of $(X,T,B)_{L,R}$ triplet. We also consider the one LQ and two VLQ extended models. Then, we propose new LQ search channels under the constraints of $(g-2)_{\mu}$. Besides the traditional $t\mu$ decay channel, the LQ can also decay into $T\mu$ final states, which will lead to the characteristic multi-top and multi-muon signals at hadron colliders.
In recent times, several hints of lepton flavour universality violation have been observed in semileptonic B decays, pointing towards the existence of New Physics beyond the Standard Model. In this context, we consider a new variant of the $U(1)_{L_{\mu}-L_{\tau}}$ gauge extension of the Standard Model, containing three additional neutral fermions $N_{e}, N_{\mu}, N_{\tau}$, along with a $(\bar{3},1,1/3)$ scalar leptoquark (SLQ) and an inert scalar doublet, to study the phenomenology of light dark matter, neutrino mass generation and flavour anomalies on a single platform. The lightest mass eigenstate of the $N_{\mu}, N_{\tau}$ neutral fermions plays the role of dark matter. The light gauge boson associated with the $U(1)_{L_\mu-L_\tau}$ gauge group mediates between the dark and visible sectors and helps to obtain the correct relic density. The spin-dependent WIMP-nucleon cross section is obtained in the leptoquark portal and checked for consistency with the CDMSlite bound. Further, we constrain the new model parameters using the branching ratios of various $b \to sll$ and $b \to s \gamma$ decay processes as well as the lepton flavour non-universality observables $R_{K^{(*)}}$, and then show the implications for the branching ratios of some rare semileptonic $B \to (K^{(*)}, \phi)+$ missing energy processes. The light neutrino mass in this model framework can be generated at one-loop level through a radiative mechanism.
I will briefly discuss the signatures and discovery prospects of several new physics models containing dark matter candidates at future lepton colliders. In particular, I will discuss the IDM as well as THDMa. Based on https://arxiv.org/abs/2203.07913
The Inert Doublet Model (IDM) is a simple extension of the Standard Model, introducing an additional Higgs doublet that brings in four new scalar particles. The lightest of the IDM scalars is stable and is a good candidate for a dark matter particle. The potential of discovering the IDM scalars in the experiment at the Compact Linear Collider (CLIC), an e$^+$e$^-$ collider proposed as the next generation infrastructure at CERN, has been tested for two high-energy running stages, at 1.5 TeV and 3 TeV centre-of-mass energy. The CLIC sensitivity to pair-production of the charged IDM scalars was studied using the full detector simulation for selected high-mass IDM benchmark scenarios and the semi-leptonic final state. To extrapolate the results to a wider range of IDM benchmark scenarios, the CLIC detector model in DELPHES was modified to take into account the $\gamma\gamma\to$ had. beam-induced background. Results of the study indicate that heavy charged IDM scalars can be discovered at CLIC for most of the considered benchmark scenarios, up to masses of the order of 1 TeV.
Many scenarios of physics beyond the Standard Model predict dark sectors containing new particles interacting only feebly with ordinary matter. Collider searches for these scenarios have largely focused on identifying signatures of new mediators, leaving much of the dark sector structure unexplored. We investigate the existence of a light dark-matter bound state, the darkonium ($\Upsilon_D$), predicted in minimal dark sector models, which can be produced through the reaction $e^+e^-\to \gamma\Upsilon_D$, with $\Upsilon_D\to A'A'A'$ and the dark photons $A'$ decaying to pairs of leptons or pions. This search explores new dark sector parameter space, illustrating the importance of $B$-factories in fully probing low-mass new physics. The results are based on the full data set of about 500 $\text{fb}^{-1}$ collected at the $\Upsilon(4S)$ resonance by the $BABAR$ detector at the PEP-II collider.
A resonant structure has been observed at ATOMKI in the invariant mass of electron-positron pairs produced after excitation of nuclei such as $^8$Be and $^4$He by means of proton beams. Such a resonant structure can be interpreted as the production of a hypothetical particle (X17) with a mass of around 17 MeV.
The MEG-II experiment at the Paul Scherrer Institut, whose primary physics goal is the search for the charged-lepton-flavour-violating process $\mu \rightarrow e \gamma$, is in a position to confirm and study this observation. MEG-II employs a proton source able to accelerate protons up to a kinetic energy of about 1 MeV. These protons are absorbed in a thin target, where they excite nuclear transitions that produce photons for the calibration of the MEG-II Xenon calorimeter.
By using a new, thinner target containing Li atoms, the $^7$Li(p,e$^+$e$^-$)$^8$Be process is being studied with a magnetic spectrometer comprising a cylindrical drift chamber and a system of fast scintillators. The aim is to reach a better invariant-mass resolution than previous experiments and to study the production of the X17.
A first dedicated data-taking period was conducted in 2022, during which the first internal pair-creation events were observed. We report the first results of this study of the X17 particle.
In this talk, I'll present results from a global fit of a Dirac fermion dark matter (DM) effective field theory using the GAMBIT software. We include operators up to dimension seven that describe the interactions between a gauge-singlet Dirac fermion and Standard Model quarks, gluons, and the photon. Our fit includes the latest constraints from the Planck satellite, direct and indirect detection experiments, and the LHC. For DM masses below 100 GeV, we find that it is impossible to satisfy all constraints simultaneously while maintaining EFT validity at high energies. For higher masses, large regions of parameter space exist where the EFT remains valid and reproduces the observed DM abundance.
The stability of particles in the cosmic soup is an important property, as it governs their evolution in the cosmos at both the perturbation and the background level. In this work, we update the constraints on the decay rate of decaying cold dark matter (DCDM), particularly in the case where the decay products are dark and massless or well within the relativistic limit. As a base case, we assume that all dark matter is "decayable". We then extend the analysis to the scenario where only a fraction of the dark matter can decay. We consider the latest Planck temperature and polarization measurements with lensing, together with BAO measurements from SDSS, to put significantly tighter constraints on the decay rate compared to previous work in the same direction.
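At the background level the scenario is governed by the coupled continuity equations (sketched here for the fully decayable case, with decay rate $\Gamma$ and massless dark-radiation daughters "dr"):
$$\dot{\rho}_{\rm dcdm} + 3H\rho_{\rm dcdm} = -\Gamma\rho_{\rm dcdm}, \qquad \dot{\rho}_{\rm dr} + 4H\rho_{\rm dr} = +\Gamma\rho_{\rm dcdm}.$$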
Dark matter interactions with Standard Model particles can inject energy at early times, altering the standard evolution of the early universe. In particular, this energy injection can perturb the spectrum of the cosmic microwave background (CMB) away from that of a perfect blackbody. For this study, I will discuss recent work to update the DarkHistory code package to more carefully track interactions among low energy electrons, hydrogen atoms, and radiation, in order to accurately compute the evolution of the CMB spectral distortion in the presence of Dark Matter energy injection. I will show results for the contribution to the spectral distortions from redshifts z < 3000 for arbitrary energy injection scenarios.
Relativistic protons and electrons in the extremely powerful jets of blazars may boost, via elastic collisions, the dark matter particles in the surroundings of the source to high energies. The blazar-boosted dark matter flux at Earth may be sizeable, larger than the flux associated with the analogous process of DM boosted by galactic cosmic rays, and relevant for the direct detection of dark matter particles lighter than 1 GeV with both target nuclei and electrons. From the null detection of a signal by XENON1T, MiniBooNE, and Borexino with nuclei (and by Super-K with electrons), we have derived limits on the dark matter-nucleus spin-independent and spin-dependent (and dark matter-electron) scattering cross sections which, depending on the modelling of the source, can improve on other currently available bounds for light DM candidates by one to five orders of magnitude.
We consider the well-motivated scenario of dark matter annihilation with a velocity-dependent cross section. At higher speeds, dark matter annihilation may be either enhanced or suppressed, which affects the relative importance of targets like galactic subhalos, the Galactic Center, or extragalactic halos. We consider a variety of new strategies for determining the associated J-factors, and for extracting information about the velocity-dependence of the cross section from gamma-ray data, including the study of non-Poisson fluctuations in the photon count, and the use of likelihood-free inference.
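For an annihilation cross section parameterized as $(\sigma v) \propto (v/c)^{n}$, the relevant generalization weights the squared density by the relative-velocity distribution (a schematic form, with $f(\vec{r},\vec{v})$ normalized so that $\int d^3v\, f = \rho$):
$$J_n = \int d\Omega \int d\ell \int d^3v_1\, d^3v_2\, f(\vec{r},\vec{v}_1)\, f(\vec{r},\vec{v}_2)\left(\frac{|\vec{v}_1-\vec{v}_2|}{c}\right)^{n},$$
which reduces to the standard $J$-factor for $n=0$.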
The large gap between a galactic dark matter subhalo's velocity and its own gravitational binding velocity means that dark matter soft-scattering on baryons can evaporate the subhalo if kinetic-energy transfer via low-momentum exchange is efficient. Small subhalos can evaporate before the dark matter thermalizes with baryons, owing to their low binding velocity. If dark matter acquires an electromagnetic dipole moment, the survival of low-mass subhalos places stringent limits on photon-mediated soft scattering. We calculate the subhalo evaporation rate via soft collisions with ionized gas and accelerated cosmic rays, and place an upper limit on the DM's electromagnetic form factor by requiring the survival of subhalos in the ionized Galactic interior. We also show that subhalos lighter than $10^{-5}M_{\odot}$ in the gaseous inner Galactic region are subject to evaporation via dark matter's effective electric and magnetic dipole moments below current direct detection limits.
We go beyond the state of the art by combining first-principles lattice results with an effective-field-theory approach, the Polyakov loop model, to explore the non-perturbative dark deconfinement-confinement phase transition and the generation of gravitational waves in a pure-gluon dark Yang-Mills theory. We further include fermions in different representations in the dark sector. Employing the Polyakov-Nambu-Jona-Lasinio (PNJL) model, we discover that the relevant gravitational-wave signatures depend strongly on the various representations. We also find a remarkable interplay between the deconfinement-confinement and chiral phase transitions. In both scenarios, the future Big Bang Observer experiment has a higher chance of detecting the gravitational-wave signals.
Physics in (canonical) quantum gravity needs to be manifestly diffeomorphism-invariant. Consequently, physical observables need to be formulated in terms of manifestly diffeomorphism-invariant operators, which are necessarily composite. This makes their evaluation involved in general, even if the concrete implementation of quantum gravity is (semi-)perturbatively treatable.
A similar problem exists in flat-space gauge theories, even at arbitrarily weak coupling. In such cases a mechanism developed by Fröhlich, Morchio and Strocchi turns out to be highly successful in giving analytical access to the bound-state properties. As will be shown, the conditions under which it can be applied are also satisfied by many quantum gravity theories. Its application will be illustrated by applying it to a canonical quantum gravity theory to determine the leading properties of curvature excitations and particles with and without spin.
The all-order structure of scattering amplitudes is greatly simplified by the use of (generalized) Wilson line operators, describing (subleading) soft emissions from straight lines extending to infinity. In this talk I will review how these techniques (originally developed for QCD phenomenology) can be naturally applied to gravitational scattering. At the quantum level, we find a convenient way to derive the exponentiation of the (subleading) graviton Reggeization. At the classical level, the formalism provides a powerful tool for the computation of observables relevant in the gravitational wave program.
The radion equilibrium in the Randall-Sundrum model is guaranteed by the backreaction of a bulk scalar field. We studied the radion dynamics in an extended scenario in which an intermediate brane exists between the UV and IR branes. We conducted an analysis in terms of Einstein's equations and the effective Lagrangian after applying the Goldberger-Wise mechanism. Our result elucidates that, in the multibrane RS model, a unique radion field is conjectured to be the legitimate degree of freedom in the RS metric perturbation.
We present a method to obtain a scalar potential at tree level from a pure gauge theory on nilmanifolds, a class of negatively-curved compact spaces, and discuss the spontaneous symmetry breaking mechanism induced in the residual Minkowski space after compactification at low energy. We show that the scalar potential is completely determined by the gauge symmetries and the geometry of the compact manifold. In order to allow for simple analytic calculations we consider three extra space dimensions as the minimal example of a nilmanifold, therefore considering a pure Yang-Mills theory in seven dimensions. We further investigate the effective potential at one-loop and the spectrum when fermions are included.
While CP violation has not been observed so far in processes mediated by the strong force, the QCD Lagrangian admits a CP-odd topological term proportional to the so-called theta angle, which weighs the contributions to the partition function from different topological sectors. The observational bounds are usually interpreted as demanding a severe tuning of theta against the phases of the quark masses, which constitutes the strong CP problem. In this talk, we challenge this view and argue that when taking the correct 4d infinite volume limit the theta angle drops out of correlation functions, so that it becomes unobservable and the CP symmetry is preserved. We arrive at this result by either using instanton computations or by relying on general arguments based on the cluster decomposition principle and the index theorem.
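The term in question is
$$\mathcal{L}_\theta = \theta\,\frac{g_s^2}{32\pi^2}\,G^{a}_{\mu\nu}\tilde{G}^{a\,\mu\nu}, \qquad \tilde{G}^{a\,\mu\nu} \equiv \tfrac{1}{2}\epsilon^{\mu\nu\rho\sigma}G^{a}_{\rho\sigma},$$
a total derivative that contributes only through the integer topological charge weighting the different sectors of the partition function.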
ALICE is the LHC experiment specifically designed to study the properties of the quark-gluon plasma (QGP), a deconfined state of matter created in ultrarelativistic heavy-ion collisions. In this context, light-flavour particle production measurements play a key role, as they probe statistical hadronization and partonic collectivity. Recent measurements in small collision systems (pp and p-Pb) have highlighted a progressive onset of collective phenomena, with charged-particle multiplicity as the driving quantity for all the considered observables. This evidence raises the question: what is the smallest hadronizing system that features collective-like phenomena? For this reason, small collision systems play a key role in the study of particle production in fine-grained multiplicity intervals, from low centre-of-mass energies to higher ones. In this contribution, final results on the production of light-flavour hadrons in pp collisions at $\sqrt{s}$ = 5.02 TeV will be presented, extending to low multiplicity the observations reported in pp, p-Pb and A-A interactions. Final considerations will be discussed concerning the system-size dependence of charged-particle distributions in ultra-thin multiplicity intervals. Finally, a first look at the newest 900 GeV pp data sample, collected in October 2021, will also be offered, reaching the lowest multiplicity ever probed at the LHC.
One of the main goals of the STAR experiment is to map the QCD phase diagram. The flow harmonics of azimuthal anisotropy ($v_{2}$ and $v_{3}$) of produced particles are sensitive to the initial dynamics of the medium. The RHIC Beam Energy Scan Phase-I (BES-I) program demands precision measurements of $v_{2}$ and $v_{3}$, specifically for $\phi$ mesons and multi-strange hadrons, in the low-energy regime.
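The harmonics are defined through the standard Fourier expansion of the azimuthal distribution of produced particles relative to the event plane $\Psi_n$:
$$\frac{dN}{d\varphi} \propto 1 + 2\sum_{n\ge1} v_n \cos\!\big[n\left(\varphi-\Psi_n\right)\big],$$
with $v_2$ the elliptic and $v_3$ the triangular flow coefficient.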
STAR has recently finished the data taking for Beam Energy Scan Phase-II (BES-II) program with higher statistics, improved detector condition, and wider pseudorapidity coverage compared to what was available during BES-I program. In this talk, we will present the measurements of $v_{2}$ and $v_{3}$ of strange and multi-strange hadrons ($K_{S}^{0}$, $\Lambda (\bar{\Lambda})$, $\phi$, $\Xi^{-} (\bar{\Xi}^{+})$, and $\Omega^{-} (\bar{\Omega}^{+})$) at $\sqrt{s_{NN}}$ = 14.6 and 19.6 GeV. The centrality dependence, the number of constituent quark (NCQ) scaling, and baryon to anti-baryon difference in $v_{2}$ and $v_{3}$ will be presented. Finally, the physics implications of our measurements in the context of partonic collectivity will be discussed.
Strange and multi-strange hadrons have a small hadronic cross-section compared to light hadrons, making them an excellent probe for understanding the initial stages of relativistic heavy-ion collisions and the dynamics of QCD matter. Isobar collisions, $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr, at $\sqrt{s_{\mathrm {NN}}}$ = 200 GeV have been performed at RHIC. These collisions are considered an effective way to minimize the flow-driven background in searches for a possibly small CME signal. The deformation parameters differ between the two species, and flow measurements are highly sensitive to them. Elliptic flow measurements in these collisions also give direct information about the initial-state spatial anisotropies. The collected datasets include approximately two billion events for each of the isobar species and provide a unique opportunity for statistics-hungry measurements such as flow coefficients of multi-strange hadrons.
In this talk, we will present the elliptic flow ($v_{2}$) of $K_{s}^{0}$, $\Lambda$, $\bar{\Lambda}$, $\phi$, $\Xi^{-}$, $\bar{\Xi}^{+}$, $\Omega^{-}$, and $\bar{\Omega}^{+}$ at mid-rapidity ($|y|$ $<$ 1.0) for Ru+Ru and Zr+Zr collisions at $\sqrt{s_{\mathrm {NN}}}$ = 200 GeV. The dependence of $v_{2}$ on centrality and transverse momentum ($p_{T}$) will be shown. The results will be compared with data from other collision systems like Cu+Cu, Au+Au, and U+U. The physics implications of such measurements in the context of nuclear deformation in isobars will be also discussed.
One of the key challenges of hadron physics today is understanding the origin of strangeness enhancement in high-energy hadronic collisions, i.e. the increase of (multi)strange hadron yields relative to non-strange hadron yields with increasing charged-particle multiplicity. In particular, what remains unclear is the relative contribution to this phenomenon from hard and soft QCD processes and the role of initial-state effects such as effective energy. The latter is the difference between the total centre-of-mass energy and the energy of leading baryons emitted at forward/backward rapidities. The superior tracking and particle-identification capabilities of ALICE make this detector unique in measuring (multi)strange hadrons via the reconstruction of their weak decays over a wide momentum range. The effective energy is measured using zero-degree hadronic calorimeters (ZDC).
In this talk, recent results on K$^0_S$ and $\Xi$ production in- and out-of-jets in pp collisions at $\sqrt{s}$=13 TeV using the two-particle correlation method are presented. To address the role of initial and final state effects, a double differential measurement of (multi)strange hadron production as a function of multiplicity and effective energy is also presented. The results of these measurements are compared to expectations from state-of-the-art phenomenological models implemented in commonly used Monte Carlo event generators.
The LHCb spectrometer has the unique capability to function as a fixed-target experiment by injecting gas into the LHC beampipe while proton or ion beams are circulating. The resulting beam+gas collisions cover an unexplored energy range that is above previous fixed-target experiments, but below the top RHIC energy for AA collisions. Here we present new results on antiproton and charm production from pHe, pNe, and PbNe fixed-target collisions at LHCb. Comparisons with various theoretical models of particle production and transport through the nucleus will be discussed.
The MoEDAL experiment, deployed at IP8 on the LHC ring, was the first dedicated search experiment to take data at the LHC, in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, and massive slowly moving charged particles in p-p and heavy-ion collisions. We will report on our most recent result, recently published in Nature: our search for magnetic monopole production via the Schwinger mechanism.
Schwinger showed that electrically-charged particles can be produced in a strong electric field by quantum tunnelling through the Coulomb barrier. By electromagnetic duality, if magnetic monopoles (MMs) exist, they would be produced by the same mechanism in a sufficiently strong magnetic field. Unique advantages of the Schwinger mechanism are that its rate can be calculated using semiclassical techniques without relying on perturbation theory, unlike magnetic monopole production via the Drell-Yan mechanism. Also, importantly, production of non-pointlike magnetic monopoles is not exponentially suppressed by the finite size of the monopole.
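In the weak-field, weak-coupling limit the leading semiclassical exponent is the magnetic dual of Schwinger's classic result (a schematic form; coupling and finite-size corrections modify it),
$$\Gamma \propto \exp\!\left(-\frac{\pi m^2}{gB}\right),$$
for a monopole of mass $m$ and magnetic charge $g$.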
Pb-Pb heavy-ion collisions at the LHC produce the strongest known magnetic fields in the current Universe. This result is arguably the first at the LHC that relies directly on the unprecedented magnetic fields produced in heavy-ion collisions.
Very detailed measurements of Higgs boson properties and interactions can be performed with the full Run 2 pp collision dataset collected at 13 TeV by using its decays into bosons, shedding light on the electroweak symmetry breaking mechanism. This talk presents the latest measurements of the Higgs boson coupling properties by the ATLAS experiment in various bosonic decay channels. Results on production-mode cross sections, Simplified Template Cross Sections, and their interpretations are presented. Specific scenarios of physics beyond the Standard Model are tested, as well as a generic extension in the framework of the Standard Model Effective Field Theory.
Thanks to the statistics of pp collisions collected by the ATLAS experiment at 13 TeV during LHC Run 2, detailed measurements of Higgs boson properties and interactions can be performed using its decays into fermions, shedding light on the properties of the Yukawa interactions. This talk presents the latest measurements of Higgs boson properties by the ATLAS experiment in various fermionic decay channels, including Higgs production in association with top quarks, Simplified Template Cross Sections, and their interpretations. Specific scenarios of physics beyond the Standard Model are tested, as well as a generic extension in the framework of the Standard Model Effective Field Theory.
This talk will cover measurements of Higgs boson differential cross sections in fermionic decay channels and in ttH production, including fiducial differential cross-sections and STXS results.
This talk will cover measurements of Higgs boson differential cross sections in bosonic decay channels, including fiducial differential cross-sections and STXS results.
With the pp collision dataset collected at 13 TeV, detailed measurements of Higgs boson properties can be performed. The Higgs kinematic properties can be measured with increasing granularity, and interpreted to constrain beyond-the-Standard-Model phenomena. This talk presents the measurements of the Higgs boson fiducial and differential cross sections exploiting the Higgs decays into bosons, as well as their combination and interpretations.
The discovery of the Higgs boson ten years ago and the successful measurement of the Higgs boson couplings to third-generation fermions by ATLAS and CMS mark great milestones for HEP. The much weaker coupling to second-generation quarks predicted by the SM makes the measurement of the Higgs-charm coupling far more challenging. With the full Run 2 data collected by the CMS experiment, much progress has been made in constraining this coupling. In this talk, we present the latest direct and indirect measurements of the Higgs-charm coupling by the CMS experiment. Prospects for future improvements are also given.
With the full Run 2 pp collision dataset collected at 13 TeV, the interactions of the Higgs boson with third-generation fermions have been established. To understand the Yukawa interaction mechanism, it is crucial to establish the interactions with second-generation fermions. This talk presents the latest searches for Higgs boson decays into second-generation fermions, as well as for other rare Higgs decay modes, including decays into quarkonia plus a photon.
T2K is a long-baseline neutrino oscillation experiment which studies the oscillations of neutrinos from a beam produced using the J-PARC accelerator. The beam neutrinos propagate over 295 km before reaching the Super-Kamiokande detector, where they can be detected after having oscillated. The ability of the experiment to run with either a neutrino or an antineutrino beam makes it well suited to study the differences between the oscillations of neutrinos and antineutrinos, in particular to look for a possible violation of CP symmetry in the lepton sector.
T2K has produced a new analysis of its first 10 years of data, with improved models to describe neutrino interactions and fluxes as well as additional samples for its near and far detector analyses. We will present the results on the measurement of the parameters describing neutrino oscillations obtained with this new analysis.
T2K is undergoing major upgrades, with improved beam power, an upgraded near detector and the loading of Super-Kamiokande with gadolinium. The status of these upgrades and prospects for future T2K measurements will be discussed. In parallel, T2K has been working on joint analyses with other experiments, and we will give an update on the two joint analyses in preparation, with the Super-Kamiokande and NOvA collaborations respectively.
Neutrino oscillation physics has now entered the precision era. In parallel with needing larger detectors to collect more data with, future experiments further require a significant reduction of systematic uncertainties with respect to what is currently available. In the neutrino oscillation measurements from the T2K experiment the systematic uncertainties related to neutrino interaction cross sections are currently the most dominant. To reduce this uncertainty a much improved understanding of neutrino-nucleus interactions is required. In particular, it is crucial to better understand the nuclear effects which can alter the final state topology and kinematics of neutrino interactions in such a way which can bias neutrino energy reconstruction and therefore bias measurements of neutrino oscillations.
The upgraded ND280 detector, which will consist of a fully active Super-Fine-Grained Detector (SuperFGD), two high-angle TPCs (HA-TPCs) and six TOF planes, will directly confront our limited understanding of neutrino interactions thanks to its full polar-angle acceptance and a much lower proton tracking threshold. Furthermore, neutron-tagging capabilities, in addition to precision timing information, will allow the upgraded detector to estimate neutron kinematics from neutrino interactions. These improvements give access to a much larger kinematic phase space, which in turn allows techniques such as the analysis of transverse kinematic imbalances to provide remarkable constraints on the nuclear physics relevant to T2K analyses.
The SuperFGD, a highly segmented scintillator detector acting as a fully active target for neutrino interactions, is a novel device with dimensions of ~2 × 1.8 × 0.6 m³ and a total mass of about 2 tons. It consists of about 2 million small scintillator cubes of 1 cm³ each. The signal readout from each cube is provided by wavelength-shifting fibres connected to MPPCs. The total number of channels will be ~60,000; the cubes have already been produced and assembled in x-y layers.
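As a rough cross-check of these numbers (assuming, for illustration, a grid of roughly $200\times180\times60$ cubes of 1 cm$^3$, consistent with the quoted dimensions, and one readout channel per fibre):
$$N_{\rm cubes} \approx 200\times180\times60 \approx 2.2\times10^{6}, \qquad N_{\rm ch} \approx 200{\cdot}180 + 200{\cdot}60 + 180{\cdot}60 \approx 5.9\times10^{4},$$
since each of the three orthogonal fibre projections is read out once per fibre.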
The HA-TPCs will be used for 3D track reconstruction, momentum measurement and particle identification. These TPCs, with overall dimensions of 2 × 2 × 0.8 m³, will be equipped with 32 resistive Micromegas (ERAM) modules. The thin field cage (3 cm thick, 4% of a radiation length) will be realized with laminated panels of aramid and honeycomb covered with a kapton foil with copper strips. The 34 × 42 cm² resistive bulk Micromegas will use a 500 kΩ/square DLC foil to spread the charge over the pad plane, each pad being ~1 cm². The electronics are based on the AFTER chips.
The time-of-flight (TOF) detector will consist of six planes with about 5 m² total surface area surrounding the SuperFGD and the TPCs. Each plane has been assembled from 2.2 m long cast plastic scintillator bars, with light collected by arrays of large-area MPPCs at both ends.
In this talk we will present the status of the construction of the different subdetectors towards their installation at J-PARC, expected in the first half of 2023, and we will describe the expected performance of this new detector.
NOvA is a long-baseline neutrino oscillation experiment with a beam and near detector at Fermilab and a far detector 810 km away in northern Minnesota. It features two functionally identical scintillator detectors. By measuring muon neutrino disappearance and electron neutrino appearance as a function of energy in both neutrinos and antineutrinos, NOvA can measure the parameters of the PMNS matrix which describe the known 3-flavor oscillations as well as constrain potential new physics which impacts neutrino oscillations. In this talk, we will present recent results from NOvA on both standard and non-standard oscillations.
The NOvA experiment is a long-baseline accelerator neutrino oscillation experiment. NOvA uses the upgraded NuMI beam from Fermilab and measures electron neutrino appearance and muon neutrino disappearance at its Far Detector in Ash River, Minnesota. NOvA is a pioneer in the neutrino community to use classification and regression convolutional neural networks with direct pixel map inputs for particle identification and energy reconstruction. NOvA is also developing new deep-learning techniques to improve interpretability, robustness, and performance for the next generation of analyses. In this talk, I will discuss the development of deep-learning-based reconstruction methods at NOvA.
T2K is a long-baseline neutrino experiment producing a beam of muon neutrinos and antineutrinos at the Japan Proton Accelerator Research Complex (J-PARC) and measuring their oscillation by comparing the measured neutrino rate and spectrum at a near detector complex, located at J-PARC, and at the water-Cherenkov detector Super-Kamiokande, located 295 km away.
This intense neutrino beam and the set of near and far detectors offer a unique opportunity to measure neutrino cross sections for interactions on different nuclei (C and O primarily), for different neutrino energies and flavours. In particular, the combination of near detectors at different off-axis angles enables improved control of the energy dependence of the neutrino cross section. T2K is also pioneering new analysis techniques which target exclusive measurements of the neutrino-interaction final state, including the kinematics of its hadronic part. An overview of the most recent T2K cross-section analyses will be presented, including a new measurement of coherent pion production in neutrino and antineutrino scattering on carbon nuclei.
The scintillator-based near detector of the NOvA oscillation experiment sits in the NuMI neutrino beam, and thus has access to unprecedented neutrino scattering datasets. Thanks to the reversible focusing horns, large samples of both neutrino and antineutrino interactions have been recorded. Leveraging these datasets, NOvA can make a variety of double-differential cross-section measurements with world-leading statistical precision to constrain neutrino interaction models and inform oscillation experiments. In this talk we will present recent cross section results for both neutrinos and antineutrinos.
MINER$\nu$A is a neutrino-nucleus interaction experiment in the Neutrino Main Injector (NuMI) beam at Fermilab. With the $\langle E_{\nu}\rangle = 6\,\,\text{GeV}$ Medium Energy run complete and $12 \times 10^{20}$ protons on target delivered in neutrino and antineutrino mode, MINER$\nu$A combines high-statistics reach with the ability to make precise cross-section measurements in more than one dimension. Analyses of plastic-scintillator and nuclear-target data constrain interaction models, providing feedback to neutrino event generators and driving down systematic uncertainties for future oscillation experiments. Specifically, MINER$\nu$A probes both the intrinsic neutrino scattering and the extrinsic nuclear effects which complicate the interactions. Generally, nuclear effects can be separated into initial- and final-state interactions, neither of which is known a priori to the precision needed for oscillation experiments. By fully exploiting the precisely measured final-state particles emerging from the different target materials in the MINER$\nu$A detector, these effects can be accurately probed. In this talk, the newest MINER$\nu$A analyses since the last ICHEP, encompassing a broad physics range, will be presented: inclusive cross-section measurements in the tracker and in situ measurements of the delivered flux, allowing detailed comparisons with generator predictions and control of systematic flux uncertainties, respectively. Moreover, by exploiting the significant statistical reach offered by the large exposure, MINER$\nu$A measures rare processes.
With proton-proton collisions about to restart at the Large Hadron Collider (LHC), the ATLAS detector will double the integrated luminosity the LHC accumulated in the ten previous years of operation. After this data-taking period the LHC will undergo an ambitious upgrade program to be able to deliver an instantaneous luminosity of $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, allowing more than 3 ab$^{-1}$ of data to be collected at $\sqrt{s}=$14 TeV. This unprecedented data sample will allow ATLAS to perform several precision measurements to constrain the Standard Model (SM) in yet unexplored phase space, in particular in the Higgs sector, which is only accessible at the LHC. The price to pay for collecting such a rich data sample is upgrading the detector to cope with challenging experimental conditions that include huge levels of radiation and pile-up about a factor of 5 higher than at present. The ATLAS upgrade comprises a completely new all-silicon tracker with extended rapidity coverage that will replace the current inner tracker; a redesigned trigger and data acquisition system for the calorimeters and muon systems, allowing the implementation of a free-running readout system; and a new subsystem, the High Granularity Timing Detector, that will aid track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. A final ingredient, relevant to almost all measurements, is a precise determination of the delivered luminosity with systematic uncertainties below the percent level. This challenging task will be achieved by combining information from several detector systems using different and complementary techniques.
This presentation, starting from the HL-LHC physics goals, will describe the ATLAS detector upgrade status and the main results obtained with the prototypes, giving a synthetic, yet global, view of the whole upgrade project.
The increase of the particle flux at the HL-LHC, with instantaneous luminosities up to $L \simeq 7.5 \times 10^{34}$ cm$^{-2}$s$^{-1}$, will have a severe impact on the ATLAS detector performance. The forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) will be installed in front of the LAr end-cap calorimeters for pile-up mitigation and luminosity measurement. The HGTD is a novel detector introduced to augment the new all-silicon Inner Tracker in the pseudo-rapidity range from 2.4 to 4.0, adding the capability to measure charged-particle trajectories in time as well as space. Two double-sided layers of silicon sensors will provide precision timing information for MIP particles with a resolution of 30 ps per track in order to assign each particle to the correct vertex. Readout cells have a size of 1.3 × 1.3 mm², leading to a highly granular detector with 3.7 million channels. Low Gain Avalanche Detector technology has been chosen as it provides suitable gain to reach the large signal-over-noise ratio needed. Requirements and overall specifications of the HGTD will be presented, as well as the technical design and the project status. The ongoing R&D effort to study the sensors, the readout ASIC, and the other components, supported by laboratory and test-beam results, will also be presented.
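The per-track target can be read as the combination of the individual hit measurements; as an illustration with assumed numbers, for $N$ independent hits of per-hit resolution $\sigma_{\rm hit}$,
$$\sigma_{\rm track} = \frac{\sigma_{\rm hit}}{\sqrt{N}}, \qquad \sigma_{\rm hit} \approx 50\ {\rm ps},\ N \approx 3 \ \Rightarrow\ \sigma_{\rm track} \approx 29\ {\rm ps}.$$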
The Upgrade II of the LHCb experiment is proposed for the long shutdown 4 of the LHC. The upgraded detector will operate at a maximum luminosity of $1.5\times10^{34}$ cm$^{-2}$s$^{-1}$, with the aim of integrating ~300 fb$^{-1}$ over the lifetime of the high-luminosity LHC (HL-LHC). The collected data will allow the flavour-physics opportunities of the HL-LHC to be fully exploited, probing a wide range of physics observables with unprecedented accuracy. The accomplishment of this ambitious programme requires that the current detector performance be maintained at the maximum expected pile-up of ~40, and even improved in certain specific domains. To meet this challenge, it is foreseen to replace all of the existing spectrometer components in order to increase the granularity, reduce the amount of material in the detector, and exploit new technologies, including precision timing of the order of a few tens of picoseconds. In this talk the physics goals of the project will be reviewed, as well as the detector design and the technology options which will make it possible to meet the desired specifications.
The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive upgrade program to prepare for the challenging conditions of the High-Luminosity LHC (HL-LHC). A new timing detector in CMS will measure minimum ionizing particles (MIPs) with a time resolution of ~40-50 ps per hit and coverage up to $|\eta| = 3$. The precision time information from this MIP Timing Detector (MTD) will reduce the effects of the high levels of pileup expected at the HL-LHC and will bring new and unique capabilities to the CMS detector. The endcap region of the MTD, called the endcap timing layer (ETL), must endure high fluences, motivating the use of thin, radiation-tolerant silicon sensors with fast charge collection. As such, the ETL will be instrumented with silicon low-gain avalanche diodes (LGADs), covering the high-radiation pseudo-rapidity region $1.6 < |\eta| < 3.0$. The LGADs will be read out with the ETROC readout chip, which is being designed for precision timing measurements. We will present the extensive developments and progress made for the ETL detector, from sensors to readout electronics, mechanical design, and plans for system testing. In addition, we will present test beam results, which demonstrate the desired time resolution.
The MIP Timing Detector (MTD) is a new sub-detector planned for the Compact Muon Solenoid (CMS) experiment at CERN, aimed at maintaining the excellent particle identification and reconstruction efficiency of the CMS detector during the High Luminosity LHC (HL-LHC) era. The MTD will provide new and unique capabilities to CMS by measuring the time-of-arrival of minimum ionizing particles with a resolution of 30-40 ps for MIP signals at a rate of 2.5 Mhit/s per channel at the beginning of HL-LHC operation. The precision time information provided by the MTD will reduce the effects of the high levels of pileup expected at the HL-LHC by enabling the use of 4D reconstruction algorithms. The central barrel timing layer (BTL) of the MTD uses a sensor technology consisting of LYSO:Ce scintillating crystal bars coupled to SiPMs, one at each end of the bar, read out with TOFHIR ASICs for the front-end. We present an overview of the MTD BTL design and show test beam results demonstrating the achievement of the target time resolution of about 30 ps.
The intriguing phenomena emerging in high-density QCD matter are being widely studied in the heavy-ion program at the LHC and will be understood more deeply in the high-luminosity LHC (HL-LHC) era. The CMS experiment is undergoing its Phase II upgrade towards the HL-LHC era. A new timing detector is proposed, with a timing resolution of 30 ps for minimum-ionizing particles (MIPs). The MIP Timing Detector (MTD) will provide particle identification (PID) capability over a large acceptance, up to $|\eta|<3$, through time-of-flight (TOF). Combining the MTD with the other new sub-detectors, a tracker with acceptance $|\eta|<4$ and high-granularity calorimeters with acceptance covering $|\eta|<5$, will enable deep studies of high-density QCD matter in ultra-relativistic heavy-ion collisions. In this presentation, the expected performance of a broad range of measurements in the heavy-ion program using TOF-PID will be discussed. These include the (3+1)D evolution of heavy-flavor quarks, the QGP medium response to high-$p_\mathrm{T}$ parton energy loss at wide jet-cone angles, collectivity in small systems, fluctuations and transport of initially conserved charges, and light-nuclei physics.
We report results for the branching fractions $\mathcal{B}(\bar{B}^0\to D^{*+}\pi^{-})$ and $\mathcal{B}(\bar{B}^0\to D^{*+}K^{-})$, measured using $772\times 10^{6}$ $B$-meson pairs recorded by the Belle experiment at the KEKB asymmetric-energy $e^{+}e^{-}$ collider. The measurements provide a precise test of QCD factorization. We also report studies of the branching fractions of $B^+ \to D_s^{(*)}(\eta,K_S) / D^+(\eta,K_S)$ decays and of the time-dependent CP asymmetry in $B \to \eta_c K_S$. The latter measurement provides information about $\sin{2\phi_1}$.
The latest studies of beauty meson decays to open charm final states from LHCb are presented. Several first observations and branching fraction measurements using Run 1 and Run 2 data samples are shown. These decay modes will provide important spectroscopy information and inputs to other analyses.
The tree-level determination of the CKM angle gamma is a standard candle measurement of CP violation in the Standard Model. The latest LHCb results from measurements of CP violation using beauty to open charm decays are presented. These include measurements using the full LHCb Run 1+2 data sample and the latest LHCb gamma & charm mixing combination.
The investigation of $B$-meson decays to charmed and charmless hadronic final states is a keystone of the Belle II physics program. It allows for theoretically reliable and experimentally precise constraints on the CKM Unitarity Triangle fit, and is sensitive to effects from non-SM physics. Results on branching ratios, direct CP-violating asymmetries, and polarization of various charmless B decays are presented, with particular emphasis on those for which Belle II will have unique sensitivity. New results from combined analyses of Belle and Belle II data to determine the CKM angle $\phi_3$ (or $\gamma$) are also presented. Perspectives on the precision achievable on the CKM angles and on the so-called "$K\pi$ puzzle" are also discussed.
The ATLAS experiment has performed measurements of B-meson rare decays proceeding via suppressed electroweak flavour changing neutral currents, and of mixing and CP violation in the neutral $B_s$ meson system. This talk will focus on the latest results from the ATLAS collaboration, such as rare processes $B^0_s \to \mu \mu$ and $B^0_d \to \mu \mu$, and $CP$ violation in $B^0_s \to J/\psi\ \phi$ decays. In the latter, the Standard Model predicts the $CP$ violating mixing phase, $\phi_s$, to be very small and its SM value is very well constrained, while in many new physics models large $\phi_s$ values are expected. The latest measurements of $\phi_s$ and several other parameters describing $B^0_s \to J/\psi\ \phi$ decays will be reported.
The presence of charmonium in the final state of B decays is a very clean experimental signature that allows the efficient collection of large samples of these decays. In addition, decays of beauty hadrons to final states with charmonium resonances proceed mainly through $b\to c\bar{c}q$ tree-level transitions. The negligible penguin pollution makes these decays excellent probes of Standard Model quantities such as the $B^0$ and $B_s$ mixing phases. In this work we present the most recent LHCb results on these decays.
The HERAPDF2.0 ensemble of parton distribution functions (PDFs) was introduced in 2015. Its final stage is presented here: a next-to-next-to-leading-order (NNLO) analysis of the HERA inclusive deep-inelastic $ep$ scattering data together with the jet data published by the H1 and ZEUS collaborations. A perturbative QCD fit of $\alpha_s(M_Z^2)$ and the PDFs simultaneously was performed, with the result $\alpha_s(M_Z^2) = 0.1156 \pm 0.0011~{\rm (exp)}~ ^{+0.0001}_{-0.0002}~{\rm (model}$ ${\rm + parameterisation)}~ \pm 0.0029~{\rm (scale)}$. The PDF sets of HERAPDF2.0Jets NNLO were determined in separate fits using two fixed values of $\alpha_s(M_Z^2)$, $\alpha_s(M_Z^2)=0.1155$ and $0.118$, the latter value having already been chosen for the published HERAPDF2.0 NNLO analysis based on HERA inclusive DIS data only. The different PDF sets are presented, evaluated and compared. The consistency of the PDFs determined with and without the jet data demonstrates the consistency of the HERA inclusive and jet-production cross-section data. The inclusion of the jet data reduced the uncertainty on the gluon PDF. Predictions based on the PDFs of HERAPDF2.0Jets NNLO give an excellent description of the jet-production data used as input.
We discuss recent developments related to the latest release of the NNPDF family of global analyses of parton distribution functions: NNPDF4.0. This PDF set expands the NNPDF3.1 determination with 44 new datasets, mostly from the LHC. We derive a novel methodology through hyperparameter optimisation, leading to an efficient fitting algorithm built upon stochastic gradient descent. Theoretical improvements in the PDF description include a systematic implementation of positivity constraints and integrability requirements. We validate our methodology by means of closure tests and "future tests" (i.e. tests of backward and forward data compatibility), and assess its stability, specifically upon changes of the PDF parametrization basis. We compare NNPDF4.0 with its predecessor as well as with other recent global fits, and study its phenomenological implications for representative collider observables. We also discuss recent results of related studies building upon the open-source NNPDF framework.
We present fits to determine parton distribution functions (PDFs) using a diverse set of measurements from the ATLAS experiment at the LHC, including inclusive W and Z boson production, ttbar production, W+jets and Z+jets production, inclusive jet production and direct photon production. These ATLAS measurements are used in combination with deep-inelastic scattering data from HERA. Particular attention is paid to the correlation of systematic uncertainties within and between the various ATLAS data sets and to the impact of model, theoretical and parameterisation uncertainties.
With a detector instrumented in the forward region, the Z-boson events collected in the LHCb acceptance can be used to probe the proton structure in a phase-space region not accessible to other LHC experiments. In this talk, the latest Z-boson production measurements will be presented, as well as the measurement of $Z$+$c$-jet events for probing intrinsic charm. The potential contributions of LHCb data to global parton distribution function fits will be demonstrated via these analyses, including the sea quarks at larger $x$, the transverse-momentum-dependent parton distribution functions, and the intrinsic charm content of the proton.
The QCD strong coupling (alpha_s) and the parton distribution functions (PDFs) of the proton are fundamental ingredients for phenomenology at high-energy facilities such as the Large Hadron Collider (LHC).
It is therefore of crucial importance to estimate any theoretical uncertainties associated with them.
Both alpha_s and PDFs obey their own renormalisation-group equations (RGEs) whose solution determines their scale evolution.
Although the kernels that govern these RGEs have been computed to very high perturbative precision, they are not exactly known.
In this contribution, we present a procedure that allows us to assess the uncertainty on the evolution of alpha_s and PDFs due to our imperfect knowledge of their respective evolution kernels.
Inspired by transverse-momentum and threshold resummation, we introduce additional scales, which we dub resummation scales, that can be varied to estimate the uncertainty in the evolution of alpha_s and PDFs at any scale.
As a test case, we consider inclusive deep-inelastic-scattering structure functions in a region relevant for the extraction of PDFs.
We study the effect of varying these resummation scales and compare it to the usual renormalisation and factorisation scale variations.
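For reference, the truncated renormalisation-group equations in question can be written, in a common convention with $a_s \equiv \alpha_s/4\pi$ and the kernels known up to some finite order $N$, as
$$\mu^2 \frac{\partial a_s(\mu^2)}{\partial \mu^2} = -\sum_{n=0}^{N} \beta_n\, a_s^{n+2}(\mu^2), \qquad \mu^2 \frac{\partial f_i(x,\mu^2)}{\partial \mu^2} = \sum_j \int_x^1 \frac{dz}{z}\, P_{ij}\big(z, a_s(\mu^2)\big)\, f_j\!\Big(\frac{x}{z},\mu^2\Big),$$
with $P_{ij} = \sum_{n=0}^{N} a_s^{n+1} P_{ij}^{(n)}$; the resummation-scale variation described above is designed to probe the impact of the unknown orders beyond $N$.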
We present EKO and yadism, a new DGLAP evolution code and a new DIS code, respectively, both able to provide PDF-independent operators for fast evaluation of predictions.
They both support a wide range of physics and computational features, with a Python API exposing the individual ingredients (e.g. the strong-coupling evolution) and file-based output for language-agnostic consumption of the results. They are both interfaced with a third library, PineAPPL, for grid storage.
Both projects have been developed as open, modular, and extensible frameworks, encouraging community contributions and inspection.
A first application of the evolution code will be presented, unveiling the intrinsic charm content of the proton.
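As a rough sketch of the PDF-independent-operator idea (illustrative only: the grid, kernel and function names below are invented for this example and are not the EKO or yadism API):

import numpy as np

# Conceptual sketch: once an evolution operator E on a fixed x-grid has been
# computed and stored, evolving any PDF reduces to a matrix-vector product,
# so the expensive step is done once and reused for every fit iteration.

x_grid = np.logspace(-4, 0, 50)  # interpolation grid in x

def toy_operator(n):
    # Stand-in for a precomputed DGLAP evolution operator on the x-grid;
    # a real operator encodes the DGLAP solution, here we only mimic the
    # data flow with a smoothing kernel.
    i = np.arange(n)
    return np.exp(-0.5 * (i[:, None] - i[None, :]) ** 2 / 4.0) / np.sqrt(8 * np.pi)

def evolve(pdf_at_q0, operator):
    # Apply the PDF-independent operator to any initial-scale PDF vector.
    return operator @ pdf_at_q0

E = toy_operator(len(x_grid))                # computed once, written to file
gluon_q0 = x_grid ** -0.2 * (1.0 - x_grid) ** 5  # any input parametrization
gluon_q = evolve(gluon_q0, E)                # cheap per-iteration evaluation
print(gluon_q[:3])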
The LHeC and the FCC-he are the cleanest, high resolution microscopes that the world can build in the nearer future. Through a combination of neutral and charged currents and heavy quark tagging, they will unfold the parton structure of the proton with full flavour decomposition and unprecedented precision. In this talk we will present the most recent studies on the determination of proton parton densities. We will also present the results on the determination of the strong coupling constant through the measurement of total and jet cross sections. Finally, we will also comment on diffraction, both inclusive and exclusive, as a tool to get more differential information on the proton.
Reference: P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Precision measurements of the production cross-sections of W and Z bosons at the LHC provide important tests of perturbative QCD and information about the parton distribution functions of quarks within the proton. This talk will present recent differential Z+jets results in extreme phase spaces with high-$p_\mathrm{T}$ jets. The measurement is compared to state-of-the-art NNLO theoretical predictions. If available, we will also present measurements of Z decays to a pair of leptons and a photon, which are a sensitive test of the kinematics of final-state QED radiation.
The large amount of data collected by the CMS experiment at the CERN LHC provides unprecedented opportunities for precision measurements of the standard model, which allow an accurate validation of the theory and might potentially reveal hints of new physics. Thanks to their leptonic decays, W and Z bosons guarantee a clean final state, and their relatively high production cross section permits the measurement of their properties with low systematic uncertainties and usually negligible statistical uncertainty. This talk presents an overview of recent precision measurements of electroweak bosons' properties and cross sections carried out by CMS using Run 2 data. In addition, prospects for future physics results expected from the High-Luminosity phase of the LHC, fostered by the planned detector upgrade, are also discussed.
The LHC produces a vast sample of top quark pairs and single top quarks. Measurements of the inclusive top quark production rates at the LHC have reached a precision of several percent and test advanced Next-to-Next-to-Leading Order predictions in QCD. Differential measurements in several observables are important to test SM predictions and improve Monte Carlo generator predictions. In this contribution, comprehensive measurements of top-quark-antiquark pair and single-top-quark production are presented that use data recorded by the ATLAS experiment in the years 2015-2018 during Run 2 of the LHC. A recent result from the 5 TeV operation of the LHC is also included.
Recent measurements of inclusive and differential top-quark-pair production cross sections are presented, using data collected by the CMS detector. The differential cross sections are measured multi-differentially as a function of various kinematic observables of the top quarks, jets, and leptons in the event final state. Results are compared to precise theory calculations, including, for the first time, MiNNLO+PS.
The LHCb experiment covers the forward region of proton-proton collisions, and it can improve the current electroweak landscape by studying W and Z bosons in this phase space, complementary to ATLAS and CMS. Thanks to the excellent detector performance, fundamental parameters of the Standard Model can be precisely measured by studying the properties of the electroweak bosons. In this talk an overview of the wide LHCb electroweak measurement program will be presented. This includes the measurement of the W-boson mass and of the $Z \rightarrow \mu^+ \mu^-$ angular coefficients.
The Precision Proton Spectrometer (PPS) is a subdetector of CMS introduced for the LHC Run 2, which provides a powerful tool for advancing BSM searches. The talk will discuss the key features of proton reconstruction (PPS alignment and optics calibrations), validation chain with physics data (using exclusive dilepton events), and finally the new results on exclusive diphoton, ttbar, Z+X, and diboson production explored with PPS will be presented, illustrating the unique sensitivity which can be achieved using proton tagging.
The Compact Linear Collider (CLIC) collaboration has presented a project implementation plan for the construction of a 380 GeV e+e- linear collider 'Higgs and top factory' for the era beyond the HL-LHC, upgradable in stages to 3 TeV. The CLIC concept is based on high-gradient normal-conducting accelerating structures operating at X-band (12 GHz) frequency. Towards the next European Strategy Update a Project Readiness Report will be prepared, and the main studies feeding into this report will be presented.
We present the CLIC accelerator concept and the latest status of the project design and performance goals. Updated studies of the luminosity performance have made it possible to consider an increased luminosity for the 380 GeV stage; studies are ongoing for further improvements.
We report on high-power tests of X-band structures using test facilities across the collaboration, as well as CLIC system verification studies and the technical development of key components of the accelerator. Key elements are the X-band components, and accelerator components important for nano beam performances.
We also present developments in the application of X-band technology to more compact accelerators for numerous uses, e.g. X-ray FELs and medicine. A rapidly increasing number of installations are taking the technology into use, providing important design, testing and verification opportunities and motivating industrial developments.
Finally, the many efforts to make CLIC a sustainable accelerator with minimal power and energy consumption will be described. Design optimisation, RF power-efficiency improvements and the development of low-power components will result in a 380 GeV installation operating at around 50% of CERN's present energy consumption.
At the Superconducting RF Test Facility (STF) of the High Energy Accelerator Research Organization (KEK), cool-down tests of the STF-2 cryomodules and beam operations have been carried out since 2019.
The STF-2 cryomodules are of the same type as those for the International Linear Collider (ILC). In beam operation so far, the average accelerating gradient of the 9 cavities reached 33 MV/m, which satisfies the ILC specification (31.5 MV/m). This is an important milestone in demonstrating the technology needed to realize the ILC.
Since anomalous emittance growth downstream of the accelerating cavities had been seen in the previous beam operation in April 2021, we inspected the inside of the cavities by eye and confirmed that there was no obstacle that could have been the source of this emittance growth. After checking that the accelerating cavities performed almost the same as in the previous beam operation, we investigated various other candidates that could cause this anomalous emittance growth during beam operation.
Towards a long-pulse (740 us), high-current (5.8 mA) beam matching the ILC specification, beam operation with a pulse length of about 100 us was demonstrated without loss. By implementing feedforward control to suppress the accelerating-gradient drop due to beam loading, we could perform successful beam operation without loss. This is an important result for beam operation with a pulse length equivalent to the ILC specification at STF-2.
We will present the outline of the cool-down test and the beam operation at STF.
The machine-detector interface (MDI) issues are among the most complicated and challenging topics at the Circular Electron Positron Collider (CEPC). A comprehensive understanding of the MDI issues is decisive for achieving the optimal overall performance of the accelerator and detector. The CEPC machine will operate at different beam energies, from 45.5 GeV up to 120 GeV, with an instantaneous luminosity increasing from $3 \times 10^{34}$ cm$^{-2}$s$^{-1}$ for the highest energy to $3.2\times10^{35}$ cm$^{-2}$s$^{-1}$ or even higher for the lowest energy.
A flexible interaction-region design is required to accommodate the large beam-energy range. The design has to provide the high luminosity desirable for physics studies while keeping the radiation backgrounds tolerable for the detectors; this requires a careful balance of the requirements from the accelerator and detector sides.
In this talk, the latest design of the CEPC MDI, based on the current CEPC accelerator and detector designs and parameters, will be presented:
1. The design of the beam pipe will be presented, which must satisfy several constraints. In the central region (z = ±10 cm) it should be placed as close as possible to the interaction point and with minimal material budget, to allow the precise determination of track impact parameters, yet it must stay far enough away from the beams not to be affected by beam backgrounds. The materials and coolants must be carefully chosen based on heat-load calculations. In the forward region, the beam pipe must be made of suitable materials to conduct away the heat deposited in the interaction region and to shield the detectors from beam backgrounds.
2. The estimation and mitigation of beam-induced backgrounds will be presented. A detailed simulation covering the main contributions from synchrotron radiation, pair production, and off-momentum beam particles has been performed, and suppression/mitigation schemes have been studied.
3. The flexible layout of the CEPC IR and the engineering efforts on several key components, such as the position of the LumiCal, the design of the final-focusing system, and the cryostat chamber, will be presented.
We will also discuss our future plans towards the CEPC TDR.
The Future Circular electron-positron Collider, FCC-ee, is designed to deliver unprecedented precision for particle physics experiments from the Z pole to above the top-pair threshold. This demands precise knowledge of the centre-of-mass energy (ECM) and collision boosts at all four interaction points and all operating energies. The average beam energies are foreseen to be determined using resonant depolarization, with a precision better than 100 keV, which demands transversely polarized non-colliding pilot bunches. While wigglers are foreseen to improve the polarization time, misalignment and field errors can limit the achievable polarization level and might alter the relationship between the resonant depolarization frequency and the beam energies. Strong synchrotron-radiation losses, from 40 MeV per turn at the Z pole up to 10 GeV per turn at the highest beam energy of 182.5 GeV, lead to different ECM values and boosts at each interaction point, and beamstrahlung enhances this asymmetry further. Other sources of energy shifts stem from collision offsets and should be controlled. A first evaluation was made in 2019 for the European Strategy; further studies are ongoing in the framework of the Feasibility Study to be delivered in 2025. First promising results on energy calibration and polarization are presented here.
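For orientation, resonant depolarization exploits the proportionality between the spin tune $\nu_s$ and the average beam energy $E_b$, which for electrons reads
$$\nu_s = a_e \gamma = \frac{a_e E_b}{m_e c^2} \simeq \frac{E_b}{0.4406486\ \mathrm{GeV}},$$
with $a_e$ the electron anomalous magnetic moment; misalignments and field errors can shift this relation, which is one of the systematic effects under study.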
In the context of the FCCIS European study, which investigates the feasibility of a 100 km circular $e^{+}e^{-}$ collider for future high-energy physics research, we present the status of the High Energy Booster (HEB) ring for the proposed $e^{+}e^{-}$ option. The HEB is the ring accelerating the electrons and positrons up to the nominal energy before injection into the collider. In order to perform precision measurements of the Z, W and H bosons, as well as of the top quark, unprecedented luminosities are required. To reach this goal and to fill the collider, it is mandatory to continually top-up inject the beams with an emittance comparable to that of the collider beams and with a bunch-charge variation below a few $\%$.
The main challenges for the HEB are the rapid cycling time, which must allow the collider equilibrium emittance to be reached, and the minimum beam energy injected into the booster that still allows stable operation.
From the ring-optics point of view, one issue is that the final energy in the booster depends on the collider physics case: the optimum optics for one energy case may differ from that for another. For the low final energies (Z, W), the characteristic time to reach the equilibrium emittance may be greater than the cycling time.
The other challenge is the injection energy. At injection, the dipole magnetic field is so low that the field quality is hardly reproducible from one cycle to another.
We present the status of the optics design of the HEB, and the impact of the magnetic field imperfections on the dynamic aperture at injection.
The high luminosity foreseen in the future electron-positron circular collider (FCC-ee) necessitates very intense multi-bunch colliding beams with very small transverse beam sizes at the collision points. This requires emittances comparable to those of modern synchrotron light sources, while the stored beam currents should be close to the best values achieved in the last generation of particle factories. This combination of opposing requirements represents a major challenge for preserving high beam quality while avoiding degradation of machine performance. Consequently, a careful study of collective effects and of solutions for mitigating the foreseen instabilities is required. In this contribution we discuss the current status of these studies.
The LiteBIRD satellite (Lite satellite for the study of B-mode polarization and Inflation from cosmic background Radiation Detection) will perform the final measurement of the Cosmic Microwave Background polarization anisotropies on large and intermediate angular scales. Its sensitivity and wide frequency coverage in 15 bands will allow unprecedented accuracy in the measurement and foreground cleaning of the B-mode polarization signal, and a cosmic-variance-limited measurement of the E-mode polarization. Such measurements will have deep implications for cosmology and fundamental physics. The determination of the energy scale of inflation and the constraints on its dynamics from the B-mode polarization will shed light on one of the most important phases in the history of the Universe and the fundamental physics it implies. LiteBIRD measurements will deepen our knowledge of reionization, reducing the largest uncertainty in post-Planck standard cosmology, and will allow the exploration of some of the main targets of cosmology, such as large-scale anomalies, parity-violating phenomena such as cosmic birefringence, and magnetism in the early Universe. I will describe the LiteBIRD mission and detail its expected scientific outcomes.
We discuss the imprints of a cosmological redshift-dependent pseudoscalar field on the rotation of Cosmic Microwave Background (CMB) linear polarization generated by a coupling $ g_\phi \phi F^{\mu\nu} \tilde F_{\mu \nu}$.
We show how either phenomenological or theoretically motivated redshift dependences of the pseudoscalar field, such as those in models of Early Dark Energy, Quintessence or axion-like dark matter, lead to CMB polarization and temperature-polarization power spectra with a multipole dependence that goes beyond the widely adopted approximation in which the redshift dependence of the linear polarization angle is neglected. Because of this multipole dependence, the isotropic birefringence effect due to a general coupling $\phi F^{\mu\nu} \tilde F_{\mu \nu}$ is not degenerate with a multipole-independent polarization rotation angle, which could instead be connected to a systematic miscalibration angle. Taking the multipole dependence into account, we calculate the parameters of these phenomenological and theoretical redshift dependences of the pseudoscalar field that can be detected by future CMB polarization experiments, on the basis of a $\chi^2$ analysis for a Wishart likelihood.
As a final example of our approach, we compute by MCMC the minimal coupling $g_\phi$ in Early Dark Energy which could be detected by future experiments with or without marginalizing on a constant rotation angle.
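For comparison, in the widely adopted constant-angle approximation mentioned above, a multipole-independent rotation $\alpha$ acts on the primordial spectra as
$$C_\ell^{EE,\mathrm{obs}} = C_\ell^{EE}\cos^2 2\alpha + C_\ell^{BB}\sin^2 2\alpha, \qquad C_\ell^{BB,\mathrm{obs}} = C_\ell^{EE}\sin^2 2\alpha + C_\ell^{BB}\cos^2 2\alpha,$$
$$C_\ell^{EB,\mathrm{obs}} = \tfrac{1}{2}\left(C_\ell^{EE} - C_\ell^{BB}\right)\sin 4\alpha;$$
a redshift-dependent pseudoscalar replaces these trigonometric factors with $\ell$-dependent ones, which is what breaks the degeneracy with a miscalibration angle.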
Parity-violating extensions of Maxwell electromagnetism induce a rotation of the linear-polarization plane of photons during propagation. This effect, known as cosmic birefringence, impacts Cosmic Microwave Background (CMB) observations by producing a mixing of $E$ and $B$ polarization modes which is otherwise null in the standard scenario. Such an effect is naturally parametrized by a rotation angle which can be written as the sum of an isotropic component $\alpha_0$ and an anisotropic one $\delta\alpha(\hat{\mathbf{n}})$. We have computed angular power spectra and bispectra involving $\delta\alpha$ and the CMB temperature and polarization maps. In particular, contrary to what happens for the cross-spectra, we have shown that even in the absence of primordial cross-correlations between the anisotropic birefringence angle and the CMB maps, there exist non-vanishing three-point correlation functions carrying signatures of parity-breaking physics. Furthermore, we find that such angular bispectra survive in a regime of purely anisotropic cosmic birefringence. These bispectra represent an additional observable for studying cosmic birefringence and its parity-violating nature beyond power-spectrum analyses. Moreover, we have estimated that, among all the possible birefringent bispectra, $\langle\delta\alpha\, TB\rangle$ and $\langle\delta\alpha\,EB\rangle$ are the ones containing the largest signal-to-noise ratio. Taking the cosmic-birefringence signal to be at the level of current constraints, we show that these bispectra are within reach of future CMB experiments, such as LiteBIRD.
In this talk, I will present a Neural Network-improved version of DarkHistory, a code package that self-consistently computes the early universe temperature, ionization levels, and photon spectral distortion due to exotic energy injections. We use simple multilayer perceptron networks to store and interpolate complicated photon and electron transfer functions, previously stored as large tables. This improvement allows DarkHistory to run on small computers without heavy memory and storage usage while preserving the physical predictions to high accuracy. It also enables us to explore adding more parametric dependence in the future to include additional physical processes and spatial resolution.
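As a toy illustration of the table-to-network swap described above (a sketch with invented grids and a made-up target function, not the DarkHistory transfer functions):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Emulate a tabulated transfer function T(log10 E, z) with a small MLP so the
# dense table no longer needs to be held in memory; the network is trained
# once offline and then queried like an interpolator.
rng = np.random.default_rng(0)
log_E = rng.uniform(0.0, 4.0, 20000)   # toy injected-energy grid (log10)
z = rng.uniform(0.0, 3.0, 20000)       # toy redshift range
X = np.column_stack([log_E, z])
y = np.exp(-0.5 * (log_E - 2.0) ** 2) / (1.0 + z)  # stand-in for table entries

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(X, y)  # offline training replaces storing the full table

# At run time, a table lookup plus multilinear interpolation becomes a predict:
print(mlp.predict([[2.0, 1.0]]))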
We study the production of relativistic relics, also known as dark radiation, in the early Universe and precisely compute their present-day contribution to the extra number of effective neutrinos. One dark-radiation candidate is the QCD axion produced from the primordial bath in the early Universe. We consider the KSVZ and DFSZ axion models and investigate axion production at different scales. The dark radiation from the QCD axion leaves an imprint on the observed cosmic microwave background that can be measured by the CMB-S4 experiment.
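For reference, a relic with energy density $\rho_a$ contributes to the effective number of neutrino species, in the standard convention, as
$$\Delta N_{\mathrm{eff}} = \frac{8}{7}\left(\frac{11}{4}\right)^{4/3} \frac{\rho_a}{\rho_\gamma},$$
where $\rho_\gamma$ is the photon energy density; it is this quantity that experiments such as CMB-S4 are designed to constrain.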
Electric charge quantization is a long-standing question in particle physics. While fractionally charged particles (millicharged particles hereafter) have typically been thought to preclude the possibility of Grand Unified Theories (GUTs), well-motivated dark-sector models have been proposed that predict the existence of millicharged particles while preserving the possibility of unification. Such models can contain a rich internal structure, providing candidate particles for dark matter. A number of experiments have searched for millicharged particles ($\chi$s), but in the parameter space of charge ($Q$) and mass ($m_\chi$), the region $m_\chi > 0.1$ GeV/$\rm{c}^2$ and $Q < 10^{-3}e$ is largely unexplored.
The SUB-Millicharge ExperimenT (SUBMET) has been proposed to search for sub-millicharged particles using 30 GeV proton fixed-target collisions at J-PARC. The detector is composed of two layers of stacked scintillator bars and PMTs, and is proposed to be installed 280 m from the target. The main background is expected to be random coincidences between the two layers due to dark counts in the PMTs, which can be reduced significantly using the timing of the proton beam. With $\rm{N}_{\rm POT} = 5 \times 10^{21}$, the experiment is sensitive to $\chi$s with charge down to $7\times10^{-5}e$ for $m_\chi < 0.2$ GeV/$\rm{c}^2$ and down to $10^{-3}e$ for $m_\chi < 1.6$ GeV/$\rm{c}^2$. This is a regime largely uncovered by previous experiments.
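A back-of-the-envelope version of that timing argument, using the textbook accidental-coincidence rate $R_{\rm acc} = 2\tau R_1 R_2$ (every number below is an assumption for illustration, not a SUBMET design value):

# Accidental two-layer coincidences from PMT dark counts, and the reduction
# from gating on the proton-beam timing; all inputs here are assumed values.
tau = 10e-9    # coincidence window in seconds (assumed)
r1 = r2 = 1e3  # effective dark-count rate of each layer in Hz (assumed)
duty = 1e-5    # fraction of time the beam-timing gate is open (assumed)

r_acc = 2 * tau * r1 * r2  # textbook accidental-coincidence rate
print(f"untimed: {r_acc:.2e} Hz, beam-gated: {r_acc * duty:.2e} Hz")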
The Heavy Photon Search (HPS) experiment was conceived to search for a light new vector boson A' that is kinetically mixed with the photon, with a kinetic-mixing parameter $\varepsilon^2 > 10^{-10}$. A vector boson with a mass in the 20-220 MeV/c$^2$ range could also mediate interactions between the Standard Model and light thermal dark matter. HPS searches for visible signatures of heavy photons in electroproduction reactions on a fixed tungsten target, exploiting the electron beam provided by the JLab CEBAF machine, which can reach a maximum energy of 12 GeV. These studies of the low-mass region complement the exploration of weakly coupled (and possibly new) physics presently performed at the LHC and other high-energy machines.
The HPS search is based on a two-fold approach. First, owing to their small coupling to the electric charge, heavy photons should be produced in bremsstrahlung-like processes and could therefore be observed by HPS in their e+e- decay channel over a huge QED background. Second, HPS can also perform precise decay-length measurements, which provide sensitivity to long-lived bosons with small couplings.
After the completion of two engineering runs in 2015 and 2016, HPS is now running at full steam, with the analysis of the datasets collected in 2016, 2019 and 2021 presently ongoing.
In this talk, an overview of the results achieved so far by HPS will be presented.
Today the investigation of the nature of dark matter, its origin, and the way it interacts with ordinary matter plays a crucial role in fundamental science. Several particle-physics experiments at accelerators are searching for hidden-particle signals, contributing increasingly stringent limits on the characteristics of dark matter.
The Positron Annihilation into Dark Matter Experiment (PADME), ongoing at the Laboratori Nazionali di Frascati of INFN, is looking for hidden-particle signals by studying the missing-mass spectrum of single-photon final states resulting from positron annihilation on the electrons of a fixed target. PADME is expected to reach a sensitivity of up to 10$^{-6}$ on $\epsilon^2$ (the kinetic-mixing coefficient), which represents the coupling of a low-mass dark photon ($m < 23.7$ MeV) to ordinary photons.
By measuring the cross-section of the process e$^+$ e$^-$$\rightarrow \gamma \gamma$ at $\sqrt{s} = 21$ MeV and comparing it with the SM expectation, it is also possible to set limits on hidden-particle decays to photon pairs. In this talk, details of the PADME measurement of the two-photon annihilation cross-section will be presented, together with its implications for dark matter studies.
We report on a search for visible decays of exotic mediators in data taken in "beam-dump" mode with the NA62 experiment. NA62 can be run as a "beam-dump experiment" by removing the kaon production target and moving the upstream collimators into a "closed" position. In 2021, more than 10^17 protons on target were collected in this way during a week-long data-taking campaign. Based on past experience, the upstream beam-line magnets were configured so as to sizeably reduce the background induced by 'halo' muons. We report the results of the analysis of these data, with particular emphasis on dark photon models.
The search for Dark Matter (DM) is one of the hottest topics in modern physics. Despite the various astrophysical and cosmological observations proving its existence, its elementary properties remain unknown to date. In addition to gravity, DM could interact with ordinary matter through a new force, mediated by a new vector boson (dark photon, heavy photon or A'), kinetically mixed with the Standard Model photon. The NA64 experiment at CERN fits into this scenario, aiming to produce DM particles using the 100 GeV SPS electron beam impinging on a thick active target (an electromagnetic calorimeter). In this setup the DM production signature consists of large observed missing energy, defined as the difference between the energy of the incoming electron and the energy measured in the calorimeter, coupled with null activity in the downstream veto systems. Recently, following the growing interest in positron-annihilation mechanisms for DM production, the NA64 collaboration has performed preliminary studies with the aim of running the experiment with a positron beam, as planned within the POKER (POsitron resonant annihilation into darK mattER) project.
This talk will present the latest NA64 results and future prospects, reporting on the progress towards the positron-beam run and discussing the sensitivity of the experiment to DM models alternative to the dark photon.
BESIII has collected 2.5 billion $\psi(2S)$ events and 10 billion $J/\psi$ events. This huge data sample provides an excellent opportunity to search for new physics. We report the search for the decay $J/\psi\to\gamma + \text{invisible}$, which is predicted by the next-to-minimal supersymmetric model. We also report the first search for the invisible decay of the $\Lambda$, which is predicted by the mirror-matter model and could explain the $4\sigma$ discrepancy in neutron-lifetime measurements between the beam and bottle methods. A light Higgs boson $A^0$ is also searched for in radiative $J/\psi$ decays.
Hidden particles can help explain many important hints for new physics, but the large variety of viable hidden-sector models poses a challenge for the model-independent interpretation of hidden-particle searches. We present techniques, published in 2105.06477 and 2203.02229, that can be used to compute model-independent rates for hidden-sector-induced transitions. Adapting an effective field theory (EFT) approach, we develop a framework for constructing portal effective theories (PETs) that couple standard model (SM) fields to generic hidden particles. We also propose a method to streamline the computation of hidden-particle production rates by factorizing them into i) a model-independent SM contribution, and ii) an observable-independent hidden-sector contribution. Showcasing these techniques, we compute a model-independent transition rate for charged-kaon decays into a charged lepton and an arbitrary number of hidden particles. By factorizing the rate, a single form factor is found to parametrize the impact of general hidden sectors. This is used to re-interpret an existing search for heavy neutral leptons (HNLs) in NA62 data, which yields model-independent constraints on the rate of producing arbitrary hidden particles.
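Schematically (our notation here, not necessarily that of the cited papers), the factorized kaon rate takes the form
$$\frac{d\Gamma}{dq^2}\big(K^+ \to \ell^+ + X_{\mathrm{hid}}\big) = S_{\mathrm{SM}}(q^2)\, F_{\mathrm{hid}}(q^2),$$
where $q^2$ is the invariant mass squared carried off by the hidden system, $S_{\mathrm{SM}}$ is computable within the SM alone, and the single form factor $F_{\mathrm{hid}}$ encodes the hidden sector.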
The nature of Dark Matter (DM) is one of the greatest puzzles of modern particle physics and cosmology. Dark Matter characterisation requires a systematic and consistent approach to the DM theory space. We propose a first complete classification of minimal consistent Dark Matter models, which provides the missing link between experiments and top-down models. Consistency is achieved by imposing renormalisability and invariance under the full Standard Model symmetries. We apply this paradigm to fermionic dark multiplets with up to one mediator. Our work highlights the presence of unexplored viable models and paves the way for the ultimate systematic hunt for the Dark Matter particle. Based on e-Print: 2203.03660.
Dark sectors containing light vectors or scalars may feature sizeable self-interactions between dark matter (DM) particles and are therefore of high phenomenological interest. Self-interacting dark matter appears to reproduce the observed galactic structure better than collisionless DM and may offer a dynamical explanation for the scaling relations governing galactic halos all the way up to clusters of galaxies. On top of being desirable from the phenomenological and observational points of view, the possibility of a richer dark sector, comprising more than one particle, is fairly common in many DM models.
Furthermore, the existence of light mediators, i.e. mediators with masses much smaller than that of the actual DM particles, may affect the DM dynamics in multiple ways.
Most notably, whenever DM particles are slowly moving with non-relativistic velocities, light mediators can induce bound states in the dark sector in the early universe and/or in the dense environment of present-day haloes. As for the above-threshold states, the effect of repeated mediator exchange manifests itself in the so-called Sommerfeld enhancement for an attractive potential.
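For reference, in the Coulomb limit of an attractive potential with coupling $\alpha$, the Sommerfeld factor for relative velocity $v$ takes the familiar form
$$S(v) = \frac{\pi\alpha/v}{1 - e^{-\pi\alpha/v}},$$
which enhances annihilation rates at small velocities; a finite mediator mass cuts off the enhancement at sufficiently low $v$ and opens the possibility of bound states.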
In this talk we review state-of-the-art effective-field-theory techniques, both at zero and at finite temperature, that allow for the determination of the rates crucial for an accurate prediction of the DM energy density: bound-state formation and dissociation, pair annihilation and bound-state decays. Depending on the model, bound-state effects can be substantial, and rather different combinations of DM masses and couplings are then found to reproduce the observed energy density. This calls for a reassessment of DM phenomenology, driven by the interplay between the model parameters that fix the relic density and guide the experimental strategies.
We address various DM models, comprising the case of QCD-colored co-annihilating partners as well as fermionic and scalar DM with self-interactions induced by different mediators (scalar, pseudoscalar, vector and axial vector). We explore and report on the present reach of complementary experimental searches, including the LHC and XENON, and on future prospects for the DARWIN experiment and the Cherenkov Telescope Array (CTA).
Based on the example of the currently widely studied t-channel simplified model with a colored mediator, I will demonstrate the importance of considering non-perturbative effects, such as the Sommerfeld effect and bound-state formation, for accurately predicting the relic abundance and hence correctly inferring the viable model parameters. For instance, I will highlight that parameter space thought to be excluded by direct-detection experiments and LHC searches remains viable, and illustrate that long-lived-particle searches and bound-state searches at the LHC can play a crucial role in probing such a model. Finally, I will demonstrate how future direct-detection experiments will be able to close almost all of the remaining windows for freeze-out production, making it a highly testable scenario.
New ''dark'' fermionic fields charged under a confining dark group ($\text{SU}(N)$ or $\text{SO}(N)$) can come as embeddings in SU(5) multiplets to explain dark matter (DM). These fermions would form bound states due to the confining nature of the dark gauge group, and such dark baryons could be a good neutral DM candidate, stable thanks to a dark baryon number. The DM relic abundance sets the dark confinement scale to $\mathcal{O}(100)\,\text{TeV}$. Previous works require the mass $m$ of the light fields forming the baryonic DM (with $m < \Lambda_{\text{DC}}$, the dark confinement scale) to be far below the unification (GUT) scale, under the assumption that their GUT partners in $\text{SU}(5)$ representations have GUT-scale masses. In our work, focusing on the role of these heavy GUT states, we find that the dark fermions cannot come in almost degenerate GUT multiplets.
We further find that cosmological constraints from Big Bang Nucleosynthesis in addition to unification requirements allow for only certain values of masses for these GUT fermions.
However, these mass values give too large a contribution to the DM relic abundance.
To evade this, the mass of the GUT states must be lower than the reheating temperature.
In general, we find that the heavy dark GUT states impact both the cosmological evolution and grand unification. Our study clarifies under which conditions both aspects of the theory are realistic.
We examine the dynamics of quarks and gauge fields in QCD and QED
interactions in the lowest energy states with approximate cylindrical
symmetry, in a flux tube model. Using the action integral, we
separate out the (3+1)D in terms of the transverse and the
longitudinal degrees of freedom and solve the resultant equations of
motion. We find that there may be localized and stable states of QCD
and QED collective $q \bar q$ excitations, showing up as particles whose masses
depend on the QCD and QED coupling constants and the flux-tube radius [1].
Along with known standard stable collective QCD excitations of the
quark-QCD-QED system, there may be stable QED collective $q\bar q$ excitations, which can be
good candidates for the X17 particle [2], the E38 particle [3], and anomalous soft photons [4,5] observed recently in
the region of many tens of MeV, as discussed in [6].
[1] A. Koshelkin and C. Y. Wong, {\it Dynamics of quarks and gauge fields in the lowest-energy states in QCD and QED}, arXiv:2111.14933.
[2] A. J. Krasznahorkay et al., {\it Observation of anomalous internal pair creation in $^8$Be: a possible indication of a light, neutral boson}, Phys. Rev. Lett. 116, 042501 (2016), arXiv:1504.01527.
[3] K. Abraamyan et al., {\it Check of the structure in photon pairs spectra at the invariant mass of about 38 MeV}, EPJ Web of Conferences 204, 08004 (2019).
[4] A. Belogianni et al. (WA102 Collaboration), {\it Observation of a soft photon signal in excess of QED expectations in $pp$ interactions}, Phys. Lett. B548, 129 (2002).
[5] J. Abdallah et al. (DELPHI Collaboration), {\it Evidence for an excess of soft photons in hadronic decays of Z$^0$}, Eur. Phys. J. C47, 273 (2006), arXiv:hep-ex/0604038.
[6] C. Y. Wong, {\it Open string QED meson description of the X17 particle and dark matter}, JHEP 08 (2020) 165, arXiv:2001.04864.
We suggest a new class of models, Fermionic Portal Vector Dark Matter (FPVDM), which extends the Standard Model (SM) with an $SU(2)_D$ dark gauge sector. FPVDM requires neither kinetic mixing nor a Higgs portal; it is based on a vector-like (VL) fermionic doublet which couples the dark sector to the SM sector through Yukawa interactions. The FPVDM model provides a vector dark matter (DM) candidate whose $Z_2$-odd parity ensures its stability. Multiple realisations are allowed depending on the VL partner and the scalar potential. In this talk we discuss an example of a minimal FPVDM realisation with only a VL top partner and no mixing between the SM and new scalar sectors, and present the model's implications for DM direct and indirect detection experiments, relic density and collider searches.
The Standard Model effective field theory (SMEFT) is one of the preferred approaches for studying particle physics in the present scenario. The dimension-six SMEFT operators are the most relevant ones and have been studied in various works; their renormalization-group evolution equations are available in the literature and make it possible to confront the SMEFT with combined experimental information gathered across different energy scales. However, the dimension-six operators are not the dominant contribution for all observables, and some of them are only loop-generated when UV theories are matched to the SMEFT. Moreover, for relatively low values of the SMEFT cut-off scale, contributions from dimension-eight operators cannot be neglected.
In this work, we present the renormalization of the bosonic sector of the dimension-eight operators by the dimension-eight operators generated at tree level in the matching of weakly coupled UV theories to the SMEFT. These operators appear in the positivity constraints, which fix the signs of certain combinations of Wilson coefficients based on the unitarity and analyticity of the S-matrix. Such constraints are remarkably significant, as any experimental evidence of their violation would indicate the invalidity of the EFT approach, for example through the existence of lighter degrees of freedom below the cut-off scale of the EFT. These restrictions can also be taken into account when defining priors for fits aiming to constrain the SMEFT parameter space.
Due to large scale separations, matching is an essential and laborious computational step in the comparison of high-energy new physics models to experimental data.
Matchete is a Mathematica package that automates the one-loop matching from any generic ultraviolet (UV) model to a low-energy effective field theory (EFT) including, but not limited to, SMEFT. The program takes a UV Lagrangian as input, integrates out heavy degrees of freedom using functional methods, and returns the EFT Lagrangian. The output is further reduced to a minimal basis using Fierz identities, integration by parts, simplification of Dirac and group structures, and field redefinitions.
After reviewing the theory of functional matching, I will demonstrate the capabilities of the package with a concrete example.
I would like to present an intriguing new perspective on such fundamental questions as 1) the origin of the gauge interactions in the Standard Model (SM), and 2) the origin of the replication of the quark, lepton and neutrino families and of their fundamental properties observed in Nature. These questions can be addressed by tying together in a common framework both flavour physics and Grand Unification, which are typically treated on different footings. Furthermore, I will elaborate on New Physics scenarios that are expected to emerge at phenomenologically relevant energy scales as by-products of the Trinification-based flavoured GUT, which naturally explains neutrino masses and the observed hierarchies in the fermion sectors of the SM, as well as the emergence of the observed flavour anomalies.
In this talk, we present the construction of Effective Field Theories (EFTs) in which a chiral fermion, charged under both gauge and global symmetries, is integrated out. These symmetries can be spontaneously broken, and the global ones might also be anomalous. This setting typically serves to study the structure of low-energy axion EFTs, where the anomalous global symmetry can be $U(1)_{PQ}$ and the local symmetries can be the SM electroweak chiral gauge symmetries. Spontaneous symmetry breaking generates Goldstone bosons while, at the same time, the chiral fermions become massive. In this setup, we emphasise that the derivative couplings of the Goldstone bosons to the fermions lead to severe divergences and ambiguities in one-loop computations.
In this talk, we first present the path-integral formalism for building the EFTs that result from integrating out massive chiral fermions. Secondly, within this functional formalism, we show how to resolve the ambiguities by adapting the anomalous Ward identities to the EFT context, thus enforcing gauge invariance of the results. Our methodology provides a generic and consistent result for the Wilson coefficients of EFT operators involving the axion and gauge bosons. Finally, we apply our technique to axion models and compute non-intuitive couplings between the axion and the massive SM gauge fields that arise when decoupling massive chiral fermions.
Reference: arXiv:2112.00553 (https://inspirehep.net/literature/1981947)
We present a novel benchmark application of a quantum algorithm to Feynman loop integrals. The two on-shell states of a Feynman propagator are identified with the two states of a qubit and a quantum algorithm is used to unfold the causal singular configurations of multiloop Feynman diagrams. To identify such configurations, we exploit Grover's algorithm for querying multiple solutions over unstructured datasets, introducing a suitable modification to deal with topologies in which the number of causal states to be identified is nearly half of the total number of states. The output of the quantum algorithm in IBM Quantum and QUTE Testbed simulators is used to bootstrap the causal representation in the loop-tree duality of representative multiloop topologies.
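A minimal classical simulation of the Grover amplification step (illustrative only: the marked set below is arbitrary, and the paper's modification for marked fractions near one half is not reproduced here):

import numpy as np

# Grover search over n-qubit basis states: phase-flip the marked ("causal")
# states, then invert all amplitudes about the mean, repeated k times.
n = 5
N = 2 ** n
marked = set(range(12))  # pretend 12 of the 32 states are causal

state = np.full(N, 1.0 / np.sqrt(N))  # uniform superposition
oracle = np.where(np.isin(np.arange(N), list(marked)), -1.0, 1.0)

theta = np.arcsin(np.sqrt(len(marked) / N))
k = max(int(np.round(np.pi / (4 * theta) - 0.5)), 1)  # near-optimal count

for _ in range(k):
    state *= oracle                    # oracle: flip sign of marked states
    state = 2 * state.mean() - state   # diffusion: inversion about the mean

p_marked = sum(state[i] ** 2 for i in marked)
print(f"{k} iteration(s), P(marked) = {p_marked:.3f}")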
We consider Nielsen-Olesen vortices (abelian Higgs model in $2+1$ dimensions) under Einstein gravity in an AdS$_3$ background. We find numerically non-singular solutions characterized by three parameters: the cosmological constant $\Lambda$, the winding number $n$ and the vacuum expectation value (VEV) labeled by $v$. The mass (ADM mass) of the vortex is expressed in two ways: one involves subtracting the value of two metrics asymptotically and the other is expressed as an integral over matter fields. The latter shows that the mass has an approximately $n^2 v^2$ dependence and our numerical results corroborate this. We observe that as the magnitude of the cosmological constant increases the core of the vortex becomes slightly smaller and the mass increases. We then embed the vortex under gravity in a Minkowski background and obtain numerical solutions for different values of Newton's constant. There is a smooth transition from the non-singular origin to an asymptotic conical spacetime with angular deficit that increases as Newton's constant increases. We end by stating that the well-known logarithmic divergence in the energy of the vortex in the absence of gauge fields can be seen in a new light with gravity: it shows up in the metric as a $2+1$ Newtonian logarithmic potential leading to a divergent ADM mass.
In heavy-ion collisions, the quark-gluon plasma, a new state of matter in which quarks and gluons are no longer confined within hadrons, is created. High-energy partons created in the initial collision are observed to lose energy through interactions with the plasma. The details of how the energy is transported away from the partons are not fully understood and of great interest. Jet spectra measured with different resolution parameters are among the simplest observables, yet provide highly nontrivial insight. In this talk, we report new results on inclusive jet spectra from CMS with the latest high-statistics data, including anti-kT jet spectra spanning the widest range of resolution parameters ever employed in heavy-ion collisions. The accuracy of the result is greatly improved compared to the previous publication on large-area jets up to R = 1.0. These results shed light on the different mechanisms of parton interactions with the medium.
Several new features have recently been observed in high-multiplicity small collision systems that are reminiscent of observations attributed to the creation of a quark-gluon plasma (QGP) in Pb-Pb collisions. These include long-range angular correlations on the near and away side of two-particle correlations, non-vanishing second-order Fourier coefficients in multiparticle cumulant studies, and the baryon-to-meson ratio enhancement in high-multiplicity pp and p-Pb collisions. However, jet-quenching effects in small systems have not yet been observed, and quantifying or setting limits on the magnitude of jet quenching in small systems is a key element in understanding the limits of QGP formation. In this talk we present a search for jet-quenching effects in pp collisions as a function of event multiplicity, based on two jet observables: the inclusive $p_\mathrm{T}$-differential jet cross section, and the semi-inclusive yield of jets recoiling from a high-$p_\mathrm{T}$ hadron. Both measurements are carried out differentially in event multiplicity, which varies the size of the collision system. Jets are reconstructed from charged particles using the anti-$k_\mathrm{T}$ algorithm, and the $R$-dependent inclusive jet cross section is compared to pQCD calculations. To search for jet-quenching effects, the shape of the inclusive jet yield in different multiplicity intervals is compared to that obtained in minimum-bias (MB) events. The jet yield increases as a function of charged-particle multiplicity, similar to what is observed in the soft sector based on transverse spherocity. In the semi-inclusive analysis, the recoil-jet acoplanarity distributions are measured in high-multiplicity (HM) and MB events. The acoplanarity distributions in HM events exhibit a marked suppression and broadening compared to the corresponding distributions from MB events. Their origin is elucidated by comparison to model calculations, with potential implications for the larger LHC small-systems program.
In this work, we introduce both gluon and quark degrees of freedom for describing the partonic cascades inside the medium. We present numerical solutions of the set of coupled evolution equations, with splitting kernels calculated for static, exponential and Bjorken-expanding media, to arrive at medium-modified parton spectra for quark- and gluon-initiated jets, respectively. We discuss novel scaling features of the partonic spectra between different types of media. Next, we study the inclusive jet $R_{AA}$ by including phenomenologically driven combinations of quark and gluon fractions inside a jet. In addition, we have also studied the effect of the nPDF as well as of vacuum-like emissions on the jet $R_{AA}$. Differences among the estimated values of the quenching parameter for different types of medium expansion are noted. Further, the impact of the expansion of the medium on the rapidity dependence of the jet $R_{AA}$ as well as the jet $v_2$ is studied in detail. Finally, we present qualitative results comparing the sensitivity of these observables to the time of the onset of quenching for the Bjorken profile. All calculated quantities are compared with the recent ATLAS data.
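For orientation, the jet $R_{AA}$ referred to above is the standard nuclear modification factor, written here in a common convention (our notation, not quoted from the talk):

$$
R_{AA}(p_{\mathrm{T}}) \;=\; \frac{\mathrm{d}N_{AA}/\mathrm{d}p_{\mathrm{T}}}{\langle T_{AA}\rangle\,\mathrm{d}\sigma_{pp}/\mathrm{d}p_{\mathrm{T}}},
$$

where $\langle T_{AA}\rangle$ is the average nuclear overlap function; $R_{AA}=1$ in the absence of nuclear effects, and values below unity signal jet quenching.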
The sPHENIX detector at the BNL Relativistic Heavy Ion Collider (RHIC) is currently under construction and on schedule for first data in early 2023. Built around the BaBar superconducting solenoid, the central detector consists of a silicon pixel vertexer, a silicon strip detector with single-event timing resolution, a compact TPC, novel EM calorimetry, and two layers of hadronic calorimetry. The plan is to use the combination of electromagnetic calorimetry, hermetic hadronic calorimetry, precision tracking, and the ability to record data at high rates without trigger bias to make precision measurements of heavy flavor, Upsilon and jet production, probing the Quark Gluon Plasma (QGP) formed in heavy-ion collisions. These measurements will have a kinematic reach that not only overlaps that of the LHC measurements, but extends into a new, low-pT regime. sPHENIX will significantly expand the observables and kinematic reach of these measurements at RHIC and provide a comparison with the LHC measurements in the overlapping kinematic region. The physics program, its potential impact, and recent detector developments will be discussed in this talk.
The LHeC and the FCC-he will measure DIS cross sections and the partonic structure of protons and nuclei in an unprecedented range of small $x$. In this kinematic region, the non-linear dynamics expected in the high-energy regime of QCD should become relevant in a region of small coupling. In this talk we will demonstrate the unique capability of these high-energy colliders to unravel dynamics beyond fixed-order perturbation theory, proving (or disproving) the existence of the non-linear, saturation regime of QCD. This is enabled through simultaneous measurements, of similar high precision and range, of $ep$ and $eA$ collisions, which will eventually disentangle nonlinear parton-parton interactions from nuclear-environment effects.
Reference: P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Automated perturbative computations of cross sections for hard processes in asymmetric hadronic/nuclear $A+B$ collisions at next-to-leading order (NLO) in $\alpha_s$ will offer a wide range of applications, such as more robust predictions for new experimental programs, the phenomenology of heavy-ion collisions, and the interpretation of LHC and RHIC data. Such a goal can be achieved using MadGraph5_aMC@NLO [1], a well-established tool for the automatic generation of matrix elements and event generation for high-energy physics processes in elementary collisions, such as decays and ${2\rightarrow n}$ scatterings.
We have extended the capabilities of MadGraph5_aMC@NLO by implementing computations for asymmetric collisions, for example $p+Pb$, $\pi+Al$ or $Pb+W$ reactions. These new capabilities will soon be made available via the EU Virtual Access NLOAccess (https://nloaccess.in2p3.fr).
In my talk, I will present the objectives of the NLOAccess initiative, the implementation of asymmetric computations in MadGraph5_aMC@NLO along with the computation of the nuclear PDF and scale uncertainties, our cross checks with previous results and codes (e.g. Helac-Onia [2], FEWZ [3,4]), and predictions for $p+Pb$ collisions at the LHC for charm, bottom and top quark production, as well as fancier observables now made predictable with these new capabilities.
References:
[1] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, "The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations," JHEP 07 (2014) 079, arXiv:1405.0301 [hep-ph].
[2] H.-S. Shao, HELAC-Onia 2.0: an upgraded matrix-element and event generator for heavy quarkonium physics, Comput. Phys. Commun. 198 (2016) 238, doi:10.1016/j.cpc.2015.09.011, arXiv:1507.03435.
[3] R. Gavin, Y. Li, F. Petriello and S. Quackenbush, FEWZ 2.0: A code for hadronic Z production at next-to-next-to-leading order, Comput. Phys. Commun. 182 (2011) 2388 [1011.3540].
[4] S. Quackenbush, R. Gavin, Y. Li and F. Petriello, W physics at the LHC with FEWZ 2.1, Comput. Phys. Commun. 184 (2013) 209-214, doi:10.1016/j.cpc.2012.09.005.
We discuss measurements of the CP properties of the Higgs boson with the CMS detector, exploiting both Higgs boson production and decay, as well as searches for non-standard-model CP contributions and anomalous couplings in general.
Studies of the CP properties of the Higgs boson in various production modes and decay channels are presented. Limits on the mixing of CP-even and CP-odd Higgs states are set by exploiting the properties of diverse final states.
This talk presents the most recent measurements of Higgs boson mass and width by the ATLAS experiment exploiting the Higgs boson decays into two photons or four leptons, and using the full Run 2 dataset of pp collisions collected at 13 TeV at the LHC.
With the data collected in Run-2, the Higgs boson can be studied in several production processes using a wide range of decay modes. Combining data in these different channels provides a broad picture of the Higgs boson coupling strengths to SM particles. This talk will cover the latest combination of Higgs boson production and decay modes at CMS to measure the Higgs boson couplings.
With the full Run 2 pp collision dataset collected at 13 TeV, very detailed measurements of Higgs boson coupling properties can be performed using a variety of final states, identifying several production modes and its decays into bosons and fermions, and probing different regions of phase space with increasing precision. These measurements can then be combined to exploit the strengths of each channel, thus providing the most stringent global measurement of the Higgs coupling properties. This talk presents the latest combination of Higgs boson coupling measurements by the ATLAS experiment, discussing results in terms of production modes, branching fractions and Simplified Template Cross Sections, as well as their interpretations in the framework of kappa modifiers to the strengths of the various coupling and decay properties.
With the full Run 2 pp collision dataset collected at 13 TeV by the ATLAS experiment, it is now possible to perform detailed measurements of Higgs boson properties in many production and decay modes. In many cases, novel experimental techniques were developed to allow for these measurements. This talk presents a review of a representative selection of such novel techniques, including: embedding of simulated objects in data; special object weighting techniques to maximize statistical precision; developing special trigger, reconstruction, and identification algorithms for non-standard objects; special treatments of sources of two-point theory systematic uncertainties; special developments in likelihood-based fitting techniques; various innovative machine-learning approaches.
We assess the performance of different jet-clustering algorithms, in the presence of different resolution parameters and reconstruction procedures, in resolving fully hadronic final states emerging from the chain decay of the discovered Higgs boson into pairs of new identical Higgs states, the latter in turn decaying into bottom-antibottom quark pairs. We show that, at the Large Hadron Collider (LHC), both the efficiency of selecting the multi-jet final state and the ability to reconstruct from it the masses of the Higgs bosons (potentially) present in an event sample depend strongly on the choice of acceptance cuts, jet-clustering algorithm as well as its settings. Hence, we indicate the optimal choice of the latter for the purpose of establishing such a benchmark Beyond the SM (BSM) signal. We then repeat the exercise for a heavy Higgs boson cascading into two SM-like Higgs states, obtaining similar results.
Three mysteries remain after the discovery of the Higgs boson: (i) the origin of the masses of the neutrinos; (ii) the origin of the baryon asymmetry in the universe; and (iii) the nature of dark matter. The FCC-ee provides an exciting opportunity to address these mysteries with the discovery of heavy neutral leptons (HNLs, or N), in particular using the large sample ($5\cdot 10^{12}$) of Z bosons produced in early running at the Z resonance via the production process $e^+e^- \to Z \to \nu N$. The expected very small mixing between light and heavy neutrinos results in very long lifetimes for the HNL and in a spectacular signal topology. Although the final state in this reaction appears to be charge-insensitive, it is nevertheless possible to distinguish the Dirac vs Majorana nature of the neutrinos by a variety of methods that will be discussed. A Majorana nature could have considerable implications for the generation of the baryon asymmetry of the universe.
Accelerator-based neutrino experiments require precise understanding of their neutrino flux, which originates from meson decays in flight. These mesons are produced in hadron-nucleus interactions in extended targets. The cross-sections of the primary and secondary hadronic processes involved are generally poorly measured, and as a result hadron production is the leading systematic uncertainty source on neutrino flux prediction at all major experimental neutrino facilities. The NA61/SHINE multi-particle spectrometer at the CERN SPS has a dedicated program to make precise measurements of hadron production processes for neutrino beams, and has taken data on processes important for both T2K and the Fermilab long-baseline neutrino program. This talk will present the newest measurements of hadron production cross-sections at multiple energies and targets, as well as more specialized measurements using replicas of neutrino beam production targets. NA61/SHINE is completing a major detector upgrade, and physics measurements will begin in June 2022; over the next four years NA61/SHINE will perform a new program of measurements dedicated to neutrino physics including the production of mesons from a replica of the LBNF/DUNE target. Finally, a possible new low-energy beam facility for NA61/SHINE and its physics program will be discussed.
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a Gadolinium-loaded water Cherenkov detector located in the Booster Neutrino Beam at Fermilab. One of its primary physics goals is to measure the final state neutron multiplicity of neutrino-nucleus interactions. This measurement of the neutron yield as a function of the outgoing lepton kinematics will be useful to constrain systematic uncertainties and reduce biases in future long-baseline oscillation and cross-section experiments. ANNIE is also a testbed for innovative new detection technologies. It will make use of pioneering photodetectors called Large Area Picosecond Photodetectors (LAPPDs) with better than 100 picosecond time resolution, which will enhance its reconstruction capabilities and demonstrate the feasibility of this technology as a new tool in high energy physics. This talk will present the status of the experiment in terms of the overall progress, the deployment of the first LAPPD and an overview of recently taken beam and calibration data. Additional future R&D efforts and analysis opportunities involving the use of the novel detection medium of Water-based Liquid Scintillators will be briefly highlighted.
The main source of systematic uncertainty on neutrino cross section measurements at the GeV scale originates from the poor knowledge of the initial flux. The goal of cutting this uncertainty down to 1% can be achieved through the monitoring of charged leptons produced in association with neutrinos, by properly instrumenting the decay region of a conventional narrow-band neutrino beam. Large-angle muons and positrons from kaons are measured by a sampling calorimeter on the decay tunnel walls (the tagger), while muon stations after the hadron dump can be used to monitor the neutrino component from pion decays. This instrumentation can provide full control of both the muon and electron neutrino fluxes at all energies. Furthermore, the narrow momentum width (<10%) of the beam provides an $\mathcal{O}(10\%)$ measurement of the neutrino energy on an event-by-event basis, thanks to its correlation with the radial position of the interaction at the neutrino detector. The ENUBET project was funded by the ERC in 2016 to prove the feasibility of such a monitored neutrino beam and, since 2019, ENUBET has been a CERN Neutrino Platform experiment (NP06/ENUBET).
ENUBET will present the final results of the ERC project at ICHEP, together with the complete assessment of the feasibility of its concept. The breakthrough achieved by the project is the design of a horn-less beamline that allows for a 1% measurement of the $\nu_e$ and $\nu_{\mu}$ cross sections in about 3 years of data taking at the CERN SPS, using ProtoDUNE as the far detector. Thanks to the replacement of the horn with a static focusing system (2 s proton extraction), the pile-up is reduced by two orders of magnitude, and positrons from kaons, plus muons from pion and kaon decays, can be monitored with a signal-to-background ratio >2.
A full Geant4 simulation of the facility is employed to assess the final systematics budget on the neutrino fluxes with an extended likelihood fit of a model where the hadro-production, beamline geometry and detector-related uncertainties are parametrized by nuisance parameters. In parallel the collaboration is building a section of the decay tunnel instrumentation ("demonstrator", 1.65m in length, 7 ton mass) that will be exposed to the T9 particle beam at CERN-PS in autumn 2022, for a final validation of the detector performance.
The ENUBET design is such that the same sensitivity can be achieved using the proton accelerators available at FNAL, with ICARUS as the neutrino detector. The technology of a monitored neutrino beam has been proven to be feasible and cost-effective (the instrumentation contributes about 10% of the cost of a conventional neutrino beam), and its complexity does not significantly exceed that of standard short-baseline beams. ENUBET will thus play an important role in the systematics reduction programme of future long-baseline experiments, enhancing the physics reach of DUNE and Hyper-Kamiokande. In our contribution, we summarize the ENUBET design, its physics performance and the opportunities for its implementation on a timescale comparable with the next long-baseline neutrino experiments.
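The energy-radius correlation exploited above follows from two-body decay kinematics; a rough numerical illustration (with an assumed beam energy and geometry, not the ENUBET simulation) is:

```python
# Rough illustration of why the radial position of an interaction in a
# narrow-band beam tags the neutrino energy. For pi+ -> mu+ nu,
#   E_nu ~ 0.43 * E_pi / (1 + gamma^2 * theta^2).
m_pi = 0.13957   # GeV, charged-pion mass
E_pi = 8.5       # GeV, assumed central energy of the narrow-band beam
L = 50.0         # m, assumed decay-point-to-detector distance

gamma = E_pi / m_pi
for r in (0.5, 1.0, 2.0):           # radial position at the detector, in m
    theta = r / L                   # decay angle approximated by r / L
    E_nu = 0.43 * E_pi / (1.0 + (gamma * theta) ** 2)
    print(f"r = {r:3.1f} m  ->  E_nu = {E_nu:.2f} GeV")
```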
The DsTau experiment at the CERN SPS has been proposed to measure the inclusive differential cross-section of $D_s$ production, with its subsequent decay to a tau lepton, in p-A interactions. A precise measurement of the tau neutrino cross section would enable a search for new physics effects, such as testing the Lepton Universality (LU) of the Standard Model in neutrino interactions. The detector is based on nuclear emulsion, providing a sub-micron spatial resolution for the detection of short-length and small-"kink" decays. It is therefore very well suited to searching for the peculiar "double kink" decay topology of $D_s \to \tau \to X$. In 2021, the first physics run of the experiment was performed successfully. The collected data correspond to 30% of the aimed total statistics. In this presentation, the status of data taking and analysis will be presented.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment for oscillation physics and proton decay studies. The primary physics goals of DUNE are to perform neutrino oscillation physics studies, search for proton decay, detect supernova burst neutrinos, make solar neutrino measurements and carry out BSM searches. The liquid argon prototype detectors at CERN (ProtoDUNE) are a test-bed for DUNE's far detectors. ProtoDUNE is a 700-ton liquid argon time projection chamber (LArTPC) that has operated for over two years to inform the construction and operation of the first two, and possibly subsequent, 17-kt DUNE far detector LArTPC modules. Here we introduce the DUNE and ProtoDUNE experiments and their physics goals, and discuss recent progress and results.
DUNE will be a next-generation experiment aiming to provide precision measurements of the neutrino oscillation parameters. It will detect neutrinos generated in the LBNF beamline at Fermilab, using a Near Detector (ND) situated near the beam target where the neutrinos originate and a Far Detector (FD) located 1300 km away in South Dakota. A comparison of the spectra of neutrinos measured at the FD and the ND will allow for the extraction of oscillation probabilities from which the oscillation parameters can be inferred. The specific role of the ND will be to serve as the experiment's control: it will establish the no-oscillation null hypothesis, measure and monitor the beam, constrain systematic uncertainties, and provide essential measurements of the neutrino interactions to improve models. The ND complex will include three primary detector components: a liquid argon TPC called ND-LAr, a high-pressure gas TPC called ND-GAr and an on-axis beam monitor called SAND. The three detectors will serve important individual and overlapping functions, with ND-LAr and ND-GAr also able to move transverse to the beam's axis via the DUNE-PRISM program. The overall mission of the ND, as well as the three sub-detectors' unique capabilities and physics programs will be discussed during this talk, including the Beyond Standard Model physics searches that can be undertaken with the detectors at the near site.
The CMS Collaboration is preparing to replace its endcap calorimeters for the HL-LHC era with a high-granularity calorimeter (HGCAL). The HGCAL will have fine segmentation in both the transverse and longitudinal directions, and will be the first such calorimeter specifically optimized for particle-flow reconstruction to operate at a colliding-beam experiment. The proposed design uses silicon sensors as the active material in the regions of highest radiation, and plastic scintillator tiles equipped with on-tile silicon photomultipliers (SiPMs) in the less-challenging regions. The unprecedented transverse and longitudinal segmentation facilitates particle identification, particle-flow reconstruction and pileup rejection. We will give an overview of some of the novel reconstruction methods being explored. As part of the ongoing development and testing phase of the HGCAL, prototypes of both the silicon- and scintillator-based calorimeter sections were tested in beams at CERN in 2018. We report on the performance of the prototype detectors in terms of stability of noise and pedestals, MIP calibration, longitudinal/lateral shower shapes, precision timing, as well as energy linearity and resolution for electrons and pions. We compare the measurements with a detailed GEANT4 simulation. We also report on beam tests of the scintillator-based section at DESY in 2020 and 2021.
A new era of hadron collisions will start around 2028 with the High-Luminosity LHC, which will allow the collection of ten times more data than what has been gathered so far at the LHC. This is made possible by a higher instantaneous luminosity and a higher number of collisions per bunch crossing.
To meet the new trigger and data acquisition requirements and withstand the high expected radiation doses at the High-Luminosity LHC, the ATLAS Liquid Argon Calorimeter readout electronics will be upgraded. The triangular calorimeter signals are amplified and shaped by analogue electronics over a dynamic range of 16 bits, with low noise and excellent linearity. Developments of low-power preamplifiers and shapers to meet these requirements are ongoing in 130 nm CMOS technology. In order to digitize the analogue signals on two gains after shaping, a radiation-hard, low-power 40 MHz 14-bit ADC is being developed using a pipeline+SAR architecture in 65 nm CMOS. The characterization of the prototypes of these on-detector components is promising, indicating that they will likely fulfill all the requirements.
The signals will be sent at 40 MHz to the off-detector electronics, where FPGAs connected through high-speed links will perform energy and time reconstruction through the application of corrections and digital filtering. Reduced data are then sent with low latency to the first-level trigger system, while the full data are buffered until the reception of the trigger decision signal. If an event is triggered, the full data are sent to the ATLAS readout system. The data-processing, control, and timing functions will be realized with dedicated boards using the ATCA technology.
The results of tests of prototypes of the on-detector components will be presented. The design of the off-detector boards along with the performance of the first prototypes will be discussed. In addition, the architecture of the firmware and processing algorithms will be shown.
The High Luminosity upgrade of the LHC (HL-LHC) at CERN will provide unprecedented instantaneous and integrated luminosities of around $5 \times 10^{34}$ cm$^{-2}$s$^{-1}$ and 3000 fb$^{-1}$, respectively. An average of 140 to 200 collisions per bunch crossing (pileup) is expected. In the barrel region of the Compact Muon Solenoid (CMS) electromagnetic calorimeter (ECAL), the lead tungstate crystals and avalanche photodiodes (APDs) will continue to perform well, while the entire readout and trigger electronics will be replaced. The noise increase in the APDs, due to radiation-induced dark current, will be mitigated by reducing the ECAL operating temperature. The trigger decision will be moved off-detector and performed by powerful and flexible FPGA processors.
The upgraded ECAL will greatly improve the time resolution for photons and electrons with energies above 10 GeV. Together with the introduction of a new timing detector designed to perform measurements with a resolution of a few tens of picoseconds for minimum ionizing particles, the CMS detector will be able to precisely reconstruct the primary interaction vertex under the described pileup conditions.
We present the status of the ECAL barrel upgrade, including time resolution results from beam tests conducted during 2018 and 2021 at the CERN SPS.
The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment. It is a sampling calorimeter with steel as the absorber and scintillators as the active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The TileCal response and its readout electronics are monitored to better than 1% using radioactive-source, laser and charge-injection systems.
Both the on- and off-detector TileCal electronics will undergo significant upgrades in preparation for the high-luminosity phase of the LHC (HL-LHC), expected to begin in 2029, so that the system can cope with the increased radiation levels and out-of-time pileup of the HL-LHC and can meet the requirements of a 1 MHz trigger.
PMT signals from every TileCal cell will be digitized and sent directly to the back-end electronics, where the signals are reconstructed, stored, and sent to the first level of trigger at a rate of 40 MHz. This improved readout architecture allows more complex trigger algorithms to be developed.
The TileCal system design for the HL-LHC results from a long R&D program cross-validated by test beam studies and a demonstrator module. This module has reverse compatibility with the existing system and was inserted in ATLAS in August 2019 to test current detector conditions. The new design was tested with a beam of particles in 2021 at CERN SPS.
The main features of the TileCal upgrade program and results obtained from the Demonstrator tests and test beam campaigns will be discussed.
Within the upgrade program of the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) for the HL-LHC data taking, the installation of a new timing layer to measure the time of minimum ionizing particles (MIPs) with a time resolution of ~30-40 ps is planned. The time information of the tracks from this new MIP Timing Detector (MTD) will improve the rejection of spurious tracks and vertices arising from the expected harsh pile-up conditions from machine operation. At the same time this detector will provide particle identification capabilities based on the time-of-flight, and will bring unique physics opportunities for interesting signatures such as those including long-lived particles. An overview of these possibilities is given, using the state of the art of the simulation and reconstruction of the MTD detector.
The LHC luminosity will significantly increase in the coming years. Many of the current detectors in the different subsystems need to be replaced or upgraded. The new ones should be capable not only of coping with the high particle rate, but also of providing improved time information to reduce the data ambiguity due to the expected high pileup. The CMS collaboration has shown that new improved RPCs, using a smaller gas gap (1.4 mm) and low-resistivity High Pressure Laminate, can stand rates up to 2 kHz/cm$^2$. They are equipped with new electronics sensitive to low signal charges. This electronics was developed to read out the RPC detectors from both sides of a strip and, using the timing information, to identify the position along it. The excellent relative resolution of ~200 ps leads to a spatial resolution of a few cm. The absolute time measurement, with an RPC signal resolution of around 500 ps, will also reduce the data ambiguity due to the expected high pileup at the Level-1 trigger. Four demonstrator chambers have just been installed in the CMS cavern. These chambers were qualified in test beams at the Gamma Irradiation Facility (GIF), located on one of the SPS beam lines at CERN. This talk will present the results of the tests done at GIF, as well as brand new results from commissioning at CMS.
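A back-of-envelope check of the quoted numbers (the signal propagation speed is our assumption, not taken from the talk) shows how a ~200 ps relative time resolution translates into a few-cm position resolution along the strip:

```python
# Position from two-ended strip readout: the hit location follows from the
# arrival-time difference at the two strip ends.
c = 0.2998        # m/ns, speed of light
v = 0.6 * c       # m/ns, assumed signal propagation speed along the strip

def position_along_strip(t_left, t_right, strip_length):
    """Hit position (m) from the two arrival times (ns)."""
    return 0.5 * strip_length + 0.5 * v * (t_right - t_left)

sigma_dt = 0.2    # ns, the ~200 ps relative resolution quoted above
sigma_x = 0.5 * v * sigma_dt
print(f"sigma_x ~ {100 * sigma_x:.1f} cm")   # -> ~1.8 cm, i.e. a few cm
```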
The study of CP violation patterns across the phase space of multibody charmless B decays is of great interest as it brings information on the dynamics of residual strong interactions between quarks in the initial and final states of the decay. Understanding this dynamics is fundamental to distinguish between QCD effects and potential contributions from physics beyond the standard model. In this work we present the most recent measurements of CP violation in multibody charmless B decays at LHCb.
Measurements of decay-time-dependent CP violation are chief goals of the Belle II physics program. Comparison between penguin-dominated $b\to q\bar{q}s$ and tree-dominated $b\to c\bar{c}s$ results allows for stringent tests of CKM unitarity that are sensitive to non-SM physics. This talk presents first Belle II results on the mixing rate and lifetime of $B^0$ mesons, an essential validation of time-dependent measurements that requires detailed control of complex high-level capabilities such as flavor tagging and decay-time resolution modeling. Recent results on $B^0\to K^0_S\pi^0\gamma$ and $B^0\to K_S^0K_S^0K_S^0$ are also reported.
Outstanding vertexing performance and a low-background environment are key enablers of a systematic Belle II program targeted at measurements of charm hadron lifetimes. Recent results from measurements of the $D^0$ meson, $D^+$ meson and $\Lambda_c$ baryon lifetimes are presented. The results are the most precise to date.
BESIII has collected 2.93 and 6.32 fb$^{-1}$ of $e^+e^-$ collision data samples at 3.773 and 4.178-4.226 GeV, respectively. We will report precision measurements of $f_{D_s}$, $|V_{cs}|$ and tests of lepton flavor universality by studying the leptonic decays $D_s^+ \to \ell^+\nu$, with $\tau^+ \to \rho^+\nu$, $\pi^+\nu$ and $e^+\nu\nu$. We will also report the observation of the semileptonic decay $D^0 \to \rho^-\mu^+\nu$ and the associated lepton flavor universality test, as well as studies of other semileptonic decays such as $D_s^+ \to \pi^0\pi^0 e^+\nu$ and $D_s^+ \to K_S^0 K_S^0 e^+\nu$.
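For orientation, the textbook SM rate from which $f_{D_s}$ and $|V_{cs}|$ are extracted in such leptonic-decay analyses is

$$
\Gamma(D_s^+ \to \ell^+\nu_\ell) \;=\; \frac{G_F^2}{8\pi}\, f_{D_s}^2\, |V_{cs}|^2\, m_\ell^2\, m_{D_s} \left(1 - \frac{m_\ell^2}{m_{D_s}^2}\right)^{2},
$$

so the ratio of the $\tau^+\nu$ and $\mu^+\nu$ rates is fixed in the SM purely by the lepton and $D_s$ masses, which is what makes these decays a clean lepton-flavor-universality test.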
BESIII has collected 4.4 fb$^{-1}$ of $e^+e^-$ collision data between 4.6 and 4.7 GeV. These unique data offer an ideal opportunity to determine absolute branching fractions of $\Lambda_c^+$ decays. We will report the first observation of $\Lambda_c^+ \to n\pi^+$. Meanwhile, we will report prospects for the studies of semileptonic and other hadronic decays of $\Lambda_c^+$ in the near future.
The Cabibbo-Kobayashi-Maskawa (CKM) mechanism predicts that a single parameter must be responsible for CP-violating phenomena in the different quark sectors of the Standard Model (SM). Despite this minimal picture, challenged by non-SM physics, the CKM mechanism has so far been verified in the bottom and strange sectors, but lacks tests in the complementary charm sector. To this end, urgent theoretical progress is needed in order to provide an SM estimate of the recent LHCb measurement of direct CP violation in charm-meson two-body decays, which will be substantially improved by new data expected along this decade from LHCb and Belle II. Re-scattering effects are particularly relevant for a meaningful theoretical account of the amplitudes involved in this observable, as signaled by the presence of large strong phases. I discuss the computation of the latter effects based on dispersion relations, and perform a global fit combination, with the CKMfitter statistical package, of the available data on branching ratios and CP asymmetries in order to assess the size of the SM CP-violating contributions to charm-meson decays into $\pi \pi$ and $K K$.
In lattice gauge theories, to calculate PDFs from first principles it is convenient to consider the Ioffe-time distribution, defined through gauge-invariant bi-local operators with spacelike separation. Lattice calculations provide values for a limited range of the distance separating the bi-local operators. In order to perform the Fourier transform and obtain the pseudo- and quasi-PDFs, it is then necessary to extrapolate the large-distance behavior.
I will discuss the formalism one may use to study the behavior of the Ioffe-time distribution at large distances and show that the pseudo-PDF and quasi-PDF are very different in this regime. Using light-ray operators, I will also show that the higher-twist corrections of the quasi-PDF come in not as inverse powers of $P$ but as inverse powers of $x_B P$.
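For orientation, in a common convention the pseudo-PDF and quasi-PDF are both obtained from the Ioffe-time distribution $\mathcal{M}(\nu, z^2)$, with $\nu = P z$, but with different variables held fixed in the Fourier transform:

$$
\mathcal{P}(x, z^2) \;=\; \frac{1}{2\pi}\int d\nu\, e^{-i x \nu}\, \mathcal{M}(\nu, z^2),
\qquad
Q(x, P) \;=\; \frac{P}{2\pi}\int dz\, e^{i x P z}\, \mathcal{M}(P z, z^2),
$$

which makes explicit why their large-distance behavior, and hence the extrapolation discussed above, enters so differently in the two cases.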
The measurement of neutral mesons in pp collisions allows a test of perturbative QCD calculations and represents an important baseline for heavy-ion studies. Neutral mesons are reconstructed in ALICE with multiple methods in a very wide range of transverse momenta and thus impose restrictions on the parton distribution functions and fragmentation functions over a wide kinematic region. Moreover, observations in high-multiplicity pp and p-Pb collisions show surprising similarities with those in heavy-ion collisions. Measured identified particle spectra in hard pp collisions give further insight into the hadron chemistry in such high charged-particle multiplicity events.
In this talk, detailed measurements of the neutral pion, eta and omega mesons will be presented in several multiplicity classes in pp collisions at $\sqrt{s}$ = 13 TeV. The different analysis techniques, using two different calorimeters and the reconstruction of conversion photons via their $e^{+}e^{-}$ pairs, will be briefly explained. In particular, the inclusion of the merged-photon-cluster analysis in the calorimeter extends the neutral pion measurement up to an unprecedented $p_{\rm T}$ of 200 GeV/$c$, the highest reached for identified hadron spectra in pp and p-Pb collisions. Results will be compared to pQCD calculations.
In this contribution, we present the latest measurements of $\mathrm{D}^0$, $\mathrm{D}^+$ and $\mathrm{D_s}^+$ mesons together with the final measurements of $\Lambda_\mathrm{c}^+$, $\Xi_\mathrm{c}^{0,+}$, $\Sigma_\mathrm{c}^{0,++}$, and the first measurement of $\Omega_\mathrm{c}^0$ baryons performed with the ALICE detector at midrapidity in pp collisions at $\sqrt{s}=5.02$ and $\sqrt{s}=13$ TeV. Recent measurements of charm-baryon production at midrapidity in small systems show a baryon-to-meson ratio significantly higher than that in $\mathrm{e^+e^-}$ collisions, suggesting that the fragmentation of charm is not universal across different collision systems. Thus, measurements of charm-baryon production are crucial to study the charm quark hadronization in a partonic rich environment like the one produced in pp collisions at the LHC energies.
Furthermore, the recent $\Lambda_\mathrm{c}^+/\mathrm{D}^0$ yield ratio, measured down to $p_\mathrm{T}=0$, and the new $\Xi_\mathrm{c}^{0,+}/\mathrm{D}^0$ yield ratio in p-Pb collisions will be discussed. The measurement of charm baryons in p-nucleus collisions provides important information about possible additional modification of hadronization mechanisms as well as on Cold Nuclear Matter effects and on the possible presence of collective effects that could modify the production of heavy-flavour hadrons.
Finally, the first measurements of charm fragmentation fractions and charm production cross-section at midrapidity per unit of rapidity will be shown for both pp and p-Pb collisions using all measured single charm ground state hadrons.
I will discuss nonperturbative flavor correlations between pairs of leading and next-to-leading charged hadrons within jets at the Electron-Ion Collider (EIC). We introduce a charge correlation ratio observable $r_c$ that distinguishes same- and opposite-sign charged pairs. Using Monte Carlo simulations with different event generators, $r_c$ is examined as a function of various kinematic variables for different combinations of hadron species, and the feasibility of such measurements at the EIC is demonstrated. I will also discuss the correlation between leading hadrons and leading subjets which encodes the transition between perturbative and nonperturbative regimes. The precision hadronization study we propose will provide new tests of hadronization models and hopefully lead to improved quantitative, and perhaps eventually analytic, understanding of nonperturbative QCD dynamics.
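A minimal sketch of how such a charge-correlation ratio could be computed from generator output is given below; the specific definition used here, $r_c = (N_{\rm opp} - N_{\rm same})/(N_{\rm opp} + N_{\rm same})$, is an illustrative assumption, not necessarily the one used in the talk.

```python
# Toy charge-correlation ratio for the two leading charged hadrons in a jet.
def charge_correlation_ratio(pairs):
    """pairs: iterable of (q1, q2) electric charges of the leading and
    next-to-leading hadrons; returns (N_opp - N_same) / (N_opp + N_same)."""
    same = sum(1 for q1, q2 in pairs if q1 * q2 > 0)
    opp = sum(1 for q1, q2 in pairs if q1 * q2 < 0)
    return (opp - same) / (opp + same)

# e.g. charges of leading-hadron pairs from ten toy events
toy_pairs = [(+1, -1), (+1, -1), (-1, -1), (+1, +1), (-1, +1),
             (+1, -1), (-1, -1), (+1, -1), (-1, +1), (+1, -1)]
print(f"r_c = {charge_correlation_ratio(toy_pairs):+.2f}")
```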
The observation of triple-$J/\psi$ production in a single pp collision is reported. The results are based on data collected by the CMS experiment in 13 TeV pp collisions. The measured effective double parton scattering cross section is compared to previous measurements.
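For orientation, the effective cross section quoted above is conventionally defined through the double-parton-scattering "pocket formula"; for two identical hard processes it reads (standard convention, not taken from the talk)

$$
\sigma^{\mathrm{DPS}}_{J/\psi\,J/\psi} \;=\; \frac{1}{2}\,\frac{\sigma_{J/\psi}\,\sigma_{J/\psi}}{\sigma_{\mathrm{eff}}},
$$

so a smaller measured $\sigma_{\mathrm{eff}}$ corresponds to a larger double-parton-scattering contribution.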
The LHCb experiment at the LHC is well suited to studying how hadrons are formed from scattered quarks and gluons in energetic proton-proton collisions. The hadronization and fragmentation processes can be studied via measurements such as those involving jet substructure. Equipped with a forward spectrometer, the LHCb experiment achieves excellent transverse momentum resolution for charged tracks, which, along with excellent particle identification capabilities, offers a unique opportunity to measure hadronization variables with great precision. This talk will present measurements of identified hadrons within light-quark-initiated jets as well as other ongoing QCD measurements at LHCb.
In many BSM theories the top quark is hypothesized to have an enhanced non-standard or extremely rare interaction with other SM particles. This presentation covers the latest CMS results in this regard, from both direct searches and precise measurements, including flavor-changing neutral currents (FCNC) and tests of discrete symmetries with the top quark.
The large integrated luminosity collected by the ATLAS detector at the highest proton-proton collision energy provided by the LHC allows probing the presence of new physics that could enhance the rate of rare SM processes. The LHC can therefore gain considerable sensitivity to Flavour Changing Neutral Current (FCNC) interactions of the top quark. In the SM, FCNC decays of the top quark to another up-type quark and a neutral boson are so suppressed that any measurable branching ratio for such a decay is an indication of new physics. The ATLAS experiment has performed searches for FCNC couplings of the top quark with a photon, gluon, Z boson or Higgs boson. In this contribution, the most recent results are presented, which include the complete data set of 140 fb$^{-1}$ at 13 TeV collected at the LHC during Run 2 (2015-2018). The large data set, together with improvements in the analysis, yields a strong improvement in the expected sensitivity compared to previous experiments and partial analyses of the LHC data.
KKMChh adapts the CEEX (Coherent Exclusive Exponentiation) of the Monte Carlo Program KKMC for Z boson production and decay to hadron scattering. Amplitude-level soft photon exponentiation of initial and final state radiation, together with initial-final interference, is matched to a perturbative calculation to second order next-to-leading logarithm, and electroweak corrections to the hard process are included via DIZET. The first release of KKMChh included complete initial state radiation calculated with current quark masses. This version assumes idealized pure-QCD PDFs with negligible QED contamination. Traditional PDFs neglect QED evolution but are not necessarily free of QED influence in the data. QED-corrected PDFs provide a firmer starting point for precision QED work. We describe a new procedure for matching KKMChh's initial state radiation to a QED-corrected PDF, and compare this to earlier approaches. Some phenomenological applications are described.
The weak mixing angle is a probe of the vector-axial coupling structure of electroweak interactions. It has been measured precisely at the Z-pole by the LEP experiments and by SLD, but its energy dependence above $m_Z$ remains unconstrained.
In this contribution we propose to exploit measurements of Neutral-Current Drell-Yan production at the Large Hadron Collider at large dilepton invariant masses to determine the scale dependence of the weak mixing angle in the MSbar renormalisation scheme, $\sin^2\theta_W(\mu)$.
Such a measurement can be used to confirm the Standard Model predictions for the MSbar running at TeV scales, and to set model-independent constraints on new states with electroweak quantum numbers.
To this end, we present an implementation of $\sin^2\theta_W(\mu)$ in a Monte Carlo event generator in Powheg-Box, which we use to explore the potential of future dedicated analyses with the LHC Run 3 and High-Luminosity datasets.
In particular, we study the impact of higher-order electroweak corrections and of uncertainties due to the knowledge of parton distribution functions.
In this talk, we present the analytic evaluation of the virtual corrections to di-muon production in electron-positron collisions in QED, up to second order in the fine-structure constant, retaining the full dependence on the muon mass and treating the electron as massless.
We discuss the computational details, and the high level of automation they required, from the diagram generation to the amplitude decomposition and the evaluation of the master integrals, along with the UV renormalization and the IR singularity structure.
We also present preliminary results on:
i) a crossing-related process, the two-loop amplitude for muon-electron scattering in QED, relevant for the MUonE experiment;
ii) the extension to the process $q\bar{q} \to t\bar{t}$ in QCD.
For both the FCC-ee and the ILC, to properly exploit their respective precision physics programs, the theoretical precision tag on the luminosity will need to be improved relative to the 0.054% (0.061%) results at LEP at $M_Z$, where the former (latter) LEP result does (does not) include the pairs correction. At the FCC-ee at $M_Z$, one needs an improvement to 0.01%, for example. We present an overview of the roads one may take to reach the required 0.01% precision tag at the FCC-ee, and of the corresponding precision expectations for the FCC-ee$_{350}$, ILC$_{500}$, ILC$_{1000}$, and CLIC$_{3000}$ setups.
The international FCC study group published in 2019 a Conceptual Design Report for an electron-positron collider with a centre-of-mass energy from 90 to 365 GeV, a circumference of 98 km and beam currents of up to 1.4 A per beam. The high beam current of this collider creates challenging requirements for the injection chain, and all aspects of the linac need to be carefully reconsidered and revisited, including the injection time structure. The beam dynamics studies for the full linac, damping ring and transfer lines are major activities of the injector complex design. A key point is that any increase in positron production and capture efficiency reduces the cost and complexity of the driver linac and the heat and radiation load of the converter system, and increases the operational margin. The PSI Positron Production (P$^3$) project, currently in development at PSI, is the proposed proof-of-principle experiment for a potential FCC-ee positron source. Capture and transport of the secondary positron beam from the production target to the damping ring are a key challenge for the FCC-ee, due to the large emittance and energy spread. The use of novel matching and focusing methods has been studied, such as high-temperature superconducting (HTS) solenoids, where recent simulations show a considerably higher positron yield with respect to the state of the art. The experiment is to be hosted at SwissFEL at PSI, where a 6 GeV electron beam and a tungsten target can be used to generate the positron distribution. In this contribution we will give an overview of the status of the injector complex study and introduce the P$^3$ project, both developed in the context of the CHART collaboration.
In this talk the current status and plans are presented for the LHeC accelerator concept, towards the new HEP strategy update in about five years' time. We review the ERL and the IR, including the possibility of a joint $eh/hh$ interaction region. The talk also covers the FCC-he and refers to a separate presentation of the ERL demonstration facility PERLE. It is based on the comprehensive Conceptual Design Report update [1] and the recent work [2].
[1] P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
[2] K. D. J. Andre et al., An experiment for electron-hadron scattering at the LHC, Eur. Phys. J. C 82 (2022) 1, 40, e-Print: 2201.02436 [hep-ex].
The realisation of the LHeC and the FCC-he at CERN requires the development of the energy-recovery technique in multipass mode and for large currents of $\mathcal{O}(10)$ mA in SRF cavities. For this purpose, a technology development facility, PERLE, is under design, to be built at IJCLab Orsay. It has the key LHeC ERL parameters in terms of configuration, source, current, frequency and technical solutions (cryomodule, stacked magnets). In this talk we review the design and comment on the status of PERLE.
Electron-hadron colliders are the ultimate tool for high-precision quantum chromodynamics studies and for probing the internal structure of hadrons. The Hadron Electron Ring Accelerator HERA (DESY, Hamburg, Germany) was the first and so far only electron-hadron collider ever operated (1991-2007). In 2019 the U.S. Department of Energy initiated the Electron-Ion Collider (EIC) project, the next electron-hadron collider, currently under construction at BNL (Upton, NY) in partnership with JLab (Newport News, VA). The EIC builds on the infrastructure of the current Relativistic Heavy Ion Collider (RHIC) complex at BNL. The EIC will collide 5 to 18 GeV polarized electrons with 41 to 275 GeV polarized protons, polarized light ions with energies up to 166 GeV/u, and unpolarized heavy ions up to 110 GeV/u. The EIC is a high-luminosity collider designed to provide $10^{34}$ cm$^{-2}$s$^{-1}$ at 105 GeV center-of-mass energy collisions between electrons and protons. The project scope includes one colliding region with its detector, but two colliding regions are feasible. This talk will give an overview of the EIC design, its main technological challenges and the timeline.
The Future Circular Collider (FCC) study was launched as a worldwide international collaboration hosted by CERN with the goal of pushing the field to the next energy frontier beyond the LHC. The mass of particles that could be directly produced is increased by almost an order of magnitude, and the subatomic distances to be studied are decreased by the same proportion. FCC covers two accelerators, namely an energy-frontier hadron collider (FCC-hh) and a highest luminosity, high-energy lepton collider (FCC-ee), sharing the same 100 km tunnel infrastructure. This talk focuses on the FCC-hh, summarising its key features such as accelerator design, performance reach, and underlying technologies. The proposed vision is based on the conceptual design report, which represents a milestone of this study but also covers more recent design activities.
As part of the Physics Beyond Colliders study group, the CERN Gamma Factory is an innovative proposal to exploit the potential of CERN to accelerate partially stripped ions with high intensity at ultra-relativistic energies, such that their low-lying atomic levels can be excited by state-of-the-art optical systems. This may enable a very broad range of new applications, from atomic physics to particle physics, including their applied counterparts, thanks to the production of high-energy photon beams (up to 400 MeV) with unprecedented intensity (up to $10^{18}$ photons per second). A large variety of theoretical developments have reinforced the interest of the community in this project in the past two years, as shown in the special issue of Annalen der Physik (https://onlinelibrary.wiley.com/toc/15213889/2022/534/3). Recent progress towards the realization of a proof-of-principle experiment at the CERN SPS will be shown.
The observations of the Advanced LIGO and Advanced Virgo gravitational-wave detectors have so far led to the confident identification of 90 signals from the merger of compact binary systems composed of black holes and neutron stars. These events have offered a new testing ground for General Relativity and better insights into the nuclear equation of state for neutron stars, as well as the discovery of a new population of black holes. For each detection, a thorough event validation procedure has been completed in order to carefully assess the impact of potential data quality issues, such as instrumental artefacts, on the analysis results. This has increased the confidence in the astrophysical origin of the observed signals, as well as in the accuracy of the estimated source parameters. In this presentation, we will describe the most relevant steps of the validation process, in the context of the last observing run (O3) of the Advanced gravitational-wave detectors. Moreover, these detectors are currently undergoing a phase of upgrades in preparation for the next joint observing run (O4), scheduled to begin in December 2022. The predicted improvement in sensitivity is expected to produce a higher rate of candidate events, which will constitute a new challenge for the validation procedures.
Sources of geophysical noise (such as wind, sea waves and earthquakes) or of anthropogenic noise (nearby activities, road traffic, etc.) impact ground-based gravitational-wave (GW) interferometric detectors, causing transient sensitivity worsening and gaps in data taking.
During the one-year-long third Observing Run (O3: from April 01, 2019 to March 27, 2020), the Virgo Collaboration collected a large dataset, which has been used to study the response of the Advanced Virgo detector to a variety of environmental conditions. We correlated environmental parameters with global detector performance, such as the observation range (the distance up to which a given GW source could be detected), the duty cycle and control losses (losses of the global working point, the instrument configuration needed to observe the cosmos). Where possible, we identified weaknesses in the detector that will be used to develop strategies to improve Virgo's robustness against external disturbances for the next data-taking period, O4, currently planned to start at the end of 2022. The lessons learned could also provide useful insights for the design of the next generation of ground-based interferometers.
The associated article has been posted to arXiv recently (https://arxiv.org/abs/2203.04014) and submitted to a journal.
The characteristics of the cosmic microwave background provide circumstantial evidence that the hot radiation-dominated epoch in the early universe was preceded by a period of inflationary expansion. Here, it will be shown how a measurement of the stochastic gravitational wave background can reveal the cosmic history and the physical conditions during inflation, subsequent pre- and reheating, and the beginning of the hot big bang era. This will be exemplified with a particularly well-motivated and predictive minimal extension of the Standard Model which is known to provide a complete model for particle physics -- up to the Planck scale, and for cosmology -- back to inflation.
In 2006, A. Cohen and S. Glashow first presented the idea of Very Special Relativity (VSR), in which space-time invariance is restricted to a subgroup of the full Lorentz group, usually the subgroup $SIM(2)$. The advantage of this theory is that, while it does not affect the classical predictions of Special Relativity, it can explain the existence of neutrino masses without the addition of new exotic particles or tiny twisted extra dimensions, which until now have not been observed in experiments.
The addition of either $P$, $CP$, or $T$ invariance to $SIM(2)$ symmetry enlarges the entire symmetry group again to the whole Lorentz group. That implies the absence of VSR effects in theories where one of the above three discrete transformations is conserved.
Since we know, thanks to the Sakharov conditions, that these discrete symmetries must be broken in cosmology, the effects of VSR in this framework become worthy of study. In our work, we construct a $SIM(2)$-invariant version of linearized gravity, describing the dynamics of the space-time perturbation field $h_{\mu \nu}$. Such a theory may be used as a starting point for the study of VSR consequences in the propagation of gravitational waves in a Lorentz-breaking background.
In the end, our analysis corresponds to a massive graviton model. That could be of great interest given the various recent applications being explored for massive gravity, from dark matter to cosmology, despite the strong bounds we already have on the graviton mass.
Until now, massive gravity models have usually been constructed as Lorentz invariant. Nevertheless, as in the case of Electromagnetism and the Proca theory, there is no way of trivially preserving both Lorentz and gauge invariance when giving mass to the graviton.
Giving up gauge invariance directly leads to the appearance of three additional degrees of freedom (D.o.F.) with respect to those of General Relativity (GR), which are responsible for various pathologies of these theories, like the vDVZ discontinuity and ghost modes (e.g. the Boulware-Deser ghost). Many of these problems have been solved through the Vainshtein mechanism and the fine-tuned dRGT action that avoids ghosts, making dRGT massive gravity a good candidate to solve the cosmological constant problem. Even so, cosmology brings up new problems and instabilities that have not yet been solved.
Giving up Lorentz invariance, which is what we consider in our work by implementing VSR, is the other viable possibility for massive gravity. Experience with VSR electrodynamics and VSR massive neutrinos tells us that VSR extensions avoid the introduction of ghosts in the spectrum: in fact, as we will see, the gauge invariance of our formulation does not allow additional D.o.F. other than the usual two of the massless graviton, getting around most of the problems cited above, like the Boulware-Deser ghost. Nevertheless, these advantages come at the price of considering new non-local terms in the theory and assuming a preferred space-time null direction, represented by the lightlike four-vector $n^\mu$.
Finally, through the geodesic deviation equation, we compare some results for classical gravitational waves (GW) with the VSR ones: the ratios between VSR effects and classical ones are proportional to $(m_g/E)^2$, $E$ being the energy of a graviton in the GW. For GW detectable by the LIGO and Virgo interferometers this ratio is at most $10^{-20}$. However, for GW in the lower frequency range of future detectors like LISA, the ratio increases significantly, to $10^{-10}$, which, combined with the anisotropic nature of VSR phenomena, may lead to observable effects.
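The quoted orders of magnitude can be checked with elementary arithmetic, assuming a graviton mass near the current LIGO/Virgo bound (the mass value below is our assumption):

```python
# Order-of-magnitude check of the VSR/classical ratio (m_g / E)^2, with
# E = h * f the energy of a graviton in a wave of frequency f.
h_eV_s = 4.136e-15    # Planck constant in eV s
m_g = 5e-23           # eV, assumed graviton mass near the LIGO/Virgo bound

for name, f in (("LIGO/Virgo (100 Hz)", 1e2), ("LISA (1 mHz)", 1e-3)):
    E = h_eV_s * f    # graviton energy in the wave
    print(f"{name}: (m_g/E)^2 ~ {(m_g / E) ** 2:.0e}")
# -> ~1e-20 and ~1e-10, matching the ratios quoted above.
```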
Gravitational-wave (GW) cosmology provides a new way to measure the expansion history of the Universe, based on the fact that GWs are direct distance tracers. At the same time, this property allows tests of gravity at cosmological scales, since in the presence of modifications of General Relativity the distance inferred from GWs is modified, a phenomenon known as "modified GW propagation". On the other hand, obtaining the redshift (whose knowledge is essential to test cosmology) is the challenge of GW cosmology. In the absence of a direct electromagnetic counterpart to the GW event, the source goes under the name of "dark siren" and statistical techniques are used.
In this talk, I will present measurements of the Hubble parameter and bounds on modified GW propagation, obtained from the latest Gravitational Wave Transient Catalog 3 with new, independent, open-source codes implementing the statistical correlation between GW events and galaxy catalogues and information from the mass distribution of binary black holes.
I will discuss methodological aspects, relevant sources of systematics, the interplay with population studies, current challenges and possible ways forward.
I will finally present perspectives for the use of statistical dark siren techniques with third generation (3G) ground-based GW detectors, in particular the Einstein Telescope observatory.
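A minimal toy of the statistical dark-siren method is sketched below; it uses a low-redshift approximation $d_L \simeq cz/H_0$ and equal host weights, whereas the real analyses mentioned above include selection effects, population priors and full cosmology.

```python
import numpy as np

c = 2.998e5                                  # km/s
H0_grid = np.linspace(40.0, 140.0, 201)      # km/s/Mpc
d_obs, sigma_d = 450.0, 60.0                 # Mpc, toy GW distance posterior
z_gal = np.array([0.08, 0.095, 0.11, 0.13])  # toy catalogue redshifts

posterior = np.zeros_like(H0_grid)
for i, H0 in enumerate(H0_grid):
    d_pred = c * z_gal / H0                  # each galaxy's distance if H0 is true
    # marginalise the GW likelihood over candidate hosts (equal weights)
    posterior[i] = np.sum(np.exp(-0.5 * ((d_pred - d_obs) / sigma_d) ** 2))
posterior /= np.trapz(posterior, H0_grid)    # normalise

print(f"posterior peaks at H0 ~ {H0_grid[np.argmax(posterior)]:.0f} km/s/Mpc")
```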
In this talk, I will evaluate the potential for extremely high-precision astrometry of a small number of non-magnetic, photometrically stable hot white dwarfs (WD) located at $\sim$ kpc distances to access interesting sources in the gravitational-wave (GW) frequency band from 10 nHz to 1 $\mu$Hz. Previous astrometric studies have focused on the potential for less precise, large-scale astrometric surveys; the work I will discuss provides an alternative optimization approach to this problem. I will show that photometric jitter from starspots on WD of this type is bounded to be small enough to permit such an approach, and discuss possible noise arising from stellar reflex motion induced by orbiting objects. Interesting sources in this band are expected at characteristic strains around $h_c \sim 10^{-17} \times \left( \mu\text{Hz} / f_{\text{GW}} \right)$. I will outline the mission parameters needed to obtain the requisite angular sensitivity for a small population of such WD, $\Delta \theta \sim h_c$ after integrating for $T \sim 1/f_{\text{GW}}$, and show that a space-based stellar interferometer with few-meter-scale collecting dishes and baselines of $\mathcal{O}(100\ \text{km})$ is sufficient to achieve the target strain over at least half the band of interest. This collector size is broadly in line with the collectors proposed for some formation-flown, space-based astrometer or optical synthetic-aperture imaging array concepts; the proposed baseline is, however, somewhat larger than the km-scale baselines discussed for those concepts. The ability to probe GWs with such a mission bolsters its science case.
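The consistency of the quoted mission parameters can be illustrated with a photon-noise estimate; the wavelength, photon rate and noise formula below are our assumptions, not numbers from the talk:

```python
import math

def h_c(f_GW):
    """Characteristic strain quoted above: 1e-17 * (uHz / f_GW)."""
    return 1e-17 * (1e-6 / f_GW)

lam = 550e-9      # m, optical wavelength (assumed)
B = 100e3         # m, interferometer baseline, O(100 km)
f = 1e-6          # Hz; integrate for T ~ 1/f
T = 1.0 / f       # s
rate = 1e4        # photons/s collected from the target WD (assumed)

# photon-noise-limited angular precision of a two-element interferometer
sigma_theta = lam / (2.0 * math.pi * B) / math.sqrt(rate * T)
print(f"target h_c  ~ {h_c(f):.0e}")             # 1e-17 at 1 uHz
print(f"sigma_theta ~ {sigma_theta:.0e} rad")    # ~9e-18 rad, comparable
```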
Leptoquarks are ubiquitous in several extensions of the Standard Model and appear able to accommodate the universality-violation-driven $B$-meson-decay anomalies and the $(g-2)_\mu$ discrepancy interpreted as deviations from the Standard Model predictions. In addition, the search for lepton-flavour violation in the charged sector is, at present, a major research program that could also be facilitated by the dynamics generated by leptoquarks. In this work, we considered a rather wide framework of both scalar and vector leptoquarks as the generators of lepton-flavour violation in processes involving the tau lepton. We singled out its couplings to leptoquarks, thus breaking universality in the lepton sector, and we integrated out the leptoquarks at tree level, generating the corresponding dimension-6 operators of the Standard Model Effective Field Theory. In previous work (T. Husek, K. Monsálvez-Pozo and J. Portolés, DOI: 10.1007/JHEP01(2021)059), we obtained model-independent bounds on the Wilson coefficients of those operators contributing to lepton-flavour-violating hadronic tau decays and $\ell$--$\tau$ conversion in nuclei, with $\ell=e,\mu$. Hence, here we used those results to translate the bounds into the couplings of leptoquarks to the Standard Model fermions.
We study the impact of triple-leptoquark interactions on matter stability for two specific proton decay topologies that arise at the tree and one-loop level if and when they coexist. We demonstrate that the one-loop topology is much more relevant than the tree-level one for proton decay signatures, despite the usual loop-suppression factor. We subsequently present a detailed analysis of the triple-leptoquark interaction effects on proton stability within one representative scenario to support our claim, where the scenario in question simultaneously features a tree-level topology that yields the three-body proton decay $p \to e^+ e^+ e^-$ and a one-loop topology that induces the two-body proton decays $p \to \pi^0 e^+$ and $p \to \pi^+ \bar{\nu}$. We also provide a comprehensive list of the leading-order proton decay channels for all non-trivial cubic and quartic contractions involving three scalar leptoquark multiplets that generate triple-leptoquark interactions of our interest, where in the latter case one of the scalar multiplets is the Standard Model Higgs doublet.
We examine new aspects of leptoquark (LQ) phenomenology using effective field theory (EFT). We construct a complete set of leading effective operators involving SU(2)-singlet scalar LQs and the Standard Model (SM) fields up to dimension six. We show that, while the renormalizable LQ-lepton-quark interaction Lagrangian can address the persistent hints for physics beyond the SM in $B$ decays and in the measured anomalous magnetic moment of the muon, the higher-dimensional LQ effective operators may lead to interesting new effects associated with lepton number violation. These include the generation of one-loop and two-loop sub-eV Majorana neutrino masses, mediation of neutrinoless double-$\beta$ decay and novel LQ collider signals. For the latter, we focus on a third-generation LQ ($\phi_3$) in a framework with an approximate $Z_3$ generation symmetry and show that one class of the dimension-five LQ operators may give rise to a striking asymmetric same-charge $\phi_3 \phi_3$ pair-production signal, which leads to low-background same-sign di-lepton signals at the LHC. For example, if the LQ mass is around 1 TeV and the new physics scale is $\Lambda \sim 5$ TeV, then we expect about 5000 positively charged $\tau^+ \tau^+$ events via $pp \to \phi_3 \phi_3 \to \tau^+ \tau^+ + 2 \cdot j_b$ ($j_b = b$-jet), about 500 negatively charged $\tau^- \tau^-$ events with a signature $pp \to \phi_3 \phi_3 \to \tau^- \tau^- + 4 \cdot j + 2 \cdot j_b$ ($j=$ light jet) and about 50 positively charged $\ell^+ \ell^+$ events via $pp \to \ell^+ \ell^+ + 2 \cdot j_b + MET$ ($\ell = e,\mu,\tau$), at the 13 TeV LHC with an integrated luminosity of 300 fb$^{-1}$. It is interesting to note that, in the LQ EFT framework, the expected same-sign lepton signals have a rate several times larger than the QCD LQ-mediated opposite-sign lepton signals, $gg, q \bar q \to \phi_3 \phi_3^\star \to \ell^+ \ell^- + X$.
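For orientation, the event counts quoted above can be converted into the implied fiducial cross sections via $\sigma = N/\mathcal{L}$; the snippet below does this arithmetic for the 300 fb$^{-1}$ benchmark, using only numbers taken from the abstract.

    # Implied cross sections sigma = N / L for the quoted benchmark yields
    # at an integrated luminosity of 300 fb^-1.
    L_int = 300.0  # [fb^-1]
    for channel, n_events in [("tau+ tau+ + 2 b-jets", 5000),
                              ("tau- tau- + 4j + 2 b-jets", 500),
                              ("l+ l+ + 2 b-jets + MET", 50)]:
        print(f"{channel}: sigma ~ {n_events / L_int:.2f} fb")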
Multi-lepton signals provide a relatively clean and rich testing ground for new physics (NP) at the LHC and, in particular, for searching for lepton flavor universality violation (LFUV) effects mediated by new heavy states of an underlying TeV-scale NP. The potential sensitivity of 3rd-generation fermions (the top quark in particular) to TeV-scale NP, along with the persistent anomalies in B decays, the recently confirmed muon g-2 anomaly, as well as hints reported recently by ATLAS and CMS of unequal di-muon versus di-electron production, have led us to explore the effects of higher-dimensional $(qq)(\ell \ell)$ 4-Fermi operators involving 3rd-generation quarks and muons/electrons on multi-lepton + jets production at the LHC. I will focus on the "tail effects" of both flavor-changing $(q_3 q_{1,2})(\ell \ell)$ and flavor-diagonal $(q_3 q_3)(\ell \ell)$ scalar, vector and tensor contact interactions, which are generated by tree-level exchanges of multi-TeV heavy states, and discuss the sensitivity of the LHC and a future HL-LHC to the scales of these 4-Fermi terms, $\Lambda(q_3 q \ell \ell)$, via these $pp \to$ multi-leptons + jets channels. In particular, I will show that by applying a sufficiently high invariant-mass selection on the di-leptons from the $qq\ell\ell$ contact interaction and additional specific jet selections designed to minimize the SM background, one can obtain significantly better sensitivity than the current sub-TeV bounds on this type of NP.
The "4321" gauge models are promising extensions of the SM that give rise to the $U_1$ vector leptoquark solution to the $B$-physics anomalies. Both the gauge and fermion sectors of these UV constructions lead to a rich phenomenology currently accessible at the Large Hadron Collider. In this talk we describe some of the main LHC signatures and extract exclusion limits using Run 2 data. In addition, we also discuss a 4321 extension with a dark sector leading to a Majorana dark matter candidate and a coloured partner producing new signatures at the LHC.
Experimental hints for lepton flavor universality violation in beauty-quark decays, both in neutral- and charged-current transitions, require an extension of the Standard Model, for which scalar leptoquarks (LQs) are prime candidates. Besides, these same LQs can resolve the long-standing tension in the muon $g-2$ and the recently reported deviation in the electron $g-2$. These tantalizing flavor anomalies have discrepancies in the range of $2.5\sigma-4.2\sigma$, indicating that the Standard Model of particle physics may finally be cracking. In this talk, we propose a resolution to all these anomalies within a unified framework that sheds light on the origin of neutrino mass. In this model, the LQs that address the flavor anomalies run through the loops and generate neutrino mass at the two-loop order while satisfying all constraints from collider searches, including those from flavor physics.
No stone can be left unturned in the search for new physics beyond the Standard Model (BSM). Since no indication of new physics has been found yet, and the resources at hand are limited, we must devise novel avenues for discovery. We propose a Data-Directed Paradigm (DDP), whose principal objective is to direct dedicated analysis efforts towards the regions of data which hold the highest potential for BSM discoveries.
The DDP is a different search paradigm, in complete contrast but complementary to the currently dominant theory-driven blind-analysis search paradigm. It could reach discoveries that are currently blocked by the resource cost of the blind-analysis approach. After investing hundreds of person-years, impressive bounds on BSM scenarios have been set. However, this paradigm has also limited the number of searches conducted, leaving a large part of the data's potential unexplored. One representative example is the search for di-lepton resonances, where searches targeting exclusive regions of the data (di-lepton + X) are hardly ever conducted. By focusing on the data, the DDP allows one to rapidly identify whether the data in a given region exhibit significant deviations from a well-established property of the Standard Model (SM). Thus, ideally, an unlimited number of final states can be tested, expanding our discovery reach considerably.
Based on the work presented in [1] and [2], we propose developing the DDP for two SM properties. The first is the fact that, in the absence of resonances, most invariant-mass distributions are smoothly falling. Following the di-lepton example, we propose identifying which of the many di-lepton + X selections is most likely to hide a resonance (a minimal illustration of this bump-hunting idea is sketched after the references below). The second property is the flavour symmetry of the SM: in the absence of BSM physics, the LHC data should be approximately symmetric under the replacement of prompt electrons with prompt muons. Once consolidated, we will conduct the two DDP searches and explore regions of the ATLAS data that might otherwise remain unexplored.
The DDP search paradigm and its suggested realizations will be discussed.
[1] S. Volkovich, F. De Vito Halevy, S. Bressler, "A data-directed paradigm for BSM searches: the bump-hunting example", Eur. Phys. J. C 82 (2022) 3, 265.
[2] M. Birman, B. Nachman, R. Sebbah, G. Sela, O. Turetz, S. Bressler, "Data-Directed Search for New Physics based on Symmetries of the SM", arXiv:2203.07529, submitted for publication.
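As referenced above, here is a minimal toy version of the bump-hunting ingredient of the DDP, written for illustration only; the toy spectrum, binning and injected signal are all hypothetical and are not taken from [1].

    # Toy DDP-style bump hunt: fit a smoothly falling shape to an
    # invariant-mass spectrum and flag the most significant local excess.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(42)

    # Toy spectrum: smoothly falling background plus a small injected resonance.
    edges = np.linspace(200.0, 2000.0, 91)            # invariant-mass bins [GeV]
    centers = 0.5 * (edges[:-1] + edges[1:])
    bkg_exp = 2e5 * np.exp(-centers / 250.0)          # expected background yield
    sig_exp = 300.0 * np.exp(-0.5 * ((centers - 900.0) / 30.0) ** 2)
    data = rng.poisson(bkg_exp + sig_exp)

    # Smoothly falling hypothesis (no resonance): A * exp(-m / b).
    def smooth(m, A, b):
        return A * np.exp(-m / b)

    popt, _ = curve_fit(smooth, centers, data, p0=(2e5, 250.0),
                        sigma=np.sqrt(np.maximum(data, 1.0)))
    fit = smooth(centers, *popt)

    # Per-bin local significance; the largest value flags the selection or
    # region that most deserves a dedicated analysis.
    z = (data - fit) / np.sqrt(fit)
    i = int(np.argmax(z))
    print(f"largest excess: {z[i]:.1f} sigma near m = {centers[i]:.0f} GeV")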
We present an overview of searches for new physics with top and bottom quarks in the final state, using proton-proton collision data collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV. The results cover non-SUSY based extensions of the SM, including heavy gauge bosons or excited third generation quarks. Decay channels to vector-like top partner quarks, such as T', are also considered. We explore the use of jet substructure techniques to reconstruct highly boosted objects in events, enhancing the sensitivity of these searches.
The Dark Higgs model is an extension of the Standard Model that describes the phenomenology of dark matter while respecting the SM gauge symmetries. This new approach opens regions of parameter space that are less covered by searches optimized for simpler models of dark matter. In this talk, we present such searches from CMS, focusing on the recent results obtained using the full Run-II dataset collected at the LHC.
Searches in CMS for dark matter in final states with invisible particles recoiling against visible states are presented. Various topologies and kinematic variables are explored, including jet substructure as a means of tagging heavy bosons. In this talk, we focus on the recent results obtained using the full Run-II dataset collected at the LHC.
The LHCb detector at the LHC offers unique coverage of forward rapidities. The detector also has a flexible trigger that enables low-mass states to be recorded with high efficiency, and a precision vertex detector that enables excellent separation of primary interactions from secondary decays. This allows LHCb to make significant (and world-leading) contributions in these regions of phase space to the search for long-lived particles predicted by dark sectors which accommodate dark matter candidates. A selection of results from searches for heavy neutral leptons, dark photons, hidden-sector particles, and dark matter candidates produced in heavy-flavour decays, among others, will be presented, alongside the potential for future measurements in some of these final states.
The presence of a non-baryonic Dark Matter (DM) component in the Universe is inferred from the observation of its gravitational interaction. If Dark Matter interacts weakly with the Standard Model (SM) it could be produced at the LHC. The ATLAS experiment has developed a broad search program for DM candidates, including resonance searches for the mediator which would couple DM to the SM, searches with large missing transverse momentum produced in association with other particles (light and heavy quarks, photons, Z and H bosons, as well as additional heavy scalar particles) called mono-X searches and searches where the Higgs boson provides a portal to Dark Matter, leading to invisible Higgs decays. The results of recent searches on 13 TeV pp data, their interplay and interpretation will be presented.
The discovery of dark matter is one of the challenges of high-energy physics in the collider era. Many beyond-the-Standard-Model theories predict dark matter candidates produced in association with a single top quark in the final state, the so-called mono-top signature. A search for events with one top quark and missing transverse energy in the final state is presented. This analysis explores the fully hadronic decay of the top quark, requiring large missing transverse energy and a boosted large-radius jet in the final state. A boosted decision tree is used to discriminate the background (mostly coming from top pair production and vector boson production in association with jets) from mono-top signal events. Two alternative interpretations of the obtained results are considered, namely the production of a generic dark matter particle and the single production of a vector-like T quark. The analysis makes use of data collected with the ATLAS experiment at $\sqrt{s}$ = 13 TeV during LHC Run 2 (2015-2018), corresponding to an integrated luminosity of 139 fb$^{-1}$. This analysis is expected to improve the existing limits on the mass of the dark matter candidate from the considered model. New exclusion limit contours in the model parameter space are also foreseen.
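For readers unfamiliar with the classification step, the following is a generic sketch of training a boosted decision tree to separate simulated signal from background; the two toy features and all numbers are hypothetical stand-ins (loosely evoking missing transverse energy and a jet-substructure variable), not the analysis inputs.

    # Generic BDT signal/background discrimination sketch with toy features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 20000
    # Toy "background" and "signal" with overlapping feature distributions.
    bkg = rng.normal(loc=[200.0, 0.5], scale=[80.0, 0.2], size=(n, 2))
    sig = rng.normal(loc=[350.0, 0.8], scale=[80.0, 0.2], size=(n, 2))
    X = np.vstack([bkg, sig])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    bdt.fit(X_tr, y_tr)
    print("ROC AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))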
Belle II has unique reach for a broad class of models that postulate the existence of dark matter particles with MeV-GeV masses. This talk presents recent world-leading physics results from Belle II searches for dark Higgstrahlung and invisible $Z^{\prime}$ decays, as well as the near-term prospects for other dark-sector searches.
The Belle II experiment is taking data at the asymmetric SuperKEKB collider, which operates at the Y(4S) resonance. The vertex detector is composed of an inner two-layer pixel detector (PXD) and an outer four-layer double-sided strip detector (SVD). The SVD-standalone tracking allows the reconstruction and identification, through dE/dx, of low transverse momentum tracks. The SVD information is also crucial to extrapolate the tracks to the PXD layers, for efficient online PXD-data reduction.
A deep knowledge of the system has been gained since the start of operations in 2019 by assessing the high-quality and stable reconstruction performance of the detector. Very high hit efficiency and a large signal-to-noise ratio are monitored via online data-quality plots. The good cluster-position resolution is estimated using the unbiased residual with respect to the track, and it is in reasonable agreement with expectations.
Currently the SVD average occupancy, in its most exposed part, is still below 0.5%, well below the estimated limit for acceptable tracking performance. With the higher machine backgrounds expected as the luminosity increases, the excellent hit-time information will be exploited for background rejection, improving the tracking performance. The front-end chip (APV25) is operated in "multi-peak" mode, which reads six samples. To reduce background occupancy, trigger dead time and data size, a 3/6-mixed acquisition mode based on the timing precision of the trigger has been successfully tested in physics runs.
Finally, the SVD dose is estimated from the correlation of the SVD occupancy with the dose measured by the diamonds of the radiation-monitoring and beam-abort system. The first radiation damage effects are measured in the sensor current and strip noise, although they do not yet affect the performance.
Belle II is a new-generation B-factory experiment operating at the intensity frontier of the SuperKEKB accelerator, dedicated to exploring new physics beyond the standard model of elementary particles in the flavor sector. Belle II started data-taking in April 2018, using a synchronous data acquisition (DAQ) system based on pipelined trigger flow control. The Belle II DAQ system is designed to handle a 30 kHz trigger rate under the assumption of a raw event size of 1 MB. Because the event size and rate could exceed the design values depending on the background conditions, and because maintaining the current readout system over the entire Belle II operation period is expected to be difficult, we decided to upgrade the Belle II DAQ readout system with state-of-the-art technology. A PCI-Express-based new-generation readout board (PCIe40), originally developed for the upgrades of the LHCb and ALICE experiments, has been adopted for the upgrade of the Belle II DAQ system. PCIe40 can connect to a maximum of 48 front-end electronics boards through multi-gigabit serial links. The PCI-Express hard-IP-based direct memory access architecture, the newly designed timing and trigger distribution system, and the slow control system make the Belle II readout a compact system. Three out of seven sub-detectors of the Belle II experiment have been operated with the upgraded DAQ system. In this contribution we present the development of firmware and software for the new Belle II DAQ system, and its operational performance during physics data-taking.
The Belle II experiment at the SuperKEKB e+e- collider started data-taking in 2018 with the aim of collecting 50 ab$^{-1}$ over the next several years. The detector is working well with very good performance, but the first years of running have revealed novel challenges and indicate the need for accelerator consolidation and upgrades to reach the target luminosity of $6\times10^{35}$ cm$^{-2}$s$^{-1}$, which might require a long shutdown in the 2026-2027 timeframe. To fully exploit the physics opportunities, and to ensure reliable and efficient detector operation, Belle II has started to define a detector upgrade program to make the various sub-detectors more robust and performant even in the presence of high backgrounds, facilitating SuperKEKB running at high luminosity.
This upgrade program will possibly include the replacement of some readout electronics, the upgrade of some detector elements, and may also involve the substitution of entire detector sub-systems such as the vertex detector. The process has started with the submission of Expressions Of Interest that are being reviewed internally and will proceed towards the preparation of a Conceptual Design Report currently planned for the beginning of 2023. This paper will cover the full range of proposed upgrade ideas and their development plans.
The addition of a Forward Calorimeter (FoCal) to the ALICE experiment is proposed for LHC Run 4 to provide unique constraints on the low-x gluon structure of protons and nuclei via forward measurements of direct photons. A new high-resolution electromagnetic Si-W calorimeter using both Si-pad and Si-pixel layers is being developed to discriminate single photons from pairs of photons originating from $\pi^0$ decays. A conventional sampling hadron calorimeter is foreseen for jet measurements and the isolation of direct photons. In this presentation, we will report on results from test beam campaigns in 2019 and 2021 at DESY and CERN with Si-pad and pixel modules, a first prototype for the hadronic calorimeter, and a full-pixel calorimetry prototype based on ALPIDE sensors.
After the successful installation and first operation of the upgraded Inner Tracking System (ITS2), which consists of about 10 m$^2$ of monolithic silicon pixel sensors, ALICE is pioneering the usage of bent, wafer-scale pixel sensors for the ITS3 for Run 4. Sensors larger than typical reticle sizes can be produced using the technique of stitching. At thicknesses of about 30 µm, the silicon is flexible enough to be bent to radii of the order of 1 cm. By cooling such sensors with a forced air flow, it becomes possible to construct truly cylindrical layers which consist practically only of the silicon sensors. The reduction of the material budget and the improved pointing resolution will allow new measurements, in particular of heavy-flavour decays and electromagnetic probes. In this presentation, we will report on the sensor developments, the performance of bent sensors in test beams, and the mechanical studies on truly cylindrical layers.
ALICE 3 is proposed as the next-generation experiment to address unresolved questions about the quark-gluon plasma by precise measurements of heavy-flavour probes as well as electromagnetic radiation in heavy-ion collisions in LHC Runs 5 and 6. In order to achieve the best possible pointing resolution a concept for the installation of a high-resolution vertex tracker in the beampipe is being developed. It is surrounded by a silicon-pixel tracker covering roughly 8 units of pseudorapidity. To achieve the required particle identification performance, a combination of a time-of-flight system and a Ring-Imaging Cherenkov detector is foreseen. Further detectors, such as an electromagnetic calorimeter, a muon identifier, and a dedicated forward detector for ultra-soft photons, are being studied. In this presentation, we will explain the detector concept and its physics reach as well as discuss the R&D challenges.
In this contribution, the nuclear modification factors ($R_\mathrm{AA}$) of prompt charm hadrons and of heavy-flavour hadrons decaying to leptons, measured in Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV by the ALICE Collaboration, are presented. The measurement of heavy-flavour leptons in Xe-Xe collisions is also discussed. Heavy quarks are a very suitable probe to investigate the quark-gluon plasma (QGP) produced in heavy-ion collisions, since they are mainly produced in hard-scattering processes and hence on shorter timescales than the QGP formation. Measurements of charm-hadron production in nucleus-nucleus collisions are therefore useful to study the properties of in-medium charm-quark energy loss via comparison with theoretical models. Moreover, the comparison of different colliding systems provides insights into the dependence on the collision geometry.
Models describing heavy-flavour transport and energy loss in a hydrodynamically expanding QGP also require a precise modelling of the in-medium hadronisation of heavy quarks, which is investigated via the measurement of prompt $\mathrm{D_s^+}$ mesons and $\Lambda_\mathrm{c}^{+}$ baryons.
In addition, the measurement of the azimuthal anisotropy of strange and non-strange D mesons is discussed. The second harmonic coefficient provides information about the degree of thermalisation of charm quarks in the medium, while the third is sensitive to event-by-event fluctuations in the initial stage of the collision.
A thorough systematic comparison of experimental measurements with phenomenological model calculations will be performed in order to disentangle different model contributions and provide important constraints on the charm-quark diffusion coefficient $D_s$ in the QGP.
We report the first measurement of the azimuthal angular correlation between jets and $D^0$ mesons in pp and PbPb collisions. The measurement is performed using jets with $p_\mathrm{T}> 60$ GeV and $D^0$ mesons with $p_\mathrm{T} > 4$ GeV. The azimuthal angle difference between jets and $D^0$ mesons ($0<\Delta\phi<\pi$) is sensitive to medium-induced charm diffusion, charm-quark energy loss, and possible rare large-angle scattering between charm quarks and the quasi-particles of the QGP. We also report the radial profile of the charm quark with respect to the jet axis, measured differentially in centrality and $D^0$ $p_\mathrm{T}$. This analysis is performed with high-statistics Run 2 data collected by the CMS detector.
Heavy quarks are primarily produced via initial hard scatterings, and thus carry information about the early stages of the Quark-Gluon Plasma (QGP). Measurements of the azimuthal anisotropy of the final-state heavy-flavor hadrons provide information about the initial collision geometry and its fluctuations and, more importantly, about the mass dependence of energy loss in the QGP. Due to the larger bottom-quark mass compared to the charm-quark mass, separate measurements of charm and bottom hadron azimuthal anisotropy can shed new light on the dependence of the heavy-quark interaction with the medium. Because of the high branching ratio and large $D^0$ mass, measurements of $D^0$ mesons coming from $B$ hadron decays (nonprompt $D^0$) can cover a broad kinematic range and are a good proxy for the parent bottom hadrons. In this talk we report measurements of the elliptic ($v_2$) and triangular ($v_3$) azimuthal anisotropy coefficients of prompt $D^0$ and, for the first time, of nonprompt $D^0$ mesons in PbPb collisions at $\sqrt{s_{_{\mathrm{NN}}}} =$ 5.02 TeV. The measurements are performed as functions of transverse momentum $p_\mathrm{T}$ in three centrality classes, from central to midcentral collisions. Compared to the prompt $D^0$ results, the nonprompt $D^0$ $v_2$ coefficients are systematically lower but show a similar dependence on $p_\mathrm{T}$ and centrality. A non-zero $v_3$ coefficient of the nonprompt $D^0$ is observed. The obtained results are compared with theoretical predictions. The comparison could provide new constraints on the theoretical description of the interaction between heavy quarks and the medium.
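For reference, the $v_n$ coefficients discussed in the contributions above are the standard Fourier harmonics of the azimuthal particle distribution with respect to the symmetry-plane angles $\Psi_n$:
\[
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[n\left(\varphi - \Psi_n\right)\right],
\qquad v_n = \left\langle \cos\!\left[n\left(\varphi - \Psi_n\right)\right] \right\rangle,
\]
with $v_2$ the elliptic and $v_3$ the triangular coefficient.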
In this contribution, the final measurements of the centrality dependence of the $R_{\rm AA}$ of non-prompt $\mathrm{D}^0$ and of electrons from beauty hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will be presented. These measurements provide important constraints on the in-medium mass-dependent energy loss and hadronization of the beauty quark. The integrated non-prompt $\mathrm{D}^0$ $R_{\rm AA}$ will be presented for the first time and will be compared with the prompt $\mathrm{D}^0$ one. This comparison will shed light on possible differences in shadowing effects between charm and beauty quarks. In addition, the first measurements of non-prompt $\mathrm{D}_{s}$ production in central and semi-central Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will be discussed. The non-prompt $\mathrm{D}_{s}$ measurements provide additional information on the production and hadronization of $\mathrm{B}_{s}$ mesons. Finally, the first measurement of the elliptic flow of non-prompt D mesons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will also be discussed. It will help to further investigate the degree of thermalization of the beauty quark in the hot and dense QCD medium.
Measurements of jet constituent distributions for light- and heavy-flavor jets have been used successfully for experimental QCD studies in high-energy pp collisions at the LHC. These studies are now extended to explore the flavor dependence of the jet quenching phenomenon. Jet quenching, one of the signatures of the quark-gluon plasma, is well established through experimental measurements at RHIC and the LHC. However, the details of the expected dependence of jet-medium interactions on the flavor of the parton initiating the shower are not yet settled. This talk presents the first b-jet shape measurements from 5 TeV PbPb and pp collisions collected by the CMS experiment. Comparisons with the jet shapes of inclusive jets, produced predominantly by light quarks and gluons, allow experimental observation of a "dead cone" effect suppressing the in-jet transverse momenta of constituents at small radial distance R from the jet axis. A similar comparison at large distances provides insights into the role of the parton mass in the energy loss and a possible mass dependence of the medium response.
The beauty quark is one of the best probes of the quark-gluon plasma. Its large mass allows probing the QGP transport properties in the heavy-flavor sector through energy loss and diffusion. However, the hadronization of beauty is not as well understood as that of charm, due to the smaller cross-section. Clarifying the hadronization mechanism is crucial for understanding the transport properties of the QGP extracted from beauty hadron (and decay product) spectra. In this talk, we will present new results on the nuclear modification factors of $B^0_s$ and $B^+$ mesons and their yield ratios in pp and PbPb collisions at 5.02 TeV, using the data recorded with the CMS detector in 2017 and 2018. The accuracy is significantly improved with respect to the previously published results. The reported B-meson nuclear modification factors over an extended transverse momentum range will provide important information about the diffusion of the beauty quark and the flavor dependence of in-medium energy loss. The $B^0_s/B^+$ yield ratio in pp and PbPb collisions can shed new light on the mechanisms of beauty recombination in vacuum and in medium. It will also provide important input for understanding the hadronization mechanism of the beauty quark, testing the QCD factorization theorem at LHC energies.
We have investigated the many-body equations of $D$ and $\bar{B}$ mesons in a thermal medium by applying an effective field theory based on chiral and heavy-quark spin symmetries. Exploiting these symmetries within the kinetic theory, we have derived an off-shell Fokker-Planck equation which incorporates information of the full spectral function of these states.
I will present the latest results on heavy-flavor transport coefficients below the chiral restoration temperature. I will also detail the origin of the in-medium reactions which contribute to the heavy-meson thermal width and energy loss, including the soft-pion emission (Bremsstrahlung) process.
This talk will cover the latest searches for non-resonant double Higgs boson production at CMS and their interpretations in terms of the Higgs self-coupling. The talk will include the latest combination(s) of HH search channels.
The measurement of the pair production of Higgs bosons is one of the key goals of the LHC. In addition, beyond-the-standard-model theories involving extra spatial dimensions predict resonances with large branching fractions to a pair of Higgs bosons and negligible branching fractions to light fermions. We present an overview of searches for resonant and non-resonant Higgs boson pair production at high transverse momentum, using proton-proton collision data collected with the CMS detector at the CERN LHC. These results use novel analysis techniques to identify and reconstruct the highly boosted final states that arise in these topologies.
In the Standard Model, the ground state of the Higgs field is not found at zero but instead corresponds to one of the degenerate solutions minimising the Higgs potential. In turn, this spontaneous electroweak symmetry breaking provides a mechanism for the mass generation of nearly all fundamental particles. The Standard Model makes a definite prediction for the Higgs boson self-coupling and thereby the shape of the Higgs potential. Experimentally, both can be probed through the production of Higgs boson pairs (HH), a rare process that presently receives a lot of attention at the LHC. In this talk, the latest HH searches by the ATLAS experiment are reported, with emphasis on the results obtained with the full LHC Run 2 dataset at 13 TeV. In the case of non-resonant HH searches, results are interpreted both in terms of sensitivity to the Standard Model and as limits on the Higgs boson self-coupling.
The most precise measurements of single and double Higgs boson production cross sections are obtained from a combination of measurements performed in different Higgs boson production and decay channels. While double Higgs production can be used to directly constrain the Higgs boson self-coupling, this parameter can be also constrained by exploiting higher-order electroweak corrections to single Higgs boson production. A combined measurement of both results yields the overall highest precision, and reduces model dependence by allowing for the simultaneous determination of the single Higgs boson couplings. Results for this combined measurement are presented based on pp collision data collected at a centre-of-mass energy of 13 TeV with the ATLAS detector.
Recent HL-LHC studies performed by CMS within the Snowmass activities are presented. The updates cover a range of physics topics, from Higgs boson to other SM processes.
The large dataset of about 3 $\rm ab^{-1}$ that will be collected at the High Luminosity LHC (HL-LHC) will be used to measure Higgs boson processes in detail. Studies based on current analyses have been carried out to understand the expected precision and limitations of these measurements. The large dataset will also allow for better sensitivity to di-Higgs processes and the Higgs boson self-coupling. This talk will present the prospects for Higgs and di-Higgs results with the ATLAS detector at the HL-LHC.
We study the Higgs boson decays $h \to c\bar{c}$, $b\bar{b}$, $b\bar{s}$, $\gamma\gamma$ and $gg$ in the Minimal Supersymmetric Standard Model (MSSM) with general quark flavor violation (QFV), identifying $h$ with the Higgs boson with a mass of 125 GeV. We compute the widths of the decays $h \to c\bar{c}$, $b\bar{b}$, $b\bar{s}$ ($s\bar{b}$) at full one-loop level in the MSSM with QFV. For the decays $h \to \gamma\gamma$ and $h \to gg$ we compute the widths at NLO QCD level. We perform a systematic MSSM parameter scan respecting all relevant constraints, i.e. theoretical constraints from vacuum stability conditions and experimental constraints, such as those from K- and B-meson data and electroweak precision data, as well as limits on supersymmetric (SUSY) particle masses and the 125 GeV Higgs boson data from the LHC experiments. Here DEV($h \to XY$) denotes the deviation of the decay width $\Gamma(h \to XY)$ in the MSSM from the SM prediction, DEV($h \to XY$) $= \Gamma(h \to XY)_{\rm MSSM} / \Gamma(h \to XY)_{\rm SM} - 1$. From the parameter scan, we find the following:
(1) DEV($h \to c\bar{c}$) and DEV($h \to b\bar{b}$) can be very large simultaneously: DEV($h \to c\bar{c}$) can be as large as about $\pm 60\%$ and DEV($h \to b\bar{b}$) as large as about $\pm 20\%$.
(2) The QFV decay branching ratio BR($h \to b\bar{s} / \bar{b}s$) can be as large as about 0.2% in the MSSM; it is almost zero in the SM. The sensitivity of the ILC (250 + 500 + 1000) to this branching ratio could be about 0.1% at $4\sigma$ signal significance.
(3) DEV($h \to \gamma\gamma$) and DEV($h \to gg$) can be large simultaneously: DEV($h \to \gamma\gamma$) can be as large as about $+4\%$ and DEV($h \to gg$) as large as about $-15\%$.
(4) There is a very strong correlation between DEV($h \to \gamma\gamma$) and DEV($h \to gg$), because the stop-loop (stop-scharm mixture loop) contributions dominate both deviations.
(5) The deviation of the width ratio $\Gamma(h \to \gamma\gamma)/\Gamma(h \to gg)$ in the MSSM from the SM value can be as large as about $+20\%$.
(6) All of these large deviations in the $h$ decays are due to large scharm-stop mixing and large stop/scharm-involved trilinear couplings $T_{U23}$, $T_{U32}$, $T_{U33}$, as well as large sstrange-sbottom mixing and large sstrange/sbottom-involved trilinear couplings $T_{D23}$, $T_{D32}$, $T_{D33}$.
(7) Future lepton colliders such as the ILC, CLIC, CEPC and FCC-ee can observe such large deviations from the SM at high signal significance.
(8) Should the deviation pattern shown here indeed be observed at the lepton colliders, it would strongly suggest the discovery of QFV SUSY (the MSSM with QFV).
This work updates the papers listed below and contains many new findings:
Phys. Rev. D 91 (2015) 015007 [arXiv:1411.2840 [hep-ph]];
JHEP 1606 (2016) 143 [arXiv:1604.02366 [hep-ph]];
IJMP A34 (2019) 1950120 [arXiv:1812.08010 [hep-ph]];
PoS(EPS-HEP2021)594, 2021 [arXiv:2111.02713 [hep-ph]].
In the absence of direct observations of physics beyond the Standard Model (BSM) at the LHC, the interpretation of Standard Model measurements in the framework of an Effective Field Theory (EFT) represents the most powerful tool to identify BSM phenomena in tiny deviations of the measurements from the SM predictions, to interpret them in terms of generic new interactions, or to place model-independent constraints on new physics scenarios. This talk presents various EFT interpretations of individual and combined measurements in the Higgs sector by the ATLAS experiment.
The Deep Underground Neutrino Experiment (DUNE), a next-generation long-baseline neutrino oscillation experiment, is a powerful tool to perform low-energy physics searches. DUNE will be uniquely sensitive to the electron-neutrino-flavour component of the burst of neutrinos expected from the next Galactic core-collapse supernova, and also capable of detecting solar neutrinos. DUNE will have four modules with 70 kton of liquid argon mass in total, placed 1.5 km underground at the Sanford Underground Research Facility in the USA. These modules are being designed exploiting different liquid argon time projection chamber technologies, based on physics requirements that take into account the particularities of the low-energy physics searches.
Supernova (SN) explosions are the most powerful cosmic factories of all-flavor, MeV-scale neutrinos. The presence of a sharp time structure during the first emission phase, the so-called neutronization burst in the electron-neutrino time distribution, makes this channel a very powerful one. Large liquid argon underground detectors, like the future Deep Underground Neutrino Experiment (DUNE), will provide precision measurements of the time dependence of the electron neutrino fluxes. In this contribution, I derive a new neutrino mass sensitivity attainable at the future DUNE far detector, obtained by measuring the time-of-flight delay in the SN neutrino signal from a future SN collapse in our galactic neighborhood. A comparison of the sensitivities achieved for the two neutrino mass orderings is discussed, as well as the effects due to propagation in Earth matter.
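The kinematic relation underlying such time-of-flight analyses is standard and is quoted here only for context (the talk's actual statistical treatment is more involved): a neutrino of mass $m_\nu$ and energy $E_\nu$ travelling a distance $D$ arrives later than a massless particle by
\[
\Delta t \simeq \frac{D}{2c}\left(\frac{m_\nu c^2}{E_\nu}\right)^2
\approx 5.1\,\mathrm{ms}\,\left(\frac{m_\nu}{1\,\mathrm{eV}}\right)^2
\left(\frac{10\,\mathrm{MeV}}{E_\nu}\right)^2 \left(\frac{D}{10\,\mathrm{kpc}}\right),
\]
so the sharp neutronization burst acts as the timing reference against which this delay can be measured.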
The ProtoDUNE single-phase detector (ProtoDUNE-SP) is a prototype liquid argon time projection chamber (LArTPC) for the first far detector module of the Deep Underground Neutrino Experiment (DUNE). ProtoDUNE-SP is installed at the CERN Neutrino Platform. Between October 10 and November 11, 2018, ProtoDUNE-SP recorded approximately 4 million events in a beam that delivers charged pions, kaons, protons, muons and electrons with momenta in the range 0.3 GeV/c to 7 GeV/c. After the beam runs ended, ProtoDUNE-SP continued to collect cosmic-ray and calibration data until July 2020. In this talk, we will review the results of analyzing the beam and cosmic-ray data, including detector calibration, hadron-argon cross-section measurements and the seasonal variation of the cosmic-ray muon rate.
The Deep Underground Neutrino Experiment (DUNE) is part of the next generation of neutrino oscillation experiments that seek to definitively answer key questions in the field. It will utilize four 17-kt modules of Liquid Argon Time Projection Chambers (LArTPCs), enabling mm spatial resolution for unprecedented sensitivity to neutrino oscillation parameters as well as for studies related to proton decay and supernova neutrinos. For this purpose, a newly proposed Vertical Drift (VD) configuration is planned for the second DUNE module, in contrast to the Horizontal Drift (HD) configuration of the first module. The VD detector features a suspended cathode dividing the TPC into two vertically stacked drift volumes above and below the cathode, with an electric field of 500 V/cm. Unlike the HD design, where a multi-wire-plane readout is employed, the anodes here consist of a grid of double-sided perforated PCBs. As electrons pass through the perforations, charge is induced on and collected by parallel strips etched on different layers of the PCBs and oriented in different configurations for each layer. As part of the prototyping of such a detector, a coldbox demonstrator housed in the NP04 platform at CERN is collecting cosmic data. The prototypes will seek to ensure favorable readout conditions as well as test different designs for the PCBs and strip orientations. In parallel, simulation studies are underway for the far detector module to assess various performance metrics related to selection and reconstruction efficiency. In this talk, I shall provide an overview of these efforts, with an emphasis on the analysis of cosmic data from the coldbox demonstrators and its comparison with simulation, as well as the development of a deep-learning-based neutrino flavor tagger to maximize sensitivity to the oscillation measurements and help DUNE achieve its primary physics goals.
Neutrino oscillations in matter offer a novel path to investigate new physics. The most recent data from the two long-baseline accelerator experiments, NO$\nu$A and T2K, show a discrepancy in the standard 3-flavor scenario. Along the same lines, we explore the next generation of long-baseline experiments: T2HK and DUNE. We investigate the sensitivities to the relevant NSI couplings ($|\epsilon_{e \mu}|$, $|\epsilon_{e \tau}|$) and the corresponding CP phases ($\phi_{e \mu}$ and $\phi_{e \tau}$). While both experiments are sensitive to non-standard interactions (NSI) of the flavor-changing type arising from the $e-\mu$ and $e-\tau$ sectors, we show that DUNE is more sensitive to these NSI parameters than T2HK. At the same time, we explore the impact of non-standard neutrino interactions on the sensitivity to the standard CP phase $\delta_{CP}$ and the atmospheric mixing angle $\theta_{23}$ in the normal as well as the inverted hierarchy. Our analysis also exhibits the difference in probabilities between the two experiments when NSI are included.
The experimental observation of neutrino oscillations was the first clear hint of physics beyond the Standard Model (SM). The SM needs an extension to incorporate the neutrino masses and mixing, often called beyond-SM (BSM) physics. Models describing BSM physics usually come with some additional unknown couplings of neutrinos, termed Non-Standard Interactions (NSIs) [1]. The idea of NSI was initially proposed by Wolfenstein [2], who explored how a non-standard coupling of neutrinos to a vector field can give rise to matter effects in neutrino oscillations. Furthermore, there is also the intriguing prospect of neutrinos coupling to a scalar field, called scalar NSI [3, 4]. The effect of this type of scalar NSI appears as a medium-dependent correction to the neutrino masses, instead of as a matter potential. Hence scalar NSI may offer unique phenomenology in neutrino oscillations.
In this work, we have performed a synergy study of the effects of scalar NSI at various proposed Long Baseline (LBL) experiments, viz. DUNE [5], T2HK [6] and T2HKK [7]. As the effect of scalar NSI scales linearly with the environmental matter density, it is sensitive to matter-density variations, which makes LBL experiments suitable candidates for probing its effects (a schematic form of the modified oscillation Hamiltonian is sketched after the references below). We found that the effect of scalar NSI on the oscillation probabilities at LBL experiments is notable. In addition, scalar NSI can significantly affect the CP-violation sensitivities as well as the $\theta_{23}$ octant sensitivities of these LBL experiments. Finally, we have also performed a combined sensitivity study of these experiments towards constraining the scalar NSI parameters.
References
[1] O. G. Miranda and H. Nunokawa, Non standard neutrino interactions: current status and future prospects, New Journal of Physics 17 (2015) 095002.
[2] L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D 17 (1978) 2369.
[3] S.-F. Ge and S. J. Parke, Scalar Nonstandard Interactions in Neutrino Oscillation, Phys. Rev. Lett. 122 (2019) 211801 [1812.08376].
[4] K. Babu, G. Chauhan and P. Bhupal Dev, Neutrino nonstandard interactions via light scalars in the Earth, Sun, supernovae, and the early Universe, Phys. Rev. D 101 (2020) 095029 [1912.13488].
[5] DUNE collaboration, Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume IV Far Detector Single-phase Technology, JINST 15 (2020) T08010 [2002.03010].
[6] Hyper-Kamiokande Proto-Collaboration, Physics potential of a long-baseline neutrino oscillation experiment using a J-PARC neutrino beam and Hyper-Kamiokande, PTEP 2015 (2015) 053C02 [1502.05199].
[7] Hyper-Kamiokande collaboration, Physics potentials with the second Hyper-Kamiokande detector in Korea, PTEP 2018 (2018) 063C01 [1611.06118].
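As referenced above, the qualitative difference between vector and scalar NSI can be summarised schematically, following the description in [2-4] (the notation below is illustrative, not the exact parametrisation used in the analysis):
\[
H_{\rm vector} \simeq \frac{M M^\dagger}{2E} + \sqrt{2}\,G_F n_e(x)\left(\delta_{\alpha e}\delta_{\beta e} + \epsilon_{\alpha\beta}\right),
\qquad
H_{\rm scalar} \simeq \frac{(M + \delta M)(M + \delta M)^\dagger}{2E} + V_{\rm SI},
\]
with $\delta M_{\alpha\beta} \propto n_f(x)$, so the scalar contribution enters as a density-dependent shift of the neutrino mass matrix rather than as a matter potential.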
The measurement of the matter/antimatter asymmetry in the leptonic sector is one of the highest priorities of the particle physics community in the coming decades. The ESSnuSB collaboration proposes to design a long-baseline experiment based on the European Spallation Source (ESS) at Lund in Sweden. This experiment will be able to measure the $\delta_{CP}$ parameter with unprecedented sensitivity, thanks to a very intense neutrino superbeam and to the observation of the $\nu_\mu \to \nu_e$ oscillation at the second oscillation maximum. To reach this goal, the ESS facility will be upgraded to provide an additional 5 MW proton beam by doubling the LINAC pulse frequency from 14 Hz to 28 Hz. The pulse time width will be reduced from 2.86 ms to 1.3 microseconds by an accumulator ring, and the beam will be shared in four parts by a beam switchyard before entering the target station. The produced neutrino superbeam will be sent to a large Far Detector with a 538 kt fiducial mass, based on water Cherenkov technology.
In this talk, a global overview of the project and its physics potential will be given, and additional possibilities offered by this high-intensity facility for complementary R&D activities will also be discussed.
The nuSTORM facility will provide $\nu_e$ and $\nu_\mu$ beams from the decay of low-energy muons confined within a storage ring. The central momentum of the muon beam is variable, while the momentum spread is limited. The resulting neutrino and anti-neutrino energy spectra can be precisely calculated from the muon beam parameters, and since the decay of the stored muons is well separated in time from that of their parent pions, wrong-flavour neutrino backgrounds can be eliminated. nuSTORM can contribute to this effort by providing the ultimate experimental program of scattering measurements. The cross section for scattering on complex nuclei is sensitive to energy and momentum transfers. Data with both muons and electrons in the final state are therefore very valuable. Sensitivity to physics beyond the Standard Model (BSM) is provided by nuSTORM's unique features, allowing sensitive searches for short-baseline flavour transitions, light sterile neutrinos, non-standard interactions, and non-unitarity. In synergy with the scattering program, new physics searches would also profit from measurements of exclusive final states, allowing BSM neutrino interactions to be probed in neutrino-electron scattering and by searching for exotic final states. The status of the development of nuSTORM will be reviewed in the context of the renewed effort to develop high-brightness stored muon beams and as a route to very-high-energy lepton-antilepton collisions at a muon collider.
LHCb has collected the world's largest sample of charmed hadrons. This sample is used to measure $D^0 -\overline{D}^0$ mixing and to search for $C\!P$ violation in mixing and interference. New measurements from several decay modes are presented, as well as prospects for future sensitivities.
CP violation in charm meson decays is expected to be small, and an observation of a large CP asymmetry could indicate new physics. We report measurements of branching fractions and CP asymmetries in $D \to K h \pi \pi^{0}$ $(h = \pi, K)$ decays, of the T-odd asymmetry, sensitive to CP violation, in the $D^{0} \to K^{0}_{S}K^{0}_{S}\pi^{+}\pi^{-}$ decay, and of $D_{(s)}\to KK^{0}_{S}\pi\pi$ decays. The talk also covers other results on charm meson decays. The results are based on the full data sample collected with the Belle detector at the KEKB asymmetric-energy $e^{+}e^{-}$ collider.
LHCb has collected the world's largest sample of charmed hadrons. This sample is used to measure direct $C\!P$ violation in $D$ mesons and charmed baryons. New measurements from several decay modes are presented, as well as prospects for future sensitivities.
LHCb is playing a crucial role in the study of rare and forbidden decays of charm hadrons, which might reveal effects beyond the Standard Model. We present the latest searches for, and measurements using, rare charm decay processes with two leptons in the final state.
BESIII has collected 2.93 and 6.32 fb$^{-1}$ of $e^+e^-$ collision data at 3.773 GeV and at 4.178-4.226 GeV, respectively. We will report the observation of $D^0 \to \omega \phi$ and the determination of its transverse polarization, as well as the observation of $D^0 \to K_L X$ ($X=\eta, \eta', \omega$, and $\phi$) and the $K_S/K_L$ asymmetry measurements. Amplitude analyses of $D_s^+ \to K^+\pi^+\pi^-$, $K_S K^+\pi^0$, $K_S K_S\pi^+$, $\pi^+\pi^0\eta'$, $\pi^+\pi^0\pi^0$, and $\pi^+\pi^+\pi^-$ will also be reported. In addition, direct measurements of the absolute branching fractions of $D^{0(+)} \to K\pi\omega$, $D^{0(+)} \to K3\pi$ and $K4\pi$ will be presented.
The observed matter-antimatter asymmetry in the universe poses a serious challenge to our understanding of nature. Baryon-number-violating (BNV) and lepton-number-violating (LNV) decays have been searched for in many experiments to shed light on this large-scale observation, but few such searches have been performed at $e^+e^-$ collision experiments. In this talk, we present recent searches for BNV and LNV processes in $J/\psi$, $D^+$, $D^0$ and $\Sigma^+$ decays at the BESIII experiment.
Recent high-precision determinations of $V_{us}$ and $V_{ud}$ point towards anomalies in the first row of the CKM matrix. Namely, the determination of $V_{ud}$ from beta decays and of $V_{us}$ from kaon decays implies a violation of first-row unitarity at about the $3\sigma$ level. Moreover, there is tension between the determinations of $V_{us}$ obtained from leptonic $K_{\mu 2}$ and semileptonic $K_{\ell 3}$ kaon decays. These discrepancies can be explained if there exist extra vector-like quarks at the TeV scale with large enough mixings with the lighter quarks. However, a single type of extra multiplet cannot entirely explain all the discrepancies, and some combination of them is required, e.g. two species of isodoublet, or one isodoublet and one (up- or down-type) isosinglet. These scenarios are testable with future experiments. A different solution can come from the introduction of a gauged horizontal family symmetry acting between the lepton families and spontaneously broken at a scale of about 6 TeV. Since the gauge bosons of this symmetry contribute to muon decay in interference with the Standard Model, the Fermi constant is slightly smaller than the muon decay constant, so that unitarity is recovered.
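For orientation, first-row unitarity means $|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2 = 1$. The snippet below evaluates the sum with indicative PDG-like inputs; these illustrative values and uncertainties are assumptions of this sketch, not the talk's exact numbers, yet they reproduce a deficit at roughly the $3\sigma$ level.

    # Illustrative first-row unitarity check (inputs are indicative only).
    import math  # math.hypot with 3 args requires Python >= 3.8

    Vud, dVud = 0.97373, 0.00031   # beta decays (illustrative)
    Vus, dVus = 0.2231, 0.0006     # Kl3 decays (illustrative)
    Vub, dVub = 0.00382, 0.00020   # negligible in the sum

    s = Vud**2 + Vus**2 + Vub**2
    ds = 2 * math.hypot(Vud * dVud, Vus * dVus, Vub * dVub)
    print(f"|Vud|^2+|Vus|^2+|Vub|^2 = {s:.5f} +- {ds:.5f}")
    print(f"deficit from unitarity: {(1 - s) / ds:.1f} sigma")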
The study of the associated production of vector bosons and jets constitutes an excellent environment to check numerous QCD predictions. Total and differential cross sections of vector bosons produced in association with jets have been studied in pp collisions using CMS data. Differential distributions as a function of a broad range of kinematical observables are measured and compared with theoretical predictions. In this talk, studies of associated production of vector bosons with inclusive jets and with jets originating from heavy-flavour quarks will be summarized.
The production of W/Z bosons in association with heavy-flavor jets or hadrons at the LHC is sensitive to the heavy-flavor content of the proton and provides an important test of perturbative QCD. We present a measurement of the production of Z bosons in association with b-tagged large-radius jets. The result highlights issues with the modeling of additional hadronic activity and discriminates between the flavor-number schemes used in theoretical predictions. The measurements are compared to state-of-the-art NLO theoretical calculations.
Understanding the leading non-perturbative corrections, which show up as linear power corrections, is crucial to properly describe observables at both lepton and hadron colliders.
Using an abelian model, we examine these effects for the transverse-momentum distribution of a $Z$ boson produced in association with a jet in hadronic collisions, one of the cleanest LHC observables, where the presence of leading non-perturbative corrections would spoil the chance of reaching the current experimental accuracy, even when considering higher orders in the perturbative expansion.
As we did not find any such corrections using numerical techniques, we looked for a rigorous field-theoretical derivation of them, and we explain under which circumstances linear power corrections can arise.
We apply our theoretical understanding to the study of event-shape observables in $e^+e^-$ annihilation, focusing in particular on the $C$-parameter and thrust, and obtaining for them, for the first time, an estimate of the non-perturbative corrections in the three-jet region.
These observables are routinely used to extract the strong coupling constant $\alpha_s$, and they constitute a clean environment to test perturbative QCD.
It is therefore extremely important to obtain reliable estimates of the non-perturbative corrections in the whole kinematic region relevant for the $\alpha_s$ fits.
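For reference, the two event shapes named above are defined in the standard way (the definitions are quoted here only for the reader's convenience):
\[
T = \max_{\hat n} \frac{\sum_i |\vec p_i \cdot \hat n|}{\sum_i |\vec p_i|},
\qquad
C = 3\left(\lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_3\lambda_1\right),
\]
where the $\lambda_i$ are the eigenvalues of the linearized momentum tensor $\Theta^{ab} = \left(\sum_i p_i^a p_i^b / |\vec p_i|\right) / \sum_i |\vec p_i|$; the three-jet region corresponds to values of $1-T$ (or $C$) away from the two-jet endpoint.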
We present high-accuracy QCD predictions for the transverse-momentum ($q_T$) distribution and fiducial cross sections of Drell-Yan lepton pairs produced in hadronic collisions. At small values of $q_T$ we resum to all perturbative orders the logarithmically enhanced contributions up to next-to-next-to-next-to-leading logarithmic (N3LL) accuracy, including all the next-to-next-to-next-to-leading order (N3LO) terms. Our resummed calculation has been implemented in the public numerical program DYTurbo, which produces fast and precise predictions with the full dependence on the final-state lepton kinematics. We consistently combine our resummed results with the known $\mathcal{O}(\alpha_S^3)$ fixed-order predictions at large values of $q_T$, obtaining full N3LO accuracy for fiducial cross sections. We show numerical results for Z and W production at LHC energies, discussing the reduction of the perturbative uncertainty with respect to lower-order calculations. We comment on the effect of such high-precision QCD predictions on the W boson mass measurement.
The strong force is the least known fundamental force of nature, and the effort of precisely measuring its coupling constant has a long history of at least 30 years. This contribution presents a new experimental method for determining the strong-coupling constant from the Sudakov region of the transverse-momentum distribution of Z bosons produced in hadron collisions through the Drell-Yan process. The analysis is based on predictions at third order in perturbative QCD, and employs a measurement performed in proton-proton collisions with the CDF experiment. The determined value of the strong coupling at the reference scale corresponding to the $Z$-boson mass is $\alpha_S(m_Z) = 0.1185^{+0.0014}_{-0.0015}$. This is the most precise determination achieved so far at a hadron collider. The application of this methodology at the LHC has the potential to reach sub-percent precision.
We will present results for a new, high-precision extraction of the strong coupling, $\alpha_s$, at the tau mass scale, based on a more precise, non-strange, inclusive vector isovector spectral function. The new spectral function is obtained from a combination of (i) ALEPH and OPAL results for the $2\pi$ and $4\pi$ tau decay channels, (ii) recent BaBar results for the $\tau \to K^- K^0 \nu_\tau$ decay distribution, and (iii) subleading contributions from other hadronic tau decay modes obtained, using CVC, from recent electroproduction data. This new inclusive spectral function has smaller uncertainties and is entirely data-based, with no need for Monte Carlo estimates of the contribution of any exclusive mode.
I will present an NNLO QCD calculation for $Wb\bar{b}$ production at the LHC. The computation of the two-loop scattering amplitude using a finite-field framework will be discussed, and phenomenological results for the LHC at $\sqrt{s}=8$ TeV will be shown. The use of different flavoured-jet algorithms will be explored and a comparison with CMS data will be presented.
In 2018, the European Commission (EC)'s Horizon 2020 Programme funded ATTRACT phase 1, which supported 170 breakthrough technology concepts in the domain of detection and imaging technologies across Europe. The projects were each granted €100,000 in seed funding to create a proof-of-concept. The ATTRACT co-innovation approach seeks to act as a bridge between two communities, research and industry, with apparently different motivations and goals for undertaking research, development and innovation (R&D&I). The ATTRACT Consortium uses public funding to lower the intrinsic risk that breakthrough technology bears as it moves along technology readiness levels (TRLs) towards private investment and the market. Lowering risk is achieved in two phases:
Risk absorption (ATTRACT phase 1): ~TRLs 1 to 4.
Risk reduction (ATTRACT phase 2): ~TRLs 4 to 7.
After reaching a stage around TRL 7, breakthrough technologies, thanks to public funding, will have been sufficiently de-risked to become more attractive to private funders. At this point, ATTRACT phase 2 will be completed, and private investment will help to commercialize new products and services for society. The ATTRACT project is now in its Phase 2. In this talk, some of the lessons learnt will be analyzed, which might prove insightful for understanding the process of managing R&D&I innovation ecosystems.
We present a design project for a muon tomography detector aimed at the monitoring of glaciers. The glacier melting process is not completely understood and is considered a hot topic in light of global warming.
Muon tomography is a widely used technique, employed to perform imaging of the inner structure of large objects, such as volcanoes, containers and pyramids. The technique takes advantage of the muon flux that reaches the surface of the Earth (~70 m^-2 s^-1 sr^-1), produced by the interaction between primary cosmic rays and the atmosphere. The difference between the muon fluxes measured with and without a certain object in the field of view allows one to infer the thickness of material (in water-equivalent meters) that the muons cross. In the case of glaciers, thanks to the different densities of ice and rock, a directional flux measurement provides information on both the glacier thickness and the depth of the bedrock-ice interface.
The goal of our project is the development of a detector able to measure the glacier thickness with a short exposure time and with real-time data taking and processing, in order to study the seasonal behavior and the melting trend of the glacier over the years.
The detector will be able to reconstruct the trajectory of muons with an angular resolution of the order of 5 milliradians, to obtain a precision on the target object thickness of the order of a few meters. The detector will also be operable in the open sky and be replicable. To fulfill all these requirements, the detector is built of 5 sensitive modules, each composed of two layers of bundles of scintillating fibers running along mutually orthogonal directions. Each bundle is coupled to a photodetector, which detects the scintillation light produced in the bundle. From the information of the fired bundles, we can reconstruct the coordinates of the muon hits and hence each muon trajectory. For open-sky operation the detector readout speed is paramount, in order to reconstruct each crossing muon individually, including the muons not passing through the target of interest. To discriminate the signal (muons coming through the mountain) from the background (muons not passing through the target), the read-out system needs to be fast enough to reduce spurious coincidences and overlapping events to a minimum. Given the active surface of the designed detector and the expected muon flux, a sampling rate of a few kHz would be able to cope with the background without limiting the signal efficiency.
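As a rough order-of-magnitude cross-check of the quoted sampling requirement (a minimal sketch; the module area and angular acceptance below are illustrative assumptions, not design values):

    # Order-of-magnitude estimate of the open-sky muon rate on the detector.
    FLUX = 70.0        # muons / (m^2 s sr), sea-level flux quoted above
    AREA = 1.0         # m^2, assumed active surface (illustrative)
    ACCEPTANCE = 2.0   # sr, assumed angular acceptance (illustrative)

    rate_hz = FLUX * AREA * ACCEPTANCE
    print(f"expected open-sky muon rate ~ {rate_hz:.0f} Hz")  # ~140 Hz

A readout sampling at a few kHz therefore sits comfortably above the expected muon rate, keeping spurious coincidences and event overlaps rare.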
In this contribution, we will show the results of a set of simulations aimed at optimizing the detector design, and the foreseen performance of the designed detector. The results are obtained through a detector simulation and a track-finding algorithm. The angular resolution of the reconstructed muon tracks will be shown for different configurations of the triggering system, together with the quality of the tracks and a study of the dependence of the angular resolution on the direction of the incoming particle.
At present, the detector simulations show an expected angular resolution of ~0.7 mrad, from which we can infer an expected resolution on the measurement of the ice-rock interface depth of Δx ~ Δθ L ≈ 1.4 m (where L is the distance between the detector and the glacier, which we consider to be around 2 km).
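This error propagation is just the small-angle relation Δx ≈ Δθ·L; a one-line check with the numbers quoted above:

    # Depth resolution from angular resolution: dx ~ dtheta * L
    dtheta = 0.7e-3  # rad, expected angular resolution from simulation
    L = 2000.0       # m, assumed detector-glacier distance (~2 km)
    print(f"dx ~ {dtheta * L:.1f} m")  # -> dx ~ 1.4 m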
We will also present the first full-simulation results on the expected resolution on the measured thickness of the target and on the depth of the ice-rock interface in a glacier monitoring environment, along with the foreseen exposure time needed for the on-field measurement.
The Jagiellonian Positron Emission Tomograph (J-PET) is a detector for: 1) medical imaging, combining the metabolic information collected by standard PET with structural information obtained from the positronium lifetime in the concept of a morphometric image; 2) tests of discrete symmetries; and 3) even tests of the quantum entanglement of photons originating from the decay of positronium atoms. The novelty of the system lies in the use of plastic scintillators as the active detection material and in the trigger-less data acquisition system. The apparatus consists of 192 plastic scintillators read out at both ends with vacuum-tube photomultipliers. The signals produced by the photomultipliers are probed at four levels in the amplitude domain and digitized on 8 FPGA-based readout boards in trigger-less mode. The recently presented concept of positronium imaging has the potential to increase the diagnostic efficiency of positron emission tomography (PET), based on the use of an additional indicator derived from the mean lifetime of one of the metastable positron-electron bound states, ortho-positronium (o-Ps). In this talk we will present the world's first in-vitro positronium images of human tissues from the J-PET detector, which make it possible to distinguish cardiac myxoma tissues from normal pericardial tissues based on the measurement of the mean o-Ps lifetime separately in each image voxel.
Silicon photomultipliers (SiPMs) are solid-state photodetectors consisting of arrays of hundreds to thousands of Single Photon Avalanche Diodes (SPADs) per mm$^2$. They feature a photon detection efficiency in excess of 40% at the peak sensitivity wavelength and guarantee an unprecedented photon-number resolution at room temperature. These properties, along with low operating voltage, compactness, and robustness, make SiPMs excellent devices for light detection from single photons to several thousands of photons, especially when the fastest timing is required.
Beyond High Energy and Nuclear Physics, SiPMs are employed in nuclear medical imaging, quantum optics, functional optical spectroscopy and biophysics, where they are exploited to detect fluorescence and chemiluminescence light.
In this paper, the potential of this class of sensors is exploited in a novel application: the measurement of calcium concentration gradients in living cells. The calcium ion plays a crucial role in several biological processes (e.g., muscle contraction, neurotransmission, cell signal transduction pathways), and the measurement of the spatial and temporal variations of the concentration of this ion is essential to understand, and eventually tune by means of suitable drugs, the underlying mechanisms behind such processes.
A method to measure the calcium concentration is based on the quantification of the chemiluminescence generated by aequorin, a calcium-sensitive photoprotein. Upon binding calcium ions, aequorin generates signals consisting of a sequence of single photons. This method was so far exploited using custom-designed systems based on photomultiplier tubes (PMTs), limited in portability, flexibility and cost effectiveness, motivating a study based on SiPMs. As a proof of concept, a 6x6 mm$^2$ SiPM-based setup was fully qualified in terms of dynamic range, response linearity, sensitivity, and limit of detection, before being successfully applied in live-cell measurements, with a limit of detection better than what was previously achieved with PMTs. Two read-out techniques, integrating the produced charge or counting single photons, were compared, showing that the latter approach results in a better sensitivity, even if non-linearity effects due to pile-up were observed at rates beyond a few MHz [1].
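The counting non-linearity mentioned above can be illustrated with the standard paralyzable dead-time model, in which the registered rate is R_obs = R_true * exp(-R_true * tau); the resolving time below is an assumed figure for illustration, not a measured property of this setup:

    import numpy as np

    def observed_rate(true_rate_hz, resolving_time_s):
        # Paralyzable dead-time model: photons arriving closer than the
        # resolving time pile up, so the registered rate falls below truth.
        return true_rate_hz * np.exp(-true_rate_hz * resolving_time_s)

    tau = 50e-9  # s, assumed single-photon resolving time (illustrative)
    for r in (1e5, 1e6, 5e6, 1e7):
        loss = 1.0 - observed_rate(r, tau) / r
        print(f"true rate {r:.0e} Hz -> counting loss {100 * loss:4.1f} %")

With these assumptions the loss stays at the percent level up to ~1 MHz and grows rapidly beyond a few MHz, consistent with the pile-up behaviour reported above.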
The project is now moving forward, and a new multi-sensor setup equipped with a 10x10 mm$^2$ square matrix of 64 independent 1x1 mm$^2$ SiPMs is currently under test. This parallelized configuration extends the dynamic range in counting mode by more than 30 times and introduces the possibility of spatial discrimination, paving the way for aequorin-based calcium imaging. Moreover, the new platform incorporates a perfusion system allowing extracellular medium substitution and drug administration, and the whole setup is placed in a compact box to protect the detector from environmental light and to allow measurements on site. In this respect, the sensor is operated at a stable temperature by using an active cooling system based on a Peltier cell, which also helps contain the dark count rate.
Besides the standard instrumental qualification, calibration curves for the absolute measurement of calcium concentration were obtained exploiting the above-described setup, while more complex experiments involving the spatial variable are still ongoing, with the aim of approaching calcium imaging by means of suitable optics.
In future, the feasibility of exploiting SiPM-matrix-based setups to discriminate and quantitatively estimate water contaminants triggering cellular calcium gradients, such as the estrogenic endocrine-disrupting chemicals (EEDCs), will be explored. EEDCs are associated with breast and prostate cancer and affect the reproduction of humans and of domestic and wild animals; the standard techniques to measure EEDCs (such as liquid or gas chromatography combined with mass spectrometry) are usually expensive and cannot be run on site. In this respect, a SiPM-based assay could provide a cheaper, on-site pre-screening of the contaminants actually present in a water sample, so that the more sensitive standard analytical methods need be run only for the detected contaminants.
[1] F. Ruffinatti, S. Lomazzi, L. Nardo, R. Santoro, A. Martemiyanov, M. Dionisi, L. Tapella, A. Genazzani, D. Lim, C. Distasi and M. Caccia, "Assessment of a Silicon-Photomultiplier-Based Platform for the Measurement of Intracellular Calcium Dynamics with Targeted Aequorin," ACS Sensors, vol. 5, pp. 2388-2397, 2020.
Nb3Sn superconducting radiofrequency (SRF) cavities have the potential to expand the performance capabilities of particle accelerators for the benefit of both fundamental science and industrial applications, where potential applications include, among others, wastewater treatment and medical isotope production. For small-scale applications, Nb3Sn SRF creates the opportunity for turn-key cryocooler operation instead of complex sub-atmospheric liquid-helium cryogenic plants. The transition from a cryogenic plant to cryocooler operation reduces the footprint of the system and substantially simplifies its operation and maintenance. With continued progress in material development, Nb3Sn cavities have the potential to further reduce cryogenic losses and to eventually outperform current state-of-the-art niobium in energy gain by a significant margin. Small-scale accelerators based on Nb3Sn are now moving towards the prototyping stage. We will discuss ongoing efforts towards the demonstration of Nb3Sn cryomodules.
Silicon PhotoMultipliers (SiPMs) are rapidly approaching a significant maturity stage, making them a well-recognised platform for the development of evolutionary and novel solutions in a wide range of applications for research and industry. However, they are still affected by stochastic terms, notably a significant Dark Count Rate (DCR) at the level of 50 kHz/mm^2 at room temperature, limiting their use when single photo-electron pulses convey the required information, for instance in chemiluminescence or fluorescence analysis of biological samples or in dosimetry by counting. In such applications, the randomness of the spontaneous generation of carriers triggering the avalanche, and its rate of occurrence, significantly decrease the sensitivity of the system compared with solutions based, for instance, on traditional photomultiplier tubes.
However, the unpredictability of the 'dark' pulses has a potential value in domains connected to encryption and, in general terms, cybersecurity. 'Random Power' is a project approved within the ATTRACT call for proposals (https://attract-eu.com), having as its main goal the generation of random bit streams by properly analysing the time sequence of the dark pulses. The patent-protected principle has been proven using laboratory equipment, and its value assessed by applying the National Institute of Standards and Technology (NIST) protocols, complemented by other test suites. At the time of writing, and thanks to the support by ATTRACT in its Phase I, a credit-card-size board has been designed, produced and qualified as a real 'minimum viable product'. Now, the project is entering a new stage, thanks to the approval of the Phase II ATTRACT project, aiming at the scale-up of the platform to a multi-generator board for data centres and at the miniaturisation into a dedicated ASIC, embedding both a Single Photon Avalanche Diode (SPAD) array and the functionalities required to extract the bit stream. The project consortium comprises six industries at the European level, both corporates and small enterprises, and three research institutions.
The state of the project and the workplan will be described, together with the results obtained so far and the view to the market.
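As an illustration of the general principle (a minimal sketch of one possible extraction scheme, not the project's patented method), random bits can be distilled from the dark-pulse arrival times by comparing successive inter-arrival intervals and debiasing the result:

    import numpy as np

    def bits_from_timestamps(t):
        # Compare non-overlapping pairs of inter-arrival intervals:
        # (dt[2i] > dt[2i+1]) -> 1, else 0. For Poisson pulses this is fair.
        dt = np.diff(t)
        n = dt.size // 2 * 2
        return (dt[0:n:2] > dt[1:n:2]).astype(np.uint8)

    def von_neumann(bits):
        # Remove residual bias: keep the first bit of each unequal pair.
        pairs = bits[: bits.size // 2 * 2].reshape(-1, 2)
        return pairs[pairs[:, 0] != pairs[:, 1], 0]

    # Toy data: Poisson "dark pulses" at 50 kHz (illustrative DCR).
    rng = np.random.default_rng(1)
    t = np.cumsum(rng.exponential(1.0 / 50e3, size=200_000))
    stream = von_neumann(bits_from_timestamps(t))
    print(stream.size, stream[:16])

Any real generator would of course add online health tests and NIST-style validation of the output stream, as done in the project.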
Superconducting radio frequency (SRF) cavities are the core technology for particle acceleration in modern accelerators, thanks to their extremely high quality factors, as high as Q > 10^11. These make it possible to sustain, in continuous wave (CW), very high electromagnetic fields inside the cavities with minimal dissipation in the cavity walls.
A few years ago, it was realized that the extremely high Q factors of SRF cavities can bring improvements of multiple orders of magnitude in the coherence of quantum computing building blocks, as well as dramatic advances in the sensitivity of dark-sector and dark-matter searches. These technologies provided the unique foundation of the Fermilab-led Superconducting Quantum Materials and Systems Center (SQMS), one of the five national QIS Research Centers, which includes more than 280 researchers in national labs, industry, and academic institutions, working towards the focused mission of building a world-leading 3D quantum processor unit based on SRF technology, as well as towards physics and sensing advances on several open fundamental physics questions.
In this talk I will review these emerging applications of SRF technology, including the record-breaking coherence levels achieved in 3D SRF Quantum Systems, and the highest sensitivity searches for dark photons and axions realized and planned in the context of SQMS effort at Fermilab and partners.
Next-generation high-energy-physics experiments will be more granular than current ones, which means more demanding electronics to power the detectors and to process all the collected data. Space constraints, cabling, cooling and, last but not least, efficiency are all parameters that need to be optimized during experiment design to obtain the best data-taking performance.
We will present some results from an R&D program launched by CAEN in 2020 to develop the next generation of power supplies for the experimental caverns of the HL-LHC. In this hostile environment the electronics needs to survive magnetic fields and a mixed radiation field (composed of gammas, neutral and charged hadrons). The composition of the radiation field, as well as the intensity and direction of the magnetic field, can vary by orders of magnitude between experiments but also within the same cavern. To cover all needs, CAEN started an irradiation campaign in various steps and at various irradiation facilities: we wanted to investigate the behavior of COTS electronics using one radiation type at a time (neutrons, gammas, protons) and then validate the final board design in a mixed field. In addition, we performed efficiency tests in magnetic fields up to 1 T using various orientations to exploit the different symmetries of the board design.
During this talk we will focus on last year's test campaigns, some undertaken within the RADNEXT EU project and in collaboration with INFN and CERN, which include tests with proton, neutron, and gamma sources of various components: ADCs, DACs, RAMs, FPGAs, microcontrollers, power transistors and FETs, temperature and humidity sensors, etc. These are all the building blocks necessary to design and build circuits and boards capable of surviving in the experiments; since these components alone cannot ensure the reliability needed to run an experiment in such conditions, the circuits and control loops must be tested as well. The results of the test campaigns will be discussed together with some mitigation techniques used to achieve the desired reliability.
First developments of power supply circuits and devices based on these blocks will also be presented, as well as the performance achieved so far in terms of reliability, power density, energy efficiency, noise figure, etc.
Vector boson scattering is a key production process to probe the electroweak symmetry breaking of the standard model, since it involves both self-couplings of vector bosons and coupling with the Higgs boson. If the Higgs mechanism is not the sole source of electroweak symmetry breaking, the scattering amplitude deviates from the standard model prediction at high scattering energy. Moreover, deviations may be detectable even if a new physics scale is higher than the reach of direct searches. Latest measurements of production cross sections of vector boson pairs in association with two jets in proton-proton collisions at sqrt(s) = 13 TeV at the LHC are reported using a data set recorded by the CMS detector. Differential fiducial cross sections as functions of several quantities are also measured.
Measurements of multiboson production at the LHC are important probes of the electroweak gauge structure of the Standard Model and are sensitive to contributions from anomalous couplings. In this talk we present recent ATLAS results on Zγ production in association with jet activity. These differential measurements provide inputs to and constraints on the modeling of the Standard Model. Using di-boson processes to probe the boson polarization will also be discussed. Moreover, precise boson, diboson and Higgs differential cross-section measurements are interpreted in a combined Effective Field Theory analysis, allowing gauge boson self-interactions to be probed systematically.
This talk reviews recent measurements of multiboson production using CMS data. Inclusive and differential cross sections are measured using several kinematic observables.
Vector boson scattering (VBS) plays a central role in the search for new physics at collider experiments such as ATLAS and CMS at the LHC. Usually predictions for this kind of process are obtained using mainly perturbative approaches in fixed gauges.
In our work we investigate VBS in a manifestly fully gauge-invariant setup. To analyse the differences with respect to gauge-fixed perturbation theory we use lattice techniques as a non-perturbative tool, as well as perturbative results obtained from a reunitarized Fröhlich-Morchio-Strocchi analysis at Born level.
Our findings show that, in a reduced SM setup, the scattering length at threshold becomes negative. This strongly indicates a non-trivial structure of the physical scalar degree of freedom. Additionally, we also analyse the impact on (differential) cross sections of this process, paving the way for an experimental detection of this yet unaccounted-for SM effect.
We present a parton-level study of electro-weak production of vector-boson pairs at the Large Hadron Collider, establishing the sensitivity to a set of dimension-six operators in the Standard Model Effective Field Theory (SMEFT). Different final states are statistically combined, and we discuss how the orthogonality and interdependence of different analyses must be considered to obtain the most stringent constraints. The main novelties of our study are the inclusion of SMEFT effects in non-resonant diagrams and in irreducible QCD backgrounds, and an exhaustive template analysis of optimal observables for each operator and process considered. We also assess for the first time the sensitivity of vector-boson-scattering searches in semileptonic final states.
Top quark production can probe physics beyond the SM in different ways. The Effective Field Theory (EFT) framework allows searching for beyond the SM effects in a model independent way. The CMS experiment is pioneering EFT measurements that move towards using the full potential of the data in the most global way possible.
We perform a complete study of four-top-quark production at the LHC in the context of the Standard Model Effective Field Theory (SMEFT). Our analysis is conducted at tree level, yet investigating all possible QCD- and EW-coupling orders for the contributing dimension-six SMEFT operators. We observe that the formally dominant contributions do not necessarily provide the largest cross sections, and we investigate why. Inclusive and differential predictions are presented for the LHC and a future 100 TeV collider scenario. Finally, we carry out a projection study through which we set limits on SMEFT Wilson coefficients at different collider energies for all the relevant SMEFT operators.
The ongoing U.S. Particle Physics Community Planning Exercise, "Snowmass 2021", which is organized around discussions spanning ten scientific frontiers, will soon come to an end. This process will provide a scientific vision document for the future of the U.S. high energy physics (HEP) program and aims to define the most important questions for the field as well as to identify promising roadmaps to address them. After the Snowmass process concludes, the Particle Physics Project Prioritisation Panel (P5) will develop a 10-year plan for US particle physics to address the most compelling scientific opportunities.
Accelerators able to collide high-energy and high-intensity particle beams are the most promising tools to understand and measure the heaviest particles of the Standard Model (SM). They also enable exploration of physics beyond the SM, to discover new particles and interactions, including unraveling the mystery of dark matter. For the past 50 years, particle colliders have been at the forefront of scientific discoveries in HEP.
Several multi-TeV collider concepts were considered during this two-year process. A range of issues were discussed, including: the physics reach, the level of maturity of the facility concepts, the potential machine routes, timelines, R&D requirements, and common issues for these very high energy machines such as energy efficiency and cost. We will discuss and compare these concepts on the basis of their physics potential. This includes various possible future accelerator scenarios, such as lepton-lepton, hadron-hadron, and lepton-hadron colliders. Synergies between the facilities and the technology R&D required to validate the designs will be addressed along with the potential timelines to deliver next-generation colliders that can operate in the 1-100 TeV center-of-mass energy range (or beyond). The aim is to explore possible strategies towards a next generation multi-TeV collider to play a crucial role in future discoveries at the energy frontier.
Circular muon colliders offer the prospect of colliding lepton beams at unprecedented center-of-mass energies. The continuous decay of stored muons poses, however, a significant technological challenge for the collider and detector design. The secondary radiation fields induced by decay electrons and positrons can strongly impede the detector performance and can limit the lifetime of detector components. Muon colliders therefore require an elaborate interaction region design, which integrates a custom detector shielding together with the detector envelope and the final focus system. In this paper, we present design studies for the machine-detector interface and we quantify the resulting beam-induced background for different center-of-mass energies. Starting from the optics and shielding design developed by the MAP collaboration for 3 TeV, we devise an initial interaction region layout for the 10 TeV collider. In particular, we explore the impact of lattice and shielding design choices on the distribution of secondary particles entering the detector. The obtained results serve as crucial input for detector performance and radiation damage studies.
The European Laboratory Directors Group (LDG) was mandated by CERN Council in 2021 to oversee the development of an Accelerator R&D High Energy Physics Accelerator Roadmap. To this end, a set of expert panels was convened, covering the five broad areas of accelerator R&D highlighted in the ESPPU, drawing upon the international accelerator physics community for their membership, and tasked to consult widely and deeply.
High Field Magnets (HFM) is one of these R&D axes and is among the key technologies that will enable the search for new physics at the energy frontier. Approved projects (HL-LHC) and potential future circular machines such as the proton-proton Future Circular Collider (FCC-hh) and the Super proton-proton Collider (SppC) require the development of superconducting (SC) magnets that produce fields beyond those attained in the LHC.
The present state of the art in HFM is based on Nb3Sn, with magnets producing fields in the range of 11 T to 14 T. In recent years we have tackled the challenges associated with the brittle nature of this material, but we realize that more work is required and that manufacturing is not yet robust enough to be considered ready at an industrial scale.
Great interest has also been stirred in recent years by the progress achieved on HTS, not only in the fabrication of demonstrators for particle physics, but also in the successful test of magnets in other fields of application such as NMR, fusion, and power generation. This suggests that the performance of HTS magnets can exceed that of Nb3Sn, and that the two technologies can be complementary to produce fields in the range of 20 T, and possibly higher.
In this presentation, we lay out the European roadmap for HFM, which will build upon and advance beyond the results achieved over the past twenty years in European and international programs, most notably US LARP and HL-LHC. The R&D program, which is published in [https://doi.org/10.23731/CYRM-2022-001], has two main objectives.
The first is to demonstrate Nb3Sn magnet technology for large-scale deployment. This will involve pushing it to its practical limits in terms of performance (towards the 16 T target required by FCC-hh), and moving towards production scale through robust design, industrial manufacturing processes and cost reduction. The second objective is to demonstrate the suitability of High Temperature Superconductor (HTS) for accelerator magnet applications, providing a proof-of-principle of HTS magnet technology beyond the range of Nb3Sn, with a target in excess of 20 T. These goals are only indicative in nature since the decision on a cost-effective and practical operating field will be one of the main outcomes of the development work.
The proposed roadmap comprises three focus areas: 1) Nb3Sn and HTS conductors, 2) Nb3Sn magnets, and 3) HTS magnets. These are enabled by three cross-cutting activities: 4) materials, cryogenics, and models, 5) powering and protection, and 6) infrastructure and instruments. The methodology of the proposed program is based on sequential development happening in steps of increasing complexity and integration, from samples to small scale magnets, short magnets, and long magnets, to produce a fast-moving technology progression. We are convinced that innovation and fast turnaround are crucial to meeting the declared goals on a reasonable time scale.
The conductor activities, besides the necessary procurements, will focus on two aspects. Nb3Sn R&D will push beyond the state-of-the-art to consolidate the critical current capability establishing robust wire and cable configurations with reduced cost. These will then be the subject of a four-year period of industrialization, which will be followed by a similar period of industrial optimization. On the HTS side, the intention is to identify and qualify suitable tapes and cables and follow up with industrial production to ensure the feasibility of large unit lengths of HTS tapes with characteristics meeting the requirements for accelerator magnet applications. This HTS conductor R&D phase is expected to last for seven years.
The Nb3Sn magnet development will improve on areas of HL-LHC technology that have been found to be sub-optimal, notably the degradation associated with the fragile conductor, targeting the highest practical operating field that can be achieved. The R&D will explore design and technology variants to identify robust design options for the field level targeted. Two tracks have been defined: the development of a 12 T demonstrator of proven robustness suitable for industrialization, in parallel to the development of an accelerator demonstrator dipole reaching the ultimate field for this material, towards the target of 16 T. The magnet technology R&D will progress in steps over a projected period of seven years but is intended to provide crucial results through demonstration magnets in time for the next update of the European Strategy for Particle Physics (ESPP). Another five years are expected to be necessary to extrapolate the demonstrator results to full-length units.
R&D plans for HTS magnets focus on the manufacturing and testing of sub-scale and insert coils as a vehicle to demonstrate performance and operation beyond the range of Nb3Sn. A dual objective is proposed: the development of a hybrid LTS/HTS accelerator magnet demonstrator and of a full HTS accelerator magnet demonstrator, with a target of 20 T. In addition, attention will be devoted to the possibility of HTS-only magnets, operating in an intermediate temperature range (10 K to 20 K), in line with the increasing societal push for energy-efficient infrastructures.
The projected duration of this phase is seven years. At least five more years would be required to develop HTS demonstrators that include all the necessary accelerator features, surpassing Nb3Sn performance or working at temperatures higher than that of liquid helium. Nb3Sn is today the natural reference for future accelerator magnets, but HTS represents a real opportunity if production scale steadily increases and prices drop.
High-precision intra-bunch-train beam orbit feedback correction systems have been developed and tested in the ATF2 beamline of the Accelerator Test Facility at the High Energy Accelerator Research Organization in Japan. Two systems are presented:
1) The vertical position of the bunch measured at two stripline beam position monitors (BPMs) is used to calculate a pair of kicks which are applied to the next bunch using two upstream kickers, thereby correcting both the vertical position and the trajectory angle (a minimal sketch of this correction logic is given after this list). This system was optimised so as to stabilize the beam offset at the feedback BPMs to better than 350 nm, yielding a local trajectory angle correction to within 250 nrad. Measurements with a beam size monitor at the focal point (IP) demonstrate that reducing the trajectory jitter of the beam by a factor of 4 also reduces the observed wakefield-induced increase in the measured beam size as a function of beam charge by a factor of about 1.6.
2) High-resolution cavity BPMs were used to provide local beam stabilization at the interaction point. The BPMs were demonstrated to achieve an operational resolution of ~20 nm. With the application of single-BPM and two-BPM feedback, beam stabilization of below 50 nm and 41 nm respectively has been achieved with a closed-loop latency of 232 ns.
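A minimal sketch of the correction logic of item 1, under the assumption of a linear (transfer-matrix) response; the matrix elements below are illustrative placeholders, since in practice the kicker-to-BPM response is measured on the machine:

    import numpy as np

    # Response matrix R: (position [m], angle [rad]) produced at the
    # correction point per unit kick [rad] of kicker 1 and kicker 2.
    # Values are illustrative placeholders, not ATF2 optics.
    R = np.array([[2.3e-3, 1.1e-3],
                  [0.8,    1.0]])

    def correction_kicks(y_m, yp_rad):
        # Solve R @ kicks = -(measured error) for the two kicker strengths
        # to be applied to the next bunch in the train.
        return np.linalg.solve(R, -np.array([y_m, yp_rad]))

    k1, k2 = correction_kicks(350e-9, 250e-9)  # e.g. 350 nm offset, 250 nrad angle
    print(f"kicker 1: {k1:+.3e} rad, kicker 2: {k2:+.3e} rad")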
The axion, a hypothetical pseudo-scalar particle, is a direct consequence of the Peccei-Quinn mechanism, which was proposed to solve the strong CP problem in 1977. It is also a plausible candidate for dark matter. The axion interacts feebly with the Standard Model (SM) particles, which makes it extremely challenging to detect a sign of its existence. Nevertheless, there have been many efforts to search for the axion-SM interaction, the prevailing method among which is the cavity haloscope seeking the axion-photon interaction, more suited for axion frequencies above 100 MHz. On the other hand, there is another branch of interaction, namely a coupling between the axion and the nuclear electric dipole moment (EDM), which induces an EDM oscillating at the axion Compton frequency. The storage ring EDM experiment provides a powerful method, sensitive to a proton EDM as small as 10^{-29} e·cm. We extend the storage ring EDM concept to measure an oscillating EDM with a comparable sensitivity, by exploiting a new spin resonance scheme using an rf Wien filter. The new method does away with the severe spin-resonance systematic error sources by a careful combination of the frequencies used. We introduce this new method, from its basic working principle to the projected sensitivity on the axion-EDM coupling constant.
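For orientation, the oscillation frequency in question is simply the axion Compton frequency, f = m_a c^2 / h; a quick conversion (the benchmark masses below are illustrative):

    # Axion Compton frequency: f = m_a * c^2 / h.
    # With the mass expressed in eV, 1/h = 2.418e14 Hz/eV.
    def compton_frequency_hz(mass_ev):
        return mass_ev * 2.418e14

    for m_ev in (1e-9, 1e-6):  # neV- and ueV-scale masses (illustrative)
        print(f"m_a = {m_ev:.0e} eV  ->  f ~ {compton_frequency_hz(m_ev):.2e} Hz")

A ueV-scale axion thus oscillates at a few hundred MHz, the regime where cavity haloscopes excel, while much lighter axions oscillate slowly enough for spin-precession methods such as the one described here.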
The hybrid-symmetric lattice was studied extensively using high-precision spin and beam dynamics simulation software. A storage ring where the bending is provided by electric field plates while focusing is achieved by magnetic quadrupoles with alternating-sign fields can effectively store counter-rotating polarized proton beams simultaneously. This is the only known configuration where the main systematic error sources cancel out effectively, i.e., the vertical electric field caused by the misalignment of the electric field plates as well as the background magnetic fields. Furthermore, a highly symmetric ring lattice effectively relaxes the quadrupole misalignment requirements by several orders of magnitude, making the experiment possible using only currently available technology. At $10^{-29}$ e·cm sensitivity the experiment probes New Physics at the 300 TeV mass scale. The main requirements for a successful experiment at that level are that the ring planarity needs to be within 0.1 mm, while the separation of the counter-rotating beams needs to be kept below 0.01 mm, both of which are well within the reach of today's technology.
The energy frontier of particle physics is pushed forward by the implementation of innovative technologies and approaches. By exploiting the phenomenon of planar channeling, bent crystals can be used as a novel type of beam optics with a steering power comparable to that of a magnetic dipole of over $10^3$ tesla. Several applications in accelerators have been proposed, such as beam extraction and collimation. The latter case is currently being investigated at CERN as a possible upgrade for the High-Luminosity LHC project. The state-of-the-art fabrication process of the samples supplied by INFN to CERN will be described. The ultimate upgrade of this technology may be achieved through high-precision machining of a carefully designed microstructure on the bent crystal, which could enhance the steering efficiency beyond current limits, up to 100%. The most recent strategies and results of the new GALORE project towards this important development will also be reported.
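The quoted equivalent steering power follows from the magnetic-rigidity relation B [T] ≈ p [GeV/c] / (0.3 R [m]); the crystal parameters below are illustrative, not those of a specific device:

    # Equivalent dipole field of a bent crystal: B ~ p / (0.3 * R).
    p_gev = 7000.0      # proton momentum at the LHC [GeV/c]
    length_m = 4e-3     # assumed crystal length along the beam
    angle_rad = 50e-6   # assumed bending angle
    R_m = length_m / angle_rad
    B_eq = p_gev / (0.3 * R_m)
    print(f"bending radius {R_m:.0f} m -> equivalent field ~ {B_eq:.0f} T")

With these numbers the equivalent field is already a few hundred tesla; shorter or more strongly bent crystals push it beyond $10^3$ T.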
We present high statistics measurements of primary cosmic rays Helium, Carbon, Oxygen, Neon, Magnesium, Silicon, Sulfur, Iron, and Nickel with AMS. The properties of He-C-O, Ne-Mg-Si, S, Fe and Ni fluxes are discussed.
We present high statistics measurements by AMS-02 of the secondary cosmic rays Lithium, Beryllium, Boron, and Fluorine. The properties of the secondary cosmic ray fluxes and of their ratios to the primary cosmic rays Li/C, Be/C, B/C, Li/O, Be/O, B/O, and F/Si are discussed. A systematic comparison with the latest GALPROP cosmic ray model is presented.
Cosmic Nitrogen, Sodium, and Aluminum nuclei are a combination of primaries, produced at cosmic-ray sources, and secondaries resulting from collisions of heavier primary cosmic rays with the interstellar medium. We present high statistics measurements of the N, Na and Al rigidity spectra. We discuss the properties and composition of their spectra and present a novel model-independent determination of their abundance ratios at the source. The systematic comparison with the latest GALPROP cosmic ray model is presented.
Deuterons and ³He represent a few per cent of the cosmic-ray nuclei. They are mainly produced by fragmentation reactions of primary cosmic-ray ⁴He nuclei on the interstellar medium and represent a very sensitive tool to verify and constrain CR propagation models in the Galaxy, providing information additional to that of the cosmic B/C ratio. Precision measurements of the deuteron and ³He fluxes obtained with a high-statistics data sample collected by AMS-02 aboard the International Space Station will be presented.
Beryllium nuclei are expected to be mainly produced by the fragmentation of primary cosmic rays (CR) during their propagation. Their measurement is therefore essential to the understanding of cosmic-ray propagation and sources. In particular, the $^{10}$Be/$^9$Be ratio can be used as a radioactive clock providing a measurement of the CR residence time in the Galaxy. In this contribution, measurements of the $^7$Be, $^9$Be, and $^{10}$Be fluxes and of their ratios, based on data collected by AMS, are presented.
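As a schematic of the radioactive-clock idea (a deliberately simplified picture, not the propagation model actually used in the analysis): in a naive leaky box, the surviving $^{10}$Be fraction after a residence time t is exp(-t/(γτ)), so comparing the measured $^{10}$Be/$^9$Be ratio to the production ratio yields t.

    import numpy as np

    TAU_BE10_YR = 2.0e6  # 10Be lifetime (half-life 1.39 Myr / ln 2)

    def residence_time_yr(ratio_measured, ratio_production, gamma=1.0):
        # exp(-t / (gamma * tau)) = measured / production  ->  solve for t
        return -gamma * TAU_BE10_YR * np.log(ratio_measured / ratio_production)

    # Illustrative input ratios only, not AMS results:
    print(f"t ~ {residence_time_yr(0.1, 0.6):.2e} yr")

With these illustrative inputs the naive clock gives a residence time of a few Myr; realistic propagation models, which account for decay during diffusion, yield longer times.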
An analysis of the anisotropy of the arrival directions of galactic protons, helium, carbon and oxygen has been performed with the Alpha Magnetic Spectrometer on the International Space Station. These results make it possible to investigate the origin of the spectral hardening observed by AMS in these cosmic-ray species. The AMS results on the dipole anisotropy are presented, along with a discussion of the implications of these measurements.
The precision measurement of daily proton fluxes with AMS during ten years of operation, in the rigidity interval from 1 to 100 GV, is presented. The proton fluxes exhibit variations on multiple time scales. From 2014 to 2018, we observed recurrent flux variations with a period of 27 days. Shorter periods of 9 days and 13.5 days were observed in 2016. The strength of all three periodicities changes with time and rigidity. Unexpectedly, the strength of the 9-day and 13.5-day periodicities increases with increasing rigidity up to ~10 GV and ~20 GV, respectively; it then decreases with increasing rigidity up to 100 GV.
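Periodicities of this kind are typically extracted from a daily-flux time series with a Lomb-Scargle periodogram; a minimal sketch on toy data (the injected 27-day modulation is synthetic, not AMS data):

    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    days = np.arange(1500.0)  # ~4 years of daily flux measurements
    flux = 1.0 + 0.03 * np.sin(2 * np.pi * days / 27.0) \
               + 0.01 * rng.standard_normal(days.size)

    periods = np.linspace(5.0, 40.0, 2000)  # trial periods [days]
    ang_freq = 2 * np.pi / periods          # angular frequencies [rad/day]
    power = lombscargle(days, flux - flux.mean(), ang_freq)
    print(f"strongest period ~ {periods[np.argmax(power)]:.1f} days")  # ~27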
The detailed measurement of the positron fluxes from May 20, 2011 to October 29, 2019 with the Alpha Magnetic Spectrometer on the International Space Station is presented. The time variation of the fluxes on different time scales, associated with the solar activity over half of solar cycle 24, is shown. The measured charge-sign-dependent effects on particles with the same mass are discussed.
Since its launch, the Alpha Magnetic Spectrometer-02 (AMS-02) has delivered outstanding-quality measurements of the spectra of cosmic-ray (CR) species, which have resulted in a number of breakthroughs. Among the most recent AMS-02 results are the measurements of the spectra of CR fluorine, sodium and aluminum up to 2 TV. Given their low solar-system abundances, a significant fraction of each element is produced in fragmentations of heavier species, predominantly Ne, Mg, and Si. Using AMS-02 together with ACE-CRIS and Voyager 1 data, our calculations within the GALPROP-HELMOD framework provided updated local interstellar spectra (LIS) for these species, in the rigidity range from a few MV to a few TV. While the sodium spectrum agrees well with the predictions, the fluorine and aluminum LIS show excesses below 10 GV, hinting at primary components. In this context, the origin of other previously found excesses in Li and Fe is discussed. The observed excesses in Li, F, and Al appear to be consistent with the local Wolf-Rayet stars hypothesis, invoked to reproduce the anomalous 22Ne/20Ne, 12C/16O, and 58Fe/56Fe ratios in CRs, while the excess in Fe is likely connected with past supernova activity in the solar neighborhood.
Many theories beyond the Standard Model predict new phenomena, such as heavy vectors or scalars, as well as vector-like quarks, in final states containing bottom or top quarks. Such final states offer great potential to reduce the Standard Model background, although with significant challenges in reconstructing and identifying the decay products and modelling the remaining background. The recent 13 TeV pp results, along with the associated improvements in identification techniques, will be reported.
We discuss a complete setup for simulations, relevant for the production of a single vector-like quark at hadron colliders, including finite width effects, signal-background interference effects and next-to-leading order QCD corrections. This procedure can be extended to include additional interactions with exotic particles. We provide quantitative results for representative benchmark scenarios for a vector-like top-partner, and we determine the role of the interference terms for a range of masses and widths of phenomenological significance.
We present results of searches for massive vector-like third-generation quark and lepton partners using proton-proton collision data collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV. Pair production of vector-like leptons is studied, with decays into final states containing third-generation quarks and leptons. Vector-like quarks are studied in both single and pair production, considering final states containing top and bottom quarks, electroweak gauge bosons and Higgs bosons. The searches use several categories of reconstructed objects, from multi-leptonic to fully hadronic final states. We set exclusion limits on both the vector-like particle mass and cross sections, for combinations of the vector-like particle branching ratios.
We present a threshold resummation calculation for the associated production of squarks and gauginos at the LHC to the next-to-leading logarithmic (NLL) accuracy, matched to next-to-leading order (NLO) QCD corrections. Analytical results are obtained for the process-dependent soft anomalous dimension and the hard matching coefficient. Numerically, the NLL contributions increase the total NLO cross section by 2 to 8% for central scale choices and squark masses of 1 to 3 TeV, respectively, and reduce the dependence on the factorisation and renormalisation scales typically from up to ±18% to below ±8%. We have implemented the NLO and NLO+NLL calculations in the publicly available program Resummino.
Results from the CMS experiment are presented for supersymmetry searches targeting so-called compressed spectra, which have small mass splittings between the different supersymmetric partners. Such a spectrum presents unique experimental challenges, and this talk describes the new techniques utilized by CMS to address such difficult scenarios. The searches use proton-proton collision data with a luminosity of up to 138 fb-1, at a center-of-mass energy of 13 TeV, collected during LHC Run 2.
The direct production of electroweak SUSY particles, including sleptons, charginos, and neutralinos, is a particularly interesting area with connections to dark matter and the naturalness of the Higgs mass. The small production cross sections lead to difficult searches, despite relatively clean final states. This talk will highlight the most recent results of searches performed by the ATLAS experiment for supersymmetric particles produced via electroweak processes, including analyses targeting small mass splittings between SUSY particles. Models are targeted in both R-parity conserving as well as R-parity violating scenarios.
We study in detail the viability and the patterns of a strong first-order electroweak phase transition, as a prerequisite to electroweak baryogenesis, in the framework of the $Z_3$-invariant Next-to-Minimal Supersymmetric Standard Model (NMSSM), in the light of recent experimental results from the Higgs sector, dark matter (DM) searches, and searches for the lighter chargino and neutralinos at the Large Hadron Collider (LHC). For the latter, we undertake thorough recasts of the relevant recent LHC analyses. With the help of a few benchmark scenarios, we demonstrate that, while the LHC has started to eliminate, rather steadily, regions of the parameter space with relatively small $\mu_{\mathrm{eff}}$ that favor the coveted strong first-order phase transition, there remain phenomenologically involved and compatible regions which are not yet sensitive to the current LHC analyses. It is further noted that such regions can also be compatible with all pertinent theoretical and experimental constraints. We then analyze the prospects of detecting the stochastic gravitational waves expected to arise from such a phase transition at various future/proposed experiments, within the mentioned theoretical framework, and find them to be somewhat ambitious under the currently projected sensitivities of those experiments.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus on a narrower range of masses: the natural scenario where dark matter originates from thermal contact with familiar matter in the early Universe requires the DM mass to lie within about an MeV to 100 TeV. Considerable experimental attention has been given to exploring Weakly Interacting Massive Particles in the upper end of this range (a few GeV to ~TeV), while the region from ~MeV to ~GeV is largely unexplored. Most of the stable constituents of known matter have masses in this lower range, tantalizing hints for physics beyond the Standard Model have been found here, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well. It is therefore a priority to explore it. If there is an interaction between light DM and ordinary matter, as there must be in the case of a thermal origin, then there necessarily is a production mechanism in accelerator-based experiments. The most sensitive way to search for this production (if the interaction is not electron-phobic) is to use a primary electron beam to produce DM in fixed-target collisions. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment with unique sensitivity to light DM in the sub-GeV range. This contribution will give an overview of the theoretical motivation, the main experimental challenges and how they are addressed, as well as projected sensitivities in comparison to other experiments.
High energy e$^+$e$^-$ colliders offer a unique possibility for the most general dark matter search, based on the mono-photon signature. Analysis of the energy spectrum and angular distributions of photons from initial-state radiation can be used to search for hard processes with invisible final-state production.
Most studies in the past focused on scenarios assuming heavy mediator exchange. We note, however, that scenarios with light mediator exchange are still not excluded by existing experimental data, if the mediator coupling to Standard Model particles is very small. We propose a novel approach, where the experimental sensitivity to light mediator production is defined in terms of both the mediator mass and the mediator width. This approach is more model-independent than approaches assuming given mediator couplings to SM and DM particles.
Presented in this contribution are results on the expected sensitivity of the International Linear Collider (ILC) and Compact Linear Collider (CLIC) experiments to dark matter production. The use of beam polarisation can significantly improve the sensitivity to DM production scenarios and reduce the impact of systematic uncertainties. The precision of the determination of the mediator mass, width and coupling structure, in case of a signal observation, is also discussed.
Extensions of the Two Higgs Doublet Model with a complex scalar singlet (2HDMS) can accommodate all current experimental constraints and are highly motivated candidates for Beyond Standard Model physics. The model can provide a dark matter candidate, explain baryogenesis, and produce gravitational-wave signals. In this work, we focus on the dark matter phenomenology of the 2HDMS with the complex scalar singlet as the dark matter candidate. We study the variations of dark matter observables with respect to the model parameters and present representative benchmark points in the light and heavy dark-matter mass regions allowed by existing experimental constraints from dark matter, flavour physics and collider searches. We also compare real and complex scalar dark matter in the context of the 2HDMS. Further, we discuss the discovery potential of such scenarios at the HL-LHC and at future e+e- colliders.
The quest for new physics beyond the Standard Model is boosted by the recently observed deviations of the anomalous magnetic moments of the muon and the electron from their respective theoretical predictions. In the present work, we propose a suitable extension of the minimal $L_{\mu}-L_{\tau}$ model to address these two experimental results, as the minimal model is unable to provide any realistic solution. In our model, a new Yukawa interaction involving the first generation of leptons, a singlet vector-like fermion ($\chi^{\pm}$) and a scalar (either an SU(2)$_{L}$ doublet $\Phi^\prime_2$ or a complex singlet $\Phi^\prime_4$) provides an additional one-loop contribution to $a_{e}$ only, on top of the usual contribution of the $L_{\mu}-L_{\tau}$ gauge boson ($Z_{\mu\tau}$) to both the electron and the muon anomalies. A judicious choice of the $L_{\mu}-L_{\tau}$ charges of these new fields results in a strongly interacting scalar dark matter candidate in the $\mathcal{O}({\rm MeV})$ range after taking into account the bounds from relic density, unitarity and self-interaction. The freeze-out dynamics of the dark matter is greatly influenced by $3\rightarrow2$ scatterings, while kinetic equilibrium with the SM bath is ensured by $2\rightarrow2$ scatterings with neutrinos, in which $Z_{\mu\tau}$ plays a pivotal role. Direct detection of the dark matter is possible through scatterings with nuclei mediated by the SM $Z$ boson. Moreover, our proposed model can also be tested at upcoming $e^+e^-$ colliders by searching for an opposite-sign di-electron plus missing energy signal, i.e. $e^{+} e^{-} \rightarrow \chi^{+} \chi^{-} \rightarrow e^{+} e^{-} \cancel{E}_T$, in the final state.
We consider the direct-detection rate for Majorana dark matter scattering off nuclei in an SU(2) × U(1) invariant effective theory and compare it against the LHC reach. Current constraints from direct-detection experiments already bound the mediator mass to be well into the TeV range for WIMP-like scenarios. This motivates a consistent and systematic exploration of the parameter space to map out possible regions where the direct-detection rates could be suppressed. We identify such regions and construct consistent UV models that generate the relevant effective theory. We then discuss the corresponding constraints from both collider and direct-detection experiments on the same parameter space, and explore a benchmark scenario where, compared to the future XENONnT experiment, LHC constraints will have greater sensitivity to the mediator mass.
In scenarios with very small dark matter (DM) couplings and small mass splittings between the DM and other dark-sector particles, so-called "coscattering" or "conversion-driven freeze-out" can be the dominant mechanism for DM production. We present the inclusion of this mechanism in micrOMEGAs together with a case study of the phenomenological implications in the singlet-triplet model. For the latter, we focus on the transition between coannihilation and coscattering processes. Indeed, we observe that coscattering is needed to describe the thermal behaviour of the DM for very small couplings, for which coannihilation is not sufficient to obtain a small enough relic density. Including coscattering processes thus opens up a new region in the parameter space of the model. The charged and neutral triplet states are often long-lived in this region; we therefore also discuss collider constraints from long-lived signatures obtained with SModelS.
Electromagnetic probes such as photons and dielectrons are a unique tool to study the space-time evolution of the hot and dense matter created in ultra-relativistic heavy-ion collisions. They are produced by a variety of processes during all stages of the collision with negligible final-state interactions. At low dielectron invariant mass ($m_{\rm ee}$), thermal radiation from the hot hadron gas contributes to the dielectron spectrum via decays of $\rho$ mesons, whose spectral function is sensitive to chiral-symmetry restoration. At larger $m_{\rm ee}$, thermal radiation from the QGP carries information about the early temperature of the medium. It is nevertheless dominated by a large background of correlated heavy-flavour hadron decays affected by energy loss and flow in the medium. Alternatively, the transverse momentum ($p_{\rm T,ee}$) of virtual direct photons, including thermal photons at low $p_{\rm T,ee}$, can be extracted from the dielectron data together with inclusive photon measurements. In proton-proton (pp) collisions, such measurement serves as a fundamental test for perturbative QCD calculations and as a baseline for the studies in heavy-ion collisions. Recently, pp collisions with high charged-particle multiplicities have been found to exhibit interesting phenomena showing surprising similarities with those in heavy-ion collisions. Low-mass dielectrons could provide additional information regarding the underlying physics processes in such collisions.
In this talk, the latest ALICE results on dielectron studies in Pb-Pb and pp collisions at the center-of-mass energies of $\sqrt{s_{\rm NN}}$ = 5.02 TeV and 13 TeV will be presented using the large data sample collected during the LHC Run 2. The results will be compared to the expected dielectron yield from known hadronic sources and predictions for thermal radiation from the medium. The production of direct photons in the different colliding systems including high-multiplicity pp collisions will be discussed.
The strong electromagnetic field generated by the colliding nuclei in heavy-ion collisions can be represented by a spectrum of photons, leading to photon-induced interactions. While such interactions are traditionally studied in ultra-peripheral collisions (UPC) without any nuclear overlap, significant enhancements of dilepton pairs and J/$\psi$ production at very low transverse momentum ($p_{T}$) above the expected hadronic interaction yields have been observed experimentally. The observed excess yields exhibit a much weaker centrality dependence compared to the hadronic production and are consistent with photon-induced interactions. The measurements of very-low-$p_T$ vector meson and dilepton production in peripheral heavy-ion collisions provide a unique opportunity to study photoproduction in collisions with well-defined and smaller impact parameters compared to that of UPC.
In 2014 and 2016, the STAR experiment recorded large samples of Au+Au collisions at $\sqrt{s_{_{\rm NN}}}$ = 200 GeV. In this presentation, we will present new measurements of very-low-$p_T$ dilepton and J/$\psi$ production in peripheral Au+Au collisions via the $\mu^+\mu^-$ channel using these datasets, which are complementary to the previous dielectron results. Distributions of invariant mass, $p_{T}^{2}$ and angular modulation will be shown. Physics implications will also be discussed together with model comparisons.
Electroweak W and Z bosons created in hard-scattering processes at the early stage of the collisions are efficient probes of the initial state of the collisions. While the measurements of W and Z bosons in p-Pb and Pb-Pb collisions provide insights on the nuclear modification of the parton distribution functions, the results in pp collisions are a stringent test of perturbative-QCD-based calculations and production mechanisms. In pp collisions, W bosons can be produced by pair annihilation but also by higher-order processes with additional hadron production. An investigation of these bosons, in relation to the hadrons in the rest of the event, can give insight into multi-parton interactions in high-multiplicity events and the role of color-reconnection mechanisms.
Electroweak bosons are studied with ALICE in pp collisions at $\sqrt{s}$ = 13 TeV, p-Pb collisions at $\sqrt{s_{NN}}$ = 8.16 TeV and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV via their leptonic decays in the muon and electron channels, at forward rapidity (-4 < $\eta$ < -2.5) and midrapidity ($|\eta|$ < 0.8), respectively. The observations in p-Pb and Pb-Pb collisions at forward rapidity give access to low Bjorken-x values, a phase-space region poorly constrained by heavy-ion experiments.
A review of the most recent results on the production of W$^+$, W$^-$ and Z bosons is presented. The results include differential measurements of the normalised production yields, production cross sections and nuclear modification factors as a function of rapidity, transverse momentum, collision centrality and charged-particle multiplicity. The lepton-charge asymmetry measurement is also reported. Particular emphasis will be placed on the new measurement of W-boson production in association with hadrons as a function of the charged-particle multiplicity in pp collisions. Comparisons with theoretical model calculations, providing insights into the production mechanisms and new constraints for the determination of the nuclear parton distribution functions, will also be discussed.
Ultrarelativistic heavy ions of large charge Z are accompanied by a large flux of Weizsäcker–Williams photons. This opens up the opportunity to study a variety of photo-induced nuclear processes, as well as photon-photon processes. We present a formalism that allows differential distributions of leptons produced in semi-central (impact parameter < 2 $\times$ nucleus radius) nucleus-nucleus collisions to be calculated for a given centrality. In this approach the differential cross section is calculated using the complete polarization density matrix of photons resulting from the Wigner-distribution formalism. We will present several differential distributions, such as the dilepton invariant mass, the dilepton transverse momentum and the acoplanarity, for different regions of centrality. The results of the calculations will be compared to experimental data of the STAR, ALICE and ATLAS collaborations. Very good agreement with the data is achieved without free parameters in all cases, and the agreement is much better than for previous approaches used in the literature. Because the photon Wigner distribution is a genuine Wigner function, the standard photon fluxes in momentum space and in impact-parameter space are recovered from it by integration over impact parameter or over momentum, respectively.
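Schematically (in our notation, not necessarily that of the talk), with $W_\gamma(\omega, \mathbf{b}, \mathbf{q}_\perp)$ denoting the photon Wigner distribution for photon energy $\omega$, the statement above reads
$$N_\gamma(\omega, \mathbf{q}_\perp)=\int d^2b\; W_\gamma(\omega, \mathbf{b}, \mathbf{q}_\perp), \qquad N_\gamma(\omega, \mathbf{b})=\int \frac{d^2q_\perp}{(2\pi)^2}\; W_\gamma(\omega, \mathbf{b}, \mathbf{q}_\perp),$$
i.e. the standard momentum-space and impact-parameter-space fluxes are the two marginals of a single quasi-probability distribution; the normalization of the $q_\perp$ integral is our convention.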
We obtain a good description of the data without introducing additional final-state rescattering of leptons in the quark-gluon plasma. More work is thus necessary to identify observables that can probe electromagnetic properties of the QGP.
We review recent CMS results on diffractive and exclusive processes in heavy ion collisions, including photon-induced processes in ultraperipheral collisions.
Photon-photon and photonuclear reactions are induced by the strong electromagnetic field generated in ultrarelativistic heavy-ion collisions. These processes have been extensively studied in ultra-peripheral collisions, with impact parameters larger than twice the nuclear radius. In recent years, both the photoproduction of the J/$\psi$ vector meson and the production of dileptons via photon-photon interactions have been observed in A-A collisions with nuclear overlap. Photoproduced quarkonia can probe the nuclear gluon distributions at low Bjorken-x, while the continuum dilepton production could be used to further map the electromagnetic fields produced in heavy-ion collisions and to study possible induced or final-state effects in overlapping hadronic interactions. The two measurements are complementary in constraining the theory behind photon-induced reactions in A-A collisions with nuclear overlap and the potential interaction of the measured probes with the formed and fast-expanding QGP medium. In this presentation, measurements of coherent J/$\psi$ photoproduction cross sections in Pb-Pb collisions in the 40%-90% centrality range, measured at midrapidity in the dielectron channel with ALICE, will be presented for the first time using the full Run 2 data. Thanks to the excellent tracking resolution of the TPC, the transverse-momentum distribution of coherently photoproduced J/$\psi$ can be accurately measured. Final results on coherent J/$\psi$ photoproduction cross sections at forward rapidity in the dimuon decay channel in the 30-90% centrality range will also be shown. Finally, the measurement of an excess in the midrapidity dielectron yield at low mass and $p_{\rm T}$ in the 50-90% centrality interval will be shown. Results will be compared with available models.
Measurements of direct photons can provide valuable information on the properties and dynamics of the quark-gluon plasma (QGP) by comparing them to model calculations that describe the whole evolution of the system created in heavy-ion collisions, from the initial conditions to the pre-equilibrium, QGP, and hadronic phases.
In the ALICE experiment, photons can be reconstructed either by using the calorimeters or via conversions in the detector material. The photon conversion method benefits from an excellent energy resolution and is able to provide direct photon measurements down to $p_{T}$ = 0.4 GeV/c. For Hanbury Brown and Twiss (HBT) correlation studies, the detector setup can be exploited to combine a conversion photon with a calorimeter photon, such that near-zero opening angles are measured.
In this talk, we present the first measurements of direct-photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV by ALICE, including direct-photon spectra from central to peripheral events. The latest results of the first analysis of photon HBT correlations will be shown as well.
While a Higgs boson with a mass of 125 GeV was discovered in 2012, the structure of the Higgs sector remains unknown.
In various new-physics models, the Higgs sector is extended from its minimal form in the standard model (SM), and additional Higgs bosons appear.
Therefore, the discovery of additional Higgs bosons would be clear evidence for extended Higgs models, and the direct search for these particles is a key program at the LHC and HL-LHC. In addition, extended Higgs models can be studied indirectly by measuring deviations in the couplings of the SM-like Higgs boson.
So far, the measured properties of the discovered Higgs boson have turned out to be consistent with the SM predictions within theoretical and experimental uncertainties.
This leads us to investigate the approximate alignment scenario, where the couplings of the 125 GeV Higgs boson are close to the SM predictions.
It is known that the decays of additional Higgs bosons into the 125 GeV Higgs boson are useful channels for studying the two-Higgs-doublet model (2HDM), and the parameter space of the 2HDM can be explored comprehensively by utilizing the synergy between direct searches for additional Higgs bosons and precision studies of the 125 GeV Higgs boson.
However, the size of the radiative corrections is comparable to the tree-level contribution, especially in the approximate alignment scenario, so tree-level calculations are not reliable.
In this talk, we discuss the impact of radiative corrections in decays of the charged and CP-odd Higgs bosons in the 2HDM.
We show that radiative corrections sizably change the theoretical predictions for the decay branching ratios of the charged and CP-odd Higgs bosons in the approximate alignment scenario. In addition, we discuss the discrimination of the four types of Yukawa interaction in the 2HDM by studying the decay patterns of the additional Higgs bosons.
This talk is based on Nucl. Phys. B 966 (2021) 115375 [arXiv:2010.15057], Nucl. Phys. B 973 (2021) 115581 [arXiv:2108.11868], and work in progress.
The first-order electroweak phase transition (FOEWPT) plays an important role in the scenario of electroweak baryogenesis. In this talk, we discuss first-order phase transitions in a new effective field theory. We show that the SMEFT is not appropriate for discussing the FOEWPT. We also show that the parameter regions satisfying the sphaleron decoupling condition can be probed at future collider experiments such as the HL-LHC and ILC, and by future gravitational-wave observatories such as DECIGO. This talk is based on arXiv:2202.12774.
The Standard Model effective field theory (SMEFT) provides a general framework to include the effects of beyond-Standard-Model physics residing at some higher energy scale $\Lambda$. We focus on the modification of the top-quark Yukawa coupling, which is one of the important avenues for studying EWSB. With this motivation, we consider tHq production at the LHC. In this context, the relevant sensitive dimension-6 SMEFT operators are identified and constrained using the latest LHC data that are directly sensitive to them. We develop a strategy for constraining these operators that is complementary to the global-fit approach. The preferred ranges of the operators are presented along with their best-fit values. Finally, we will discuss the feasibility of finding the signatures of these operators at the LHC for the high-luminosity options of $\rm 300~fb^{-1}$ and $\rm 3000~fb^{-1}$. This work is nearly complete, and a draft is in preparation.
We have evaluated the baryon asymmetry produced by electroweak baryogenesis in the aligned two-Higgs-doublet model, in which the Yukawa interactions are aligned to avoid dangerous flavor-changing neutral currents, and the coupling constants of the lightest Higgs boson, with mass 125 GeV, coincide at tree level with those in the standard model, satisfying the current LHC data [1]. In this model, the severe constraint from the electric dipole moment of the electron, which is normally difficult to satisfy, can be avoided by destructive interference between CP-violating phases in the Yukawa interactions and the scalar couplings in the Higgs potential. We will show some benchmark scenarios and the predictions for various future experiments under the currently available data and basic theoretical bounds.
[1] K. Enomoto, S. Kanemura and Y. Mura, JHEP 01 (2022) 104
FASER$\nu$ at the LHC is designed to directly detect collider neutrinos of all three flavors and provide new measurements of their cross sections at energies higher than those from any previous artificial source. In the pilot-run data taken during LHC Run 2 in 2018, we observed the first neutrino interaction candidates at the LHC, opening a new avenue for studying neutrinos from high-energy colliders. In 2022-2025, during LHC Run 3, we expect to collect $\sim$2,000 $\nu_e$, $\sim$6,000 $\nu_{\mu}$, and $\sim$40 $\nu_{\tau}$ charged-current interactions in FASER$\nu$, along with neutral-current interactions. In March 2022, we installed the first physics-run module in the tunnel. Here we present the physics potential and status of FASER$\nu$.
SND@LHC is a compact, stand-alone experiment to perform measurements with neutrinos produced at the LHC in a hitherto unexplored pseudo-rapidity region of 7.2 < $\eta$ < 8.6, complementary to all the other experiments at the LHC. The experiment is located 480 m downstream of IP1 in the unused TI18 tunnel. The detector is composed of a hybrid system based on an 800 kg target mass of tungsten plates, interleaved with emulsion and electronic trackers, followed downstream by a calorimeter and a muon system. The configuration allows all three neutrino flavours to be distinguished efficiently, opening a unique opportunity to probe the physics of heavy-flavour production at the LHC in a region that is not accessible to ATLAS, CMS and LHCb. This region is of particular interest also for future circular colliders and for predictions of very-high-energy atmospheric neutrinos. The detector concept is also well suited to searching for Feebly Interacting Particles via signatures of scattering in the detector target. In the first phase, the detector will operate throughout LHC Run 3 to collect a total of 250 fb$^{-1}$. The experiment was approved by the Research Board at CERN one year ago and is currently completing its installation and commissioning phase. A new era of collider neutrino physics is just starting.
We present a minimal extension of the Type II Seesaw neutrino mass model with a spontaneously generated CP phase. We demonstrate that this minimal model, which augments the Type II Seesaw framework by an additional right-handed neutrino and an inert triplet, can explain the neutrino oscillation data with a minimal number of free parameters while providing a viable dark matter candidate.
We present compact analytical expressions for neutrino oscillation probabilities in the presence of invisible neutrino decay, with matter effects explicitly included. The probabilities are obtained in both the 2-flavor and 3-flavor formalisms.
The inclusion of decay leads to a non-Hermitian effective Hamiltonian, where the Hermitian component represents oscillation, and the anti-Hermitian component corresponds to invisible decay of neutrinos. These two components may not commute, leading to a mismatch between the effective mass eigenstates and the decay eigenstates of neutrinos. Even if these components commute in vacuum under certain scenarios, they will invariably become non-commuting due to matter effects.
We overcome this by employing the inverse Baker-Campbell-Hausdorff (BCH) expansion and the Cayley-Hamilton theorem, applied in the 3-flavor framework. We also obtain the probabilities in the One Mass Scale Dominance (OMSD) approximation. The analytical results thus obtained provide physical insights into the possible effects of neutrino decay as neutrinos propagate through Earth matter. These results may be used for long-baseline or atmospheric neutrino-oscillation experiments. We also point out certain non-intuitive features of the neutrino oscillation probability in the presence of decay and explain them using our analytical approximations.
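In compact form (our notation), the structure described above is
$$\mathcal{H}_{\rm eff}=H-\frac{i}{2}\Gamma, \qquad H=H^\dagger \;(\text{oscillation}), \qquad \Gamma=\Gamma^\dagger \;(\text{invisible decay}),$$
with the effective mass and decay eigenstates coinciding only when $[H,\Gamma]=0$; in matter, $H$ acquires the standard charged-current potential $\sqrt{2}G_F N_e$ in the flavor basis, which generically destroys this commutation and motivates the inverse-BCH and Cayley-Hamilton techniques mentioned above.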
AMoRE (Advanced Mo-based Rare process Experiment) is an international project to search for the neutrinoless double beta (0$\nu\beta\beta$) decay of $^{100}$Mo in enriched Mo-based scintillating crystals using metallic magnetic calorimeters in a mK-scale cryogenic system. The project aims at operating the detector in a zero-background condition to detect this extremely rare decay in the region of interest near 3.034 MeV, the Q-value of $^{100}$Mo 0$\nu\beta\beta$. The simultaneous measurement of phonon and photon signals based on the metallic magnetic calorimeter (MMC) read-out is performed at temperatures of a few tens of mK to achieve a high resolution and good background rejection. AMoRE-I, the phase following the successfully completed AMoRE-pilot, has been running with thirteen $^{48\mathrm{depleted}}$Ca$^{100}$MoO$_4$ and five Li$_2$$^{100}$MoO$_4$ crystals in the Yangyang underground laboratory, corresponding to ~3 kg of $^{100}$Mo. Since the beginning of the experiment in Sep. 2020, we have accumulated more than 300 days of physics data and analyzed over two-thirds of it. Here, we present the current status of the experiment, its analysis methods, and the most recent performance results.
Neutrino physics lies among the most obscure and fascinating sectors of the Standard Model particle landscape. In particular, the measurement of the absolute neutrino mass is still an unresolved issue, pursued by several experiments over the years. The state of the art in model-independent $\nu$-mass searches is KATRIN, which, on reaching its ultimate goal, will push the sensitivity limits of the spectrometric approach to the extreme. An alternative for future research is the calorimetric approach: by embedding the source inside the detector, this method avoids many of the systematic uncertainties inherent to the spectrometric configuration.
The HOLMES calorimetric experiment started in 2014 as an ERC project and is now close to its first data-taking period. It will set an upper limit on the $\nu_{e}$ mass, aiming for a sensitivity of the order of the eV. At the same time, it will prove the calorimetric approach as a viable one for the direct measurement of the $\nu_{e}$ mass. The goal is to measure the $^{163}$Ho electron capture by means of low-temperature microcalorimeters in a cryogenic set-up. Except for the neutrinos, the decay products of the $^{163}$Ho nuclei are completely contained in a gold absorber. Both the shape and the end-point of the EC spectrum then carry information about the mass of the escaping $\nu_{e}$. The deposited energy is measured with Mo-Cu Transition Edge Sensors (TES), which read temperature rises in the absorber as steep resistance jumps.
HOLMES is a challenging experiment that exploits advanced physics for both the detector and the read-out technique. My contribution will focus on the latest updates concerning the pre-measurement phase of the experiment. Thanks to several calibration measurements, the experimental set-up is now prepared, and the data-taking process runs stably with arrays of 32 TESs. At the same time, the pulse-analysis routines and algorithms are ready to deal with the reconstruction of the holmium spectrum. The detector fabrication has recently achieved promising results that will lead the collaboration to a low-dose measurement with a few Bq of $^{163}$Ho ion-implanted in each TES. During this phase, HOLMES will set its first $\nu_{e}$ mass limit.
The Electron Capture in $^{163}$Ho experiment (ECHo) is a running experiment for the determination of the neutrino mass scale via the analysis of the end-point region of the $^{163}$Ho electron capture spectrum. In the first phase, called ECHo-1k, data were collected for several months with about 60 metallic magnetic calorimeter (MMC) pixels enclosing $^{163}$Ho with an activity of about 1 Bq per pixel.
The goal of this first phase is to reach a sensitivity on the effective electron neutrino mass below 20 eV/c$^2$ through the analysis of a $^{163}$Ho spectrum with more than $10^8$ events, and to demonstrate the potential to upscale the ECHo technology to a substantially more sensitive experiment in the next phase. Results from the analysis of the acquired data will be presented, with a focus on the data-reduction efficiency and on the procedures used to obtain the final high-statistics spectrum. A preliminary analysis of the $^{163}$Ho spectral shape will be described, and the expected sensitivity on the effective electron neutrino mass, based on the properties of the presented spectrum, will be discussed.
We will then present how the performance obtained by the detectors during ECHo-1k has led to the development of an optimized detector system for the second phase, ECHo-100k, in which about 12000 MMC pixels, each hosting $^{163}$Ho with an activity of 10 Bq, will be operated simultaneously. A sensitivity on the effective electron neutrino mass at the 1 eV/c$^2$ level will be reached with three years of data taking.
This research was performed in the framework of the Research Unit FOR2202 "Neutrino Mass Determination by Electron Capture in 163Ho, ECHo", funded through the Deutsche Forschungsgemeinschaft (DFG). F. Mantegazzini and A. Barth acknowledge support by the Research Training Group HighRR (GRK 2058), funded through the DFG.
The IsoDAR (Isotope Decay At Rest) experiment, to be installed at Yemilab in Korea, utilizes a 60 MeV cyclotron proton source to produce an intense flux of neutrinos from Li-8 decays, at a level of 10^23 per year, with a kiloton-scale mineral-oil detector in close proximity. In addition to its neutrino oscillation program, IsoDAR can test new physics in the neutrino sector, namely non-standard neutrino interactions (NSI) and sterile neutrinos. In particular, IsoDAR has the power to discriminate between sterile-neutrino scenarios such as the 3+2 and 3+1 variants. The beam-target environment also produces a rich spectrum of nuclear excited states, which can be exploited to search for axion-like particles and other dark-sector states produced in such nuclear transitions. In this talk, IsoDAR's broad physics capabilities, like the ones outlined here, will be discussed.
The High Luminosity Large Hadron Collider (HL-LHC) at CERN is expected to collide protons at a center-of-mass energy of 14 TeV and to reach an unprecedented peak instantaneous luminosity of 7 x 10^34 cm^-2 s^-1, with an average number of pileup events of 200. This will allow the ATLAS and CMS experiments to collect integrated luminosities of up to 4000 fb^-1 during the project lifetime. To cope with this extreme scenario, the CMS detector will be substantially upgraded before the start of the HL-LHC, a plan known as the CMS Phase-2 upgrade. The entire CMS inner tracker (IT) will be replaced; the new detector will feature increased radiation hardness, higher granularity and the capability to handle higher data rates and a longer trigger latency. The detector is composed of pixel sensors with a pixel area of 2500 um^2 and a new ASIC, designed in 65 nm CMOS technology and powered in a novel serial scheme. The system mechanics will be lightweight, based on carbon fiber, with CO2 cooling. In this contribution, we describe the design of the IT system along with the latest results on system testing of the prototypes.
The Large Hadron Collider at CERN will undergo a major upgrade during Long Shutdown 3, from 2026 to 2028. The High Luminosity LHC (HL-LHC) is expected to deliver peak instantaneous luminosities of up to 7.5 x 10^34 cm^-2 s^-1 and an integrated luminosity in excess of 3000 fb^-1 during ten years of operation. In order to fully exploit the delivered luminosity and to cope with the demanding operating conditions, the whole silicon tracking system of the CMS experiment will have to be replaced. The Phase-2 Outer Tracker (OT) will have increased radiation hardness, higher granularity, and will be able to cope with larger data rates. A key upgrade of the CMS detector is to incorporate the identification of charged-particle trajectories in the hardware-based (L1) trigger system. A 40 MHz silicon-based track trigger on the scale of the CMS detector has never been built before; it is a novel handle with the potential not only to solidify the CMS trigger strategy but to enable searches for completely new physics signatures. To achieve this, each module consists of two closely spaced sensors connected to the same readout chips, which correlate data from both sensors for a rough transverse-momentum measurement. This novel concept allows trigger rates to be kept at a sustainable level without sacrificing physics potential. The design of the CMS Phase-2 Outer Tracker, highlights of the prototyping activities, and recent L1 track-trigger developments will be presented.
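As a rough illustration of the pT-module principle described above (all numbers below are generic assumptions, not the actual CMS design values), the displacement between the hits in the two closely spaced sensors grows with track curvature, so a simple window cut selects high-pT "stubs":

# Toy sketch of the Outer Tracker "stub" idea: two sensors a few mm apart
# correlate hits to estimate track pT from the local bend in the solenoid
# field.  All parameter values are illustrative assumptions.

B_FIELD_T = 3.8        # solenoid field in tesla (assumed)
SENSOR_SEP_MM = 1.6    # spacing between the two sensors of a module (assumed)
RADIUS_M = 0.5         # radial position of the module (assumed)

def stub_bend_mm(pt_gev):
    """Approximate hit displacement between the two sensors for a track of
    transverse momentum pt_gev, using R = pT / (0.3 B) for the bend radius."""
    track_radius_m = pt_gev / (0.3 * B_FIELD_T)
    sin_alpha = RADIUS_M / (2.0 * track_radius_m)  # local crossing angle
    if sin_alpha >= 1.0:
        return float("inf")  # track curls up before reaching the module
    tan_alpha = sin_alpha / (1.0 - sin_alpha ** 2) ** 0.5
    return SENSOR_SEP_MM * tan_alpha

def passes_stub_window(pt_gev, window_mm=0.5):
    """Accept a stub when the bend is inside the window, i.e. pT is high."""
    return stub_bend_mm(pt_gev) <= window_mm

for pt in (1.0, 2.0, 5.0, 10.0):
    print(f"pT = {pt:5.1f} GeV -> bend = {stub_bend_mm(pt):.3f} mm, "
          f"accepted: {passes_stub_window(pt)}")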
In the high-luminosity era of the Large Hadron Collider, the instantaneous luminosity is expected to reach unprecedented values, resulting in up to 200 proton-proton interactions in a typical bunch crossing. To cope with the resulting increase in occupancy, bandwidth and radiation damage, the ATLAS Inner Detector will be replaced by an all-silicon system, the Inner Tracker (ITk). The innermost part of the ITk will consist of a pixel detector with an active area of about 14 m2. To deal with the differing requirements in terms of radiation hardness, power dissipation and production yield, several silicon-sensor technologies will be employed in the five barrel and endcap layers. Prototype modules assembled with RD53A readout chips are being built to evaluate their production rate, and their thermal and electrical performance before and after irradiation. In addition, a new powering scheme, serial powering, will be employed in the ITk pixel detector, helping to reduce the material budget of the detector as well as the power dissipation. Multiple system-level tests have been performed with serially powered pixel modules. This contribution presents the latest developments on prototype modules, serial-powering tests, and the procedures for integrating modules and electrical services.
The High Luminosity Large Hadron Collider (HL-LHC) is expected to provide an integrated luminosity of 4000 fb-1, which will allow precise measurements in the Higgs sector and improved searches for new physics at the TeV scale.
The higher particle fluences at the HL-LHC, the correspondingly more demanding radiation-hardness requirements, and the increased average number of proton-proton pile-up interactions require a significant redesign of the existing Inner Detector.
ATLAS is currently preparing for the HL-LHC upgrade: an all-silicon Inner Tracker (ITk) will replace the current Inner Detector, with a pixel detector surrounded by a strip detector. The strip system consists of 4 barrel layers and 6 endcap disks. After completion of final design reviews in key areas, such as sensors, modules, front-end electronics and ASICs, a large-scale prototyping programme has been completed successfully in all areas. We present an overview of the strip system and highlight the final design choices for sensors, module designs and ASICs. We will summarise the results achieved during prototyping and the current status of pre-production of the various detector components, with an emphasis on QA and QC procedures and on the preparation for the production phase, which is distributed over many institutes and foreseen to start in a few months.
The high luminosity upgrade for the Large Hadron Collider at CERN requires a complete overhaul of the current inner detectors of ATLAS and CMS. These new detectors will consist of all-silicon tracking detectors. A serial powering scheme has been chosen in order to cope with the various constraints of the new detectors. In order to verify this new powering scheme and provide input for various system aspects, efforts are ongoing to set up a first larger prototype for serial powering using modules based on the new readout chips developed in 65 nm CMOS technology by the RD53 collaboration, RD53A and ITkPixV1. In particular, a serial powering stave consisting of up to 8 quad modules, either RD53A with planar sensor or ITkPixV1.1 without a sensor, has been set up in Bonn. This contribution covers the results obtained with RD53A modules and presents first measurements with a full ITkPixV1.1 serial powering chain, with emphasis on the electrical characterization of modules in a serial chain with representative services and power supplies.
The upgrade to the High-Luminosity LHC (HL-LHC), with its increase to 140-200 proton-proton collisions per bunch crossing, poses formidable challenges for track reconstruction. The Inner Tracker (ITk) is a silicon-only replacement of the current ATLAS tracking system as part of its Phase-II upgrade, designed to meet these challenges and continue to deliver high-performance track reconstruction. This contribution gives an overview of the expected tracking performance and its impact on higher-level objects. The most recent ITk layout optimisation and developments, and their impact on tracking performance, will also be reviewed.
The proposal for a next-generation rare pion decay experiment, PIONEER, has recently been approved for the Paul Scherrer Institute (PSI) ring cyclotron.
PIONEER is strongly motivated by several inconsistencies between Standard Model (SM) predictions and data pointing towards the potential violation of lepton flavor universality. It will probe non-SM explanations of these anomalies through sensitivity to quantum effects of new particles even if their masses are at very high scales.
The measurement of the charged-pion branching ratio $R_{e/\mu} = \Gamma(\pi^+\rightarrow e^+\nu(\gamma))/\Gamma(\pi^+\rightarrow \mu^+\nu(\gamma))$ for pion decays to positrons relative to muons is extremely sensitive to a wide variety of new-physics effects. At present, the SM prediction for $R_{e/\mu}$ is known to the order of $10^{-4}$, which is 15 times more precise than the current experimental result. An experiment reaching the theoretical accuracy will test lepton flavor universality at an unprecedented level, probing mass scales up to the PeV range.
The measurement of the rare process of pion beta decay, $\pi^+\to \pi^0 e^+ \nu (\gamma)$, with an improvement in sensitivity by a factor of 3-10, will determine ${\left|V_{ud}\right|}$ in a theoretically pristine manner and test CKM unitarity, which is very important in light of the recently emerged tensions. In addition, various exotic rare decays involving sterile neutrinos and axions will be searched for with unprecedented sensitivity.
The experiment design benefits from experience with the PIENU and PEN experiments at TRIUMF and at PSI. Excellent energy and time resolutions, greatly increased calorimeter depth, high-speed detector and electronics response, large solid angle coverage, and complete event reconstruction are all critical aspects of the approach.
In the PIONEER experiment design, an intense pion beam is brought to rest in a segmented, instrumented (active) target (ATAR). The proposed technology for the ATAR is based on low-gain avalanche detectors (LGADs), which can provide precise spatial and temporal resolution for particle tracks and thus separate even very closely spaced decays and decay products. The proposed detector will also include a 3$\pi$ sr, 25 radiation length ($X_0$) electromagnetic calorimeter. A cylindrical tracker surrounding the ATAR is used to link the locations of pions stopping in the target to showers in the calorimeter.
This presentation will introduce the theoretical motivations for PIONEER, discuss the experiment design, and present recent results from simulations and a first testing campaign at the PSI P-5 charged pion beamline.
We report the first observation of the decay $K^\pm \to \pi^0 \pi^0 \mu^\pm \nu$ ($K_{00\mu 4}$) by the NA48/2 experiment at the CERN SPS. From 2437 detected signal candidates with an S/B ratio of about 6, the branching ratio of the decay is determined with high precision. The result is converted into a first measurement of the R form factor in $K_{l4}$ decays and compared with the prediction from one-loop Chiral Perturbation Theory.
The NA62 experiment at CERN collected the world's largest dataset of charged-kaon decays to di-lepton final states in 2016-2018, using dedicated trigger lines. Upper limits on the rates of several K+ decays violating lepton-flavour and lepton-number conservation, obtained by analysing this dataset, are presented.
The KOTO experiment studies the CP-violating rare decay $K_L \to \pi^0 \nu \overline{\nu}$, conducted at the 30 GeV Main Ring proton synchrotron at J-PARC in Japan. In the previous analysis, of data taken in 2016-18, we found three candidate events in the signal region with a single-event sensitivity of $7\times 10^{-10}$, which is statistically consistent with the background expectation. The dominant background source was the charged-kaon contamination in the neutral beam.
Since 2020, we have accumulated data with a new detector that identifies charged kaons in the beam to suppress such backgrounds. We are analyzing the data taken in 2021 in particular, whose statistics correspond to a similar sensitivity. We will report the status of the analysis and plans for the next run.
The KLOE-2 Collaboration continues the KLOE long-standing tradition of flavour physics precision measurements in the kaon sector with a new $K_S \to \pi e \nu$ branching fraction measurement.
Based on a sample of 300 million $K_S$ mesons produced in $\phi \to K_L K_S$ decays recorded by the KLOE experiment at the DA$\Phi$NE $e^+e^-$ collider, the $K_S \to \pi e \nu$ signal selection exploits a boosted decision tree built with kinematic variables, together with time-of-flight measurements. A fit to the reconstructed electron-mass distribution provides the signal yield, which is then normalised to $K_S \to \pi^+ \pi^-$ decays. Data control samples of $K_L \to \pi e \nu$ decays are used to evaluate the signal selection efficiencies. The combination with our previous BR($K_S \to \pi e \nu$) measurement, based on an independent data sample, improves the total precision by almost a factor of two and provides a new derivation of $f_+(0)|V_{us}|$.
The quantum interference between the decays of entangled neutral kaons is a very powerful tool for testing the quantum coherence of the entangled kaon-pair state. The studied process $\phi \to K_S K_L \to \pi^+\pi^-\pi^+\pi^-$ exhibits the characteristic Einstein–Podolsky–Rosen correlations that prevent both kaons from decaying into $\pi^+\pi^-$ at the same time. The newly published result is based on a data sample collected with the KLOE detector at DA$\Phi$NE corresponding to an integrated luminosity of about 1.7 fb$^{-1}$, i.e. to $\sim 1.7 \times 10^9$ $\phi \to K_S K_L$ decays. From the fit of the observed time-difference distribution of the two kaon decays, the decoherence and CPT-violation parameters of various phenomenological models are measured. A stringent upper limit on the branching ratio of the $\phi \to K_S K_S, K_L K_L$ decay is also derived. Independently, the comparison of neutral-meson transition rates between flavour and CP eigenstates allows direct and model-independent tests of the time-reversal T and CPT symmetries, through ratios of rates of two classes of processes: $K_S K_L \to \pi^\pm e^\mp \nu, 3 \pi^0$ and $K_S K_L \to \pi^+ \pi^-, \pi^\pm e^\mp \nu$. A straightforward extension to the case of CPT symmetry provides the first model-independent test of CPT-symmetry violation in transitions of neutral kaons.
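For orientation, one widely used parametrization of the fitted time-difference distribution is the decoherence ($\zeta$) model; in our rendering, with $\Gamma_{S,L}$ the $K_{S,L}$ widths and $\Delta m$ the $K_L$-$K_S$ mass difference,
$$I(\pi^+\pi^-,\pi^+\pi^-;\Delta t)\;\propto\; e^{-\Gamma_L \Delta t}+e^{-\Gamma_S \Delta t}-2(1-\zeta)\,e^{-\frac{\Gamma_S+\Gamma_L}{2}\Delta t}\cos(\Delta m\,\Delta t),$$
where $\zeta=0$ reproduces standard quantum mechanics (the intensity vanishes at $\Delta t=0$, the EPR correlation) and $\zeta=1$ corresponds to complete decoherence.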
Measurements of jet production in proton-proton collisions at the LHC are crucial for precise tests of QCD and for improving the understanding of the proton structure, and are important tools in searches for physics beyond the standard model. We present the most recent set of inclusive jet measurements performed using CMS data and compare them to various theoretical predictions.
A measurement of inclusive jet production in proton-proton collisions at the LHC at $\sqrt{s}=13$ TeV is presented. The double-differential cross sections are measured as a function of the jet transverse momentum $p_t$ and the absolute jet rapidity $|y|$. The anti-$k_t$ clustering algorithm is used with a distance parameter of 0.4 (0.7) in a phase-space region with jet $p_t$ from 97 GeV up to 3.1 TeV and $|y|<2.0$. Data collected with the CMS detector are used, corresponding to an integrated luminosity of 36.3 /fb (33.5 /fb). The measurement is used in a comprehensive QCD analysis at next-to-next-to-leading order, which results in a significant improvement in the accuracy of the parton distributions in the proton. Simultaneously, the value of the strong coupling constant at the Z boson mass is extracted as $\alpha_S(m_Z) = 0.1170 \pm 0.0019$. For the first time, these data are used in a standard model effective field theory analysis at next-to-leading order, where parton distributions and the QCD parameters are extracted simultaneously, with constraints imposed on the Wilson coefficient $c_1$ of 4-quark contact interactions.
The production of jets and prompt isolated photons at hadron colliders provides stringent tests of perturbative QCD. We present the latest measurements using proton-proton collision data collected by the ATLAS experiment at $\sqrt{s}=13$ TeV. Prompt inclusive photon production is measured for two distinct photon isolation cones, R=0.2 and 0.4, as well as for their ratio; the measurement is sensitive to the gluon parton density. In addition, we present measurements of variables probing the properties of the multijet energy flow, as well as of observables extremely sensitive to the strong coupling constant. If available in time, a determination of the strong coupling constant will also be presented. The measurements are compared to state-of-the-art NLO and NNLO predictions.
Isolated photon measurements in pp and p-Pb collision systems probe the initial state of the incoming nucleon or nucleus, providing the opportunity to constrain parton and nuclear parton distribution functions (PDFs) and to probe cold-nuclear-matter effects. Measurements in small collision systems also offer a baseline for Pb-Pb collision measurements.
We present measurements by ALICE of inclusive isolated photon distributions in pp collisions at $\sqrt s$ = 7, 8 and 13 TeV and in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV. The kinematic reach of these measurements is $p_{\mathrm{T}} > 10$ GeV/$c$, extending previous measurements at these centre-of-mass energies down to small $x\sim10^{-3}$.
Measurements of event shapes and jet-substructure observables can serve as in-depth probes of the strong interaction. Data on deep-inelastic scattering collected at the HERA $ep$ collider using the H1 detector have been analysed in the kinematic region of large momentum transfer, $Q^2>150$ GeV$^2$. Various new measurements of the hadronic final state, listed in the following, are presented and confronted with QCD calculations and predictions from Monte Carlo generators. A precision measurement of the 1-jettiness event shape is presented as a triple-differential cross section in $Q^2$, $y$, and the event shape $\tau_1^b$. The data are sensitive to parton distribution functions, to the strong coupling $\alpha_s$, and to fragmentation effects. It is also interesting to study the effect of grooming techniques on event shapes in the clean environment of $ep$ collisions. The grooming techniques investigated here are based on the novel Centauro jet algorithm, which has the advantage of suppressing soft QCD radiation in the forward (proton) direction. Two groomed event shapes are studied for various settings of the grooming parameter: the invariant jet mass and the 1-jettiness. The groomed event-shape measurements are sensitive to fragmentation on one end and to multi-jet production on the other; as such, they serve as high-precision probes of the tested QCD models and predictions. Another class of observables presented here is related to jet substructure. A number of jet-substructure variables, such as the jet charge, the particle multiplicity, and higher moments of these, are unfolded (corrected for detector effects) in a simultaneous and unbinned machine-learning approach. The results are shown in four regions of $Q^2$; owing to the unbinned nature of the unfolding, other observables and correlations can be studied in the future. Finally, jet substructure is also investigated in terms of a charge asymmetry, defined for the leading and subleading charged particles of the jet. The charge asymmetry is studied as a function of the formation time, which gives detailed insights into the fragmentation into hadrons.
We present the first anti-$k_t$ jet spectrum and substructure measurements using the archived ALEPH $e^+e^-$ data taken in 1994 at a center-of-mass energy of $\sqrt{s}$ = 91.2 GeV. Jets are reconstructed with the anti-$k_t$ algorithm with a resolution parameter of 0.4. This is the cleanest test of jets and QCD, without the complication of hadronic initial states. The fixed center-of-mass energy also allows the first direct test of pQCD calculations. We present both the inclusive jet energy spectrum and the leading-dijet energy spectra, together with a number of substructure observables. They are compared to predictions from PYTHIA6, PYTHIA8, Sherpa, HERWIG, VINCIA, and PYQUEN; none of the models fully reproduces the data. The data are also compared to two perturbative QCD calculations, at NLO and with NLL$'$+R resummation. The results can also serve as reference measurements for comparisons with results from hadronic colliders. Future directions, including testing jet clustering algorithms designed for future electron-ion collider experiments, will also be discussed.
A proper understanding of non-perturbative effects, which manifest themselves as linear power corrections, is needed to describe many observables measured at colliders. We report on recent progress in the calculation of linear power corrections to shape variables such as the $C$-parameter and thrust in the three-jet region arising from infrared renormalons. Previously, only the results at the two Sudakov shoulders, namely the two-jet and symmetric three-jet limits, have been known in the literature. We develop a formalism that allows us to compute power corrections in the entire three-jet region, and discuss its implications for the determination of the strong coupling constant $\alpha_s$. We derive a factorisation formula for the power corrections in which the so-called Milan factor naturally arises, and present analytic results for the power corrections for the $C$-parameter and the thrust in the generic $N$-jet region.
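Schematically, the factorisation formula mentioned above has the generic structure (our notation; a sketch, not the paper's final result)
$$\langle V\rangle \;=\; \langle V\rangle_{\rm PT} \;+\; c_V\,\mathcal{M}\,\frac{\lambda_{\rm NP}}{Q},$$
where $\lambda_{\rm NP}\sim\Lambda_{\rm QCD}$ is a universal non-perturbative parameter, $\mathcal{M}\simeq 1.49$ (for $n_f=3$) is the Milan factor, and $c_V$ is an observable- and kinematics-dependent coefficient; in the two-jet limit $c_{1-T}=2$ and $c_C=3\pi$, while the work discussed here extends $c_V$ to the full three-jet region.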
The Future Circular Collider (FCC) is a post-LHC project aiming at direct and indirect searches for physics beyond the SM in a new 100 km tunnel at CERN. In addition, the FCC-ee offers unique possibilities for high-precision studies of the strong interaction in the clean environment provided by e+e- collisions, thanks to its broad span of center-of-mass energies ranging from the Z pole to the top-pair threshold, and its huge integrated luminosities yielding $10^{12}$ and $10^8$ jets from Z and W boson decays, respectively, as well as $10^5$ pure gluon jets from Higgs boson decays. In this contribution, we will summarize studies on the impact the FCC-ee will have on our knowledge of the strong force, including: (i) QCD coupling extractions with permil uncertainties; (ii) parton radiation and parton-to-hadron fragmentation functions; (iii) jet properties (light-quark/gluon discrimination, e+e- event shapes and multijet rates, jet substructure, etc.); (iv) heavy-quark jets (dead-cone effect, charm-bottom separation, gluon-to-$c\bar{c}$/$b\bar{b}$ splittings, etc.); and (v) nonperturbative QCD phenomena (color reconnection, baryon and strangeness production, Bose-Einstein and Fermi-Dirac final-state correlations, ...).
The use of Application-Specific Integrated Circuits (ASICs) is increasing drastically in nuclear and particle physics for applications that require a large number of acquisition channels while keeping the system compact with small power consumption. This work aims to explore the possibility of using the ASIC-based Citiroc-1A chip, integrated in the CAEN A5202 FERS-5200 board, to acquire $\gamma$ energy spectra from scintillator detectors, such as Caesium Iodide (CsI), Cerium-doped Lutetium Yttrium Orthosilicate (LYSO(Ce)) and Bismuth Germanate (BGO), coupled with SiPMs. Plans are in progress to perform measurements with faster crystals such as Lanthanum Bromide (LaBr3) and Cerium Bromide (CeBr3). ASIC chips with a higher shaping time than the Citiroc-1A and different pulse processing have already been used successfully for $\gamma$ spectroscopy. This would be the first time that the Citiroc-1A chip, which has a maximum shaping time of 87.5 ns, is used for $\gamma$-spectroscopy measurements.
The A5202 board is an all-in-one system optimized to work with signals coming from SiPMs. It has a total of 64 channels, provided by two Citiroc-1A chips; the number of acquisition channels can easily be extended by synchronizing up to 128 boards through optical connections (TDlink). The bias voltage for the SiPMs is provided by a power supply incorporated in the A5202 for all the channels and can be finely tuned channel by channel through a DAC. The board can work in four different configurations: in spectroscopy mode (SM) the Citiroc-1A performs a classical pulse-height analysis (PHA) to build energy spectra; in counting mode (CM) all the channels self-trigger and the recorded events are counted inside a time interval; in timing mode (TM) the pulse time of arrival is saved in a timestamp; finally, in spectroscopy and timing mode (STM), both the SM and TM configurations are active. The triggers coming from the Citiroc-1A are sent to the FPGA for logical combination of the 64 channels or for timing-measurement purposes, with a time resolution of 500 ps. Energy spectra are built in the charge section: two preamplifiers are available, one with higher gain (HG) and the other with lower gain (LG). The preamplification stage is followed by a slow RC-CR2 amplifier connected to both the HG and LG preamplifiers of each channel. The available peaking times range from a minimum of 12.5 ns to a maximum of 87.5 ns, with a pitch of 12.5 ns. Finally, the shaped signal is sent to a peak-sensing system which detects the maximum value and builds the energy spectrum. The peak-sensing workflow consists of three phases: at first the detector is in the off phase; upon the arrival of a trigger, it switches on and enters the peak-sensing phase to memorize the maximum value of the incoming pulse; this phase is held until the arrival of the rising edge of a hold signal, after which the system enters the hold phase and disconnects from the slow shaper to ensure that no other value is memorized. When the falling edge of the hold signal arrives, the peak detector goes back to the off phase.
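The three-phase workflow just described amounts to a small state machine; the following sketch (our illustration, not CAEN firmware) mimics the off / peak-sensing / hold cycle on a digitised slow-shaper pulse:

# Toy model of the peak-sensing cycle described above:
# OFF --trigger--> PEAK_SENSING (track maximum) --hold rising edge--> HOLD
# (value frozen) --hold falling edge--> OFF.  Illustrative only.

def peak_sense(samples, trigger_idx, hold_rise_idx, hold_fall_idx):
    """Return the pulse height stored by the peak detector.
    samples: digitised slow-shaper output; the three indices mark the trigger
    arrival and the rising/falling edges of the hold signal."""
    state, peak = "OFF", None
    for i, v in enumerate(samples):
        if state == "OFF" and i == trigger_idx:
            state, peak = "PEAK_SENSING", v   # switch on, start tracking
        elif state == "PEAK_SENSING":
            peak = max(peak, v)               # memorize the running maximum
            if i == hold_rise_idx:
                state = "HOLD"                # disconnect from the slow shaper
        elif state == "HOLD" and i == hold_fall_idx:
            state = "OFF"                     # ready for the next event
    return peak

# A shaped pulse peaking at amplitude 100 (arbitrary units):
pulse = [0, 0, 10, 40, 80, 95, 100, 90, 70, 40, 20, 10, 0]
print(peak_sense(pulse, trigger_idx=2, hold_rise_idx=8, hold_fall_idx=11))  # 100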
We performed preliminary measurements with a 6x6x15 mm$^3$ LYSO crystal coupled with a single 6x6 mm$^2$ SiPM and a $^{22}$Na radioactive source. We built the energy spectra with the same setup using a digitizer (DT5720A) that performs charge integration, obtaining results comparable with those of the A5202. Measurements with other detectors and radioactive sources are ongoing. This high flexibility in the number of channels, combined with the ability to perform gamma spectroscopy with a resolution compliant with the literature and cross-checked with a complementary system, could make this an interesting solution for experimental setups requiring a large number of acquisition lines.
Fast-neutron spectroscopic measurements are an invaluable tool for many scientific and industrial applications, in particular for Dark Matter (DM) searches. In underground DM experiments, neutron-induced background produced by cosmic-ray muons and the cavern radioactivity can mimic the expected DM signal. However, fast-neutron spectroscopy requires complex detection methods, and such measurements remain elusive.
The use of $^3$He-based detectors, the most widely used technique to date, is not a viable solution: $^3$He is scarce and expensive, while its low atomic mass requires large target masses (high pressures/large volumes) that are prohibitive for underground laboratories.
A promising alternative for fast-neutron spectroscopy is the use of a nitrogen-filled Spherical Proportional Counter (SPC). The neutron energy is estimated by measuring the products of the $^{14}$N(n,$\alpha$)$^{11}$B and $^{14}$N(n,p)$^{14}$C reactions, which have cross sections for fast neutrons comparable to that of the $^3$He(n,p)$^3$H reaction. Furthermore, the use of a light element such as N$_2$ keeps the $\gamma$-ray efficiency low and enhances the signal-to-background ratio in mixed radiation environments. This constitutes a safe, inexpensive, effective and reliable alternative. An initial proof of concept [1] suffered from issues such as the wall effect, electron attachment and low charge-collection efficiency, due to the early stage of the SPC development at the time.
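For orientation, assuming the tabulated Q-values (our numbers, from standard nuclear data: $Q \simeq +0.63$ MeV for $^{14}$N(n,p)$^{14}$C and $Q \simeq -0.16$ MeV for $^{14}$N(n,$\alpha$)$^{11}$B), a neutron that reacts in the gas deposits
$$E_{\rm dep}=E_n+Q \quad\Longrightarrow\quad E_n=E_{\rm dep}-Q,$$
so, provided both reaction products are stopped in the gas (no wall effect), the measured deposited energy translates directly into the neutron energy, which is what makes the nitrogen-filled SPC a spectroscopic device.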
In this work, we tackle these challenges by incorporating the latest SPC instrumentation developments such as resistive multi-anode sensors [2] for high-gain operation with high-charge collection efficiency and gas purifiers that minimize gas contaminants to negligible levels. This allows us to operate with increased pressure up to 1.8 bar, reducing the wall effect and increasing the sensitivity.
Two detectors (30 cm in diameter) are used at the University of Birmingham (UoB) and at the Boulby underground laboratory, operating above atmospheric pressure. We demonstrate spectroscopic measurements of fast and thermalised neutrons from an Am-Be source and from the MC40 cyclotron facility at UoB. Additionally, the response of the detector to neutrons is simulated using a framework developed at UoB, based on GEANT4 and Garfield++ for high-energy physics applications [3]. The simulation provides the expected efficiency, the pulse-shape characteristics and the means to discriminate events according to their interaction, showing good agreement with the measurements.
[1] E. Bougamont et al., Nucl. Instrum. Meth. A 847 (2017) 10-14
[2] I. Giomataris et al., JINST 15 (2020) P11023
[3] I. Katsioulas et al., JINST 15 (2020) C06013
Within the High Energy Physics community, when dealing with sensors of almost any sort, detection efficiency is certainly one of the key parameters at play. Narrowing the field to pixel detectors, efficiencies of the order of 99% are the baseline, with far better figures characterising present state-of-the-art devices. Physics events are costly and time-consuming to produce, and therefore collection efficiency must be maximised. The same situation exists, for different reasons, in medical radiation applications: there, aiming for maximum detection efficiency allows minimising the collateral hazard the probing radiation poses to the patient. However, the industrial and commercial field sets remarkably different boundary conditions. First, the types of radiation source are very limited, mostly due to costs, state regulations and radiation-protection issues; in fact, commercial radiation facilities mostly comprise x-ray or $\gamma$-ray sources of different energy and power. Second, contrary to the medical applications, most of the time the radiation dose delivered to the target is not a showstopper as it would be for a living organism.
Industrial x-ray tubes typically provide energies in the 1 keV - 300 keV range; higher energies are obtained from radioactive sources, like Ir-192 (300 keV - 600 keV) and Co-60 (1.17 and 1.33 MeV). However, radioactive sources, which require careful handling and special safety protocols, are relegated to very specific radiography applications, leaving x-rays as the vastly predominant radiation type used in the industry. Modern solid-state x-ray sensors exploit the combination of a suitable sensing material coupled to a readout pixel array. There are two main embodiments of this paradigm:
1) the sensing material acts as a scintillator, converting the incoming x-rays into light in the visible or near-visible spectrum; the compounds of choice are usually Cesium Iodide (CsI) and Gadolinium Oxysulfide (GOS). The scintillator layer is coupled to a glass substrate with a patterned pixel array, realized in Thin Film Transistor (TFT) technology.
2) the sensing material converts the incoming x-rays into electrical charge (within a depleted semiconductor); the materials of choice for the photo-conversion are Germanium (Ge), Selenium (Se) and Cadmium Telluride (CdTe). The pixel array may again be realized with the TFT glass-panel technology or may exploit a silicon-based pixel array.
In both of the aforementioned embodiments, large-area detectors (>200 cm2) almost universally employ the TFT panel as the pixel collecting layer, as it is the cheapest and most widespread technology available for the task. TFT readout panels offer pixel sizes down to ~0.2x0.2 mm2 and areas up to ~40x40 cm2. Their biggest drawback is the intrinsic slowness of the readout process (basically equal to the refresh rate of TFT displays), resulting in very low frame rates, of the order of 20 to 60 Hz for a state-of-the-art 20x20 cm2 panel.
To target higher speed and superior overall data-throughput performance, direct coupling of the semiconductor photo-conversion layer to a silicon-based pixel array, instead of a TFT one, is the choice, exactly as done in HEP, scientific and medical applications. Most x-ray imaging systems installed at light sources and other research facilities in fact adopt this technology. The main drawback, from a commercial point of view, comes when large areas need to be instrumented, as silicon pixel arrays, differently from TFT glass panels, are costly to develop and manufacture in sizes larger than a few cm2, forcing the use of many of them to read out large conversion layers. Furthermore, the sensor and readout-array coupling process itself (bonding) is also expensive and time-consuming. Last but not least, with both the TFT and the more expensive silicon readout, the sensing layer itself employs very expensive materials, especially for the photo-conversion layers, where Ge and CdTe are the compounds of choice. Cost is in fact the limiting factor for using large-area, fast sensors in applications outside the medical and scientific fields.
We investigated a selected set of industrial x-ray imaging applications, including Computed Tomography and x-ray radiography for the food, manufacturing and logging industries, considering both the radiation flux achievable with commercial, off-the-shelf x-ray sources and the dependence of the overall imaging quality on the statistics (i.e. the photon count per voxel per integration time) of the collected photons and on the sensor's intrinsic noise and readout scheme (analogue, integrating, counting). We found that in many cases the overall detection efficiency of present industrial systems is far below what is seen in science or medicine, with figures as low as a few percent. Indeed, the cost factor of large-area sensors pushed the industry to develop low-efficiency apparatuses, especially when large imaging areas and high readout speed are the application targets.
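To make the statistics argument concrete, here is a back-of-the-envelope sketch (all input numbers are assumptions for illustration) of the shot-noise-limited signal-to-noise ratio per voxel as a function of detection efficiency:

# With a photon-counting sensor the per-voxel SNR is Poisson-limited,
# SNR ~ sqrt(detected photons), so the detection efficiency enters directly.
# All numbers below are illustrative assumptions, not measured figures.
import math

incident_rate_per_pixel = 1.0e6  # photons / pixel / s from the source (assumed)
integration_time_s = 1.0e-3      # per-frame integration time (assumed)

for efficiency in (0.03, 0.30, 0.90):  # few-percent D-MAPS-like up to direct conversion
    detected = incident_rate_per_pixel * integration_time_s * efficiency
    snr = math.sqrt(detected)          # Poisson-limited signal-to-noise
    print(f"efficiency = {efficiency:4.2f} -> {detected:7.0f} photons, SNR ~ {snr:5.1f}")

Equivalently, recovering the SNR lost to a low-efficiency sensor requires proportionally more flux or integration time, which is exactly the trade-off discussed above.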
Within these specific conditions, the use of Depleted Monolithic Active Pixel Sensors (D-MAPS) may provide an effective solution to the industry's large-area, high-speed, low-cost dilemma. We verified that a 500 um depleted D-MAPS has an x-ray detection efficiency in the 10-100 keV range of the order of a few percent which, while definitely lower than that of a CdTe or Ge sensor of equivalent thickness, is achieved at a fraction of the cost, embeds all the readout electronics within the sensor, and ensures lower power consumption. Low power consumption is another strong industry requirement, as many times the apparatuses are mounted on rotating rigs and cannot employ any sort of liquid cooling.
In this contribution we will report our findings on how to employ D-MAPS in industrial x-ray imaging and tomography applications, with focus on both the technical and cost aspects of the problem and on the specific requirements the sensors must fulfil for a successful deployment. We will also illustrate potential applications within the industrial world which would be enabled by future D-MAPS-based x-ray sensors. Together with this brief technical and economic review, we will present actual x-ray characterisation of "thick" MAPS sensors realised in commercial 110 nm technology, as well as design considerations and simulations for future sensors aimed at those applications.
The ORIGIN project (Optical Fiber Dose Imaging for Adaptive Brachytherapy), supported by the European Commission within the Horizon 2020 framework program, targets the production and qualification of a real-time radiation dose imaging and source localization system for both Low Dose Rate (LDR) and High Dose Rate (HDR) brachytherapy treatments, namely radiotherapy based on the use of radioactive sources implanted in the patient's body.
Precise positioning of the radiation source is crucial to ensure the target area receives sufficient dose to fulfil the objective of the treatment, whilst minimizing the dose to nearby healthy tissues and organs at risk. The ORIGIN Project aims to address these shortcomings in the current treatment delivery practices, and the urgent need to provide real-time in-vivo dose imaging and source localization methods, by developing a new optical fiber-based sensor system to support diagnostics-driven therapy through enhanced adaptive brachytherapy.
This goal will be achieved through a 16-fiber sensor system, engineered to house in a clear-fiber tip a small volume of scintillator to allow point-like measurements of the delivered dose. The selected scintillating materials feature a decay time of about 500 $\mu$s, and the signal associated with the primary $\gamma$-ray interaction results in the emission of a sequence of single photons distributed in time. The operation therefore requires a detector with single-photon sensitivity, i.e. a system designed to provide dosimetry by photon counting. The instrument being developed is based on Silicon Photomultipliers (SiPMs), with a solution fully qualified on a single-fiber prototype and currently being scaled up relying on the CITIROC1A ASIC by WEEROC, implementing an analog chain made of preamplifier, shaper, peak sensing and discriminator, embedded in the FERS-DT5202 scalable platform designed by CAEN S.p.A.
The fiber response uniformity, system stability, sensitivity, and reproducibility are the key features for a system aiming to perform dose measurements in a clinical environment. The 16-channel dosimeter system commissioning in laboratory conditions with an X-ray cabinet demonstrates that homogeneity within 1% can be achieved following an equalization procedure.
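A minimal sketch of such an equalization (our illustration; the abstract does not spell out the actual ORIGIN procedure): expose all 16 fibers to the same reference irradiation and derive per-channel correction factors from the observed counting rates:

# Minimal channel-equalization sketch: normalise each channel's response,
# measured under an identical reference dose, to the ensemble mean.
# The counts below are fake data for illustration only.

reference_counts = {ch: c for ch, c in enumerate(
    [9800, 10150, 10020, 9890, 10240, 9950, 10080, 9910,
     10010, 9970, 10190, 9860, 10120, 9940, 10060, 9900])}

mean_counts = sum(reference_counts.values()) / len(reference_counts)
equalization = {ch: mean_counts / c for ch, c in reference_counts.items()}

def equalized(ch, raw_counts):
    """Apply the per-channel gain correction to a raw photon count."""
    return raw_counts * equalization[ch]

# On the calibration data the residual channel-to-channel spread vanishes
# by construction; on independent data it reflects the true homogeneity.
spread = max(abs(equalized(ch, c) / mean_counts - 1.0)
             for ch, c in reference_counts.items())
print(f"residual spread after equalization: {spread:.2e}")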
The system performance in terms of sensitivity and measurement range, together with the validity of the equalization in clinical conditions, was confirmed by a first series of tests at the HDR center of the Belfast Hospital. The data analysis also confirms the system's capability to locate the source and provide a 3D dose map.
A comprehensive overview of the specification together with the qualification procedure and first results achieved in HDR clinical conditions will be presented.
Muon tomography consists in using muons, naturally produced by cosmic-ray interactions in the upper atmosphere, to probe structures in a non-invasive and non-destructive way. Following the first muography of a water tower in 2015, performed with a muon telescope based on Micro-Pattern Gaseous Detectors developed at the Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, the gaseous detectors and electronics have been made more robust against large temperature variations, allowing operation in Egypt for the ScanPyramids mission and the discovery of a big void in Khufu's pyramid. Since then, the spectrum of applications of muon tomography has kept expanding, reaching transport control and even civil engineering, for instance to monitor building stability. More recently, simulations showed that detectors based on multiplexed Micromegas could also be used to detect cavities for geology studies or for the dismantling of nuclear facilities, leading to several industrial partnerships.
However, most muon telescopes in use today are based on the hodoscope approach, requiring several detector planes to reconstruct muon tracks, with limited angular acceptance and compactness. Probing the underground with such an instrument is not realistically feasible: it would require the 50 cm $\times$ 50 cm $\times$ 1 m telescope to be installed beneath the region of interest and to be regularly rotated to scan all directions. Besides being logistically impossible, it would take a very long time to obtain a final image of the total area, given that the muon rate, already as low as 1 cm$^{-2}$ min$^{-1}$ at sea level, decreases rapidly underground.
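For orientation, a back-of-the-envelope estimate of the exposure time such a telescope would need underground can be sketched as follows; the flux attenuation at depth and the required statistics are purely assumed numbers:

# Rough counting-time estimate for a 50 cm x 50 cm muon telescope at depth.
area_cm2 = 50 * 50         # detector area
flux_sea_level = 1.0       # muons / cm^2 / min at sea level (from the text)
attenuation = 1e-3         # assumed flux suppression underground (illustrative)
muons_per_bin = 1_000      # assumed statistics per angular bin for an image
n_bins = 100               # assumed number of angular bins

rate = area_cm2 * flux_sea_level * attenuation   # detected muons per minute
minutes = muons_per_bin * n_bins / rate
print(f"~{minutes / 60 / 24:.0f} days of exposure")  # order of magnitude only

Even with these generous assumptions the answer is weeks of exposure per orientation, which is the point made above.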
To expand the spectrum of muography applications, CEA is developing a highly pixelated, 2D-multiplexed compact Time Projection Chamber (TPC) that would allow full track reconstruction with a quasi-isotropic angular acceptance, probing all directions at once. Using a TPC instead of tracker planes makes it possible to operate a single detector, reducing the power consumption and facilitating operation of the instrument in a constrained environment. The operating lifetime is also increased by the multiplexing, which divides the number of readout electronic channels by a factor of 7 and also contributes to the better compactness of the instrument. Finally, for obvious logistical reasons, the TPC dimensions are designed so that the detector can fit into existing boreholes. Such detectors could be installed in a network in order to detect, localize and characterize underground structures.
In this talk the design of this new detector will be presented, as well as the first prototype developments. A new technique for automatic and systematic readout-plane characterization based on a 3D printer will also be introduced.
In particle therapy, proton or ion beams deposit a large fraction of their energy at the end of their paths, i.e. the delivered dose can be focused on the tumor, sparing nearby tissue thanks to a low entry and almost no exit dose. A novel imaging modality using protons promises to overcome some limitations of particle therapy and to allow the full exploitation of its potential. The ability to position the so-called Bragg peak accurately inside the tumor is a major advantage of charged particles, but incomplete knowledge of a crucial tissue property, the stopping power, limits its precision. The conversion of photon attenuation maps from computed tomography (CT) scans into relative stopping power introduces range uncertainties. A proton/helium-CT scanner provides direct information about the stopping power and has the potential to reduce range uncertainties significantly, but no proton-CT system has yet been shown to be suitable for clinical use. For a proton-CT scan the particles, typically protons or alpha particles, need to be energetic enough to traverse the patient completely, i.e. the Bragg peak is positioned in a detector. The trajectory of every outgoing proton, as well as its residual energy/range, is measured. The calculation of the proton trajectory inside the target region and the measured residual proton energy/range provide a 3D map of the relative stopping power. During a scan, the patient needs to be rotated to obtain projection data from a set of different angles. A (clinical) prototype of an extremely high-granularity digital tracking calorimeter has been designed and is being constructed in Bergen. The latest developments in Monolithic Active Pixel Sensor (MAPS) technology allow the fabrication of extremely high-granularity, low-material-budget and large-area silicon detectors with integration times of microseconds and zero-suppression of the data on the sensor itself. The prototype is a silicon/absorber sandwich calorimeter with 41 sensitive layers of MAPS. A complete CT reconstruction of a simulated anthropomorphic paediatric head phantom shows that the concept of a single-sided detector setup with realistic pencil-beam parameters gives a spatial resolution sufficient for proton-therapy treatment planning. The expected performance based on simulations, first beam-test results and the status of the construction will be presented, including proton tracking accuracy, dE/dx capability, rate capability, radiation hardness and 3D spatial resolution after CT reconstruction.
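To make the reconstruction principle concrete, the sketch below converts an assumed residual energy into a water-equivalent path length using the Bragg-Kleeman range rule; the rule and its parameters are a textbook approximation, not the algorithm used by the Bergen prototype:

def range_in_water(energy_mev, alpha=0.0022, p=1.77):
    """Bragg-Kleeman rule R = alpha * E^p: proton range in water (cm).
    alpha and p are approximate literature values for water."""
    return alpha * energy_mev ** p

e_in = 230.0   # assumed beam energy entering the patient (MeV)
e_out = 80.0   # assumed residual energy measured in the calorimeter (MeV)

# Water-equivalent path length crossed inside the patient:
wepl = range_in_water(e_in) - range_in_water(e_out)
print(f"WEPL ~ {wepl:.1f} cm")

Combining many such path-length measurements from different angles is what yields the 3D map of the relative stopping power.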
The original idea that a quantum machine can potentially solve many-body quantum mechanical problems more efficiently than classical computers is due to R. Feynman, who proposed the use of quantum computers to investigate the fundamental properties of nature at the quantum scale. In particular, the solution of problems in electronic structure, many-body physics, and high energy physics (to mention just a few) is a challenging computational task for classical computers, as the number of needed resources increases exponentially with the number of degrees of freedom. More recently, the possibility of obtaining quantum speedup for the solution of classical optimization problems has also become an active field of research in, for instance, statistical physics, classical optimization, machine learning and finance. Thanks to the recent development of quantum technologies, we now have the possibility to address these classes of problems with the help of quantum computers. To achieve this goal, several quantum algorithms able to best exploit the potential quantum speedup of state-of-the-art, noisy quantum hardware have been proposed.
After a short introduction to the state of the art of digital quantum computing from a hardware and software perspective, I will present applications to the solution of problems in many-body and high energy physics, focusing on those aspects that are relevant to achieving quantum advantage with near-term and fault-tolerant quantum computers. In particular, I will discuss applications to the classification of high-energy scattering events using quantum machine learning algorithms [1], as well as the development of quantum algorithms for the study of static and dynamic properties of lattice gauge models [2,3].
[1] S.L. Wu et al., "Application of quantum machine learning using the quantum kernel algorithm on high energy physics analysis at the LHC", Physical Review Research 3, 033221 (2021)
[2] S.V. Mathis, G. Mazzola, I. Tavernelli, "Toward scalable simulations of lattice gauge theories on quantum computers", Physical Review D 102, 094501 (2020)
[3] G. Mazzola, S.V. Mathis, G. Mazzola, I. Tavernelli, "Gauge-invariant quantum circuits for U(1) and Yang-Mills lattice gauge theories", Physical Review Research 3, 043209 (2021)
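As a toy illustration of the quantum-kernel classification mentioned in [1], the sketch below computes kernel entries from statevector overlaps of a single-qubit feature map and trains a classical SVM on them; the feature map, data and labels are all invented and vastly simpler than the LHC analysis of [1]:

import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Toy single-qubit feature map: |phi(x)> = Ry(x)|0>."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(xs, ys):
    """Kernel entries K_ij = |<phi(x_i)|phi(y_j)>|^2 from statevector overlaps."""
    return np.array([[abs(feature_state(a) @ feature_state(b)) ** 2
                      for b in ys] for a in xs])

# Invented one-dimensional "event" features: signal vs background.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.6, 0.3, 50), rng.normal(2.2, 0.3, 50)])
y = np.array([1] * 50 + [0] * 50)

# Train a classical SVM on the precomputed quantum kernel matrix.
svm = SVC(kernel="precomputed").fit(quantum_kernel(X, X), y)
print("training accuracy:", svm.score(quantum_kernel(X, X), y))

The structure is the same on real hardware, except that the kernel entries are estimated from measured state overlaps rather than exact statevectors.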
The mass of the W boson, one of the most important fundamental parameters in particle physics, is tightly constrained by the symmetries of the standard model. Following the observation of the Higgs boson and the measurement of its mass, the standard-model prediction of the W boson mass can be constrained to better than 10 MeV. An experimental measurement of the W boson mass to that level of precision represents a powerful test of the model. We measure the W-boson mass, $M_W$, using data corresponding to 8.8 fb$^{-1}$ of integrated luminosity collected in proton-antiproton collisions at 1.96 TeV center-of-mass energy with the CDF II detector during Run 2 (2001-2011) of the Fermilab Tevatron collider. A sample of approximately four million W-boson candidates is used to obtain $M_W = 80\,433.5 \pm 6.4_{\rm stat} \pm 6.9_{\rm syst} = 80\,433.5 \pm 9.4$ MeV/$c^2$, whose precision exceeds that of all previous measurements combined. This measurement is in significant tension with the standard model expectation.
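The quoted total uncertainty is simply the statistical and systematic components combined in quadrature, per standard error propagation:

\sigma_{\rm tot} = \sqrt{\sigma_{\rm stat}^2 + \sigma_{\rm syst}^2}
                = \sqrt{6.4^2 + 6.9^2}\ \mathrm{MeV}/c^2 \simeq 9.4\ \mathrm{MeV}/c^2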
A significant displacement from both the electroweak-fit predictions and previous experimental results has been observed in the recent measurement of the W boson mass at CDF. This confirms the importance of measuring this fundamental parameter of the Standard Model. The LHCb experiment has a fundamental role in this topic: it has recently measured the W boson mass using part of its available dataset, and plans to perform a more precise measurement with the full Run 2 dataset in the near future.
Moreover, since this measurement is performed in a complementary phase space with respect to ATLAS and CMS, it will help in reducing the total uncertainty in a future LHC combination. In this talk the experimental aspects of the LHCb measurement are presented.
The determination of the W-boson mass is a slowly evolving field, with new results appearing only every few years. The available measurements reflect the theoretical state of the art at the time of their preparation; at hadron colliders, the determining factors are the theoretical description of W-boson production and decay, and the modelling of the proton structure. We present the status of the ongoing W-mass averaging project, which aims at combining Tevatron and LHC results. We will present our plans, the combination procedure, and the influence of theoretical progress in recent years.
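A minimal sketch of the combination machinery involved is a best linear unbiased estimator (BLUE); the inputs and the correlation below are invented placeholders, not actual Tevatron/LHC averaging inputs:

import numpy as np

# Invented inputs: two W-mass measurements (MeV) and an assumed correlation
# from shared theory/PDF modelling -- placeholders, not real averaging inputs.
m = np.array([80433.5, 80370.0])
sigma = np.array([9.4, 19.0])
rho = 0.3

cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

# BLUE weights: w = C^{-1} 1 / (1^T C^{-1} 1), minimising the combined variance.
cinv = np.linalg.inv(cov)
w = cinv @ np.ones(2) / (np.ones(2) @ cinv @ np.ones(2))

m_comb = w @ m
sigma_comb = np.sqrt(w @ cov @ w)
print(f"combined: {m_comb:.1f} +/- {sigma_comb:.1f} MeV, weights = {w}")

The real combination differs mainly in the size and construction of the covariance matrix, which encodes the correlated theory uncertainties discussed above.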
We present results from the global fit of the Standard Model (SM) to electroweak precision measurements. The fit uses the latest theoretical calculations for observables on the Z pole and the W boson mass, yielding precise SM predictions for the effective weak mixing angle and the masses of the W and Higgs bosons, as well as the top quark. We study the impact of the latest measurements on the fit and provide comparisons of the resulting predictions for individual observables with recent measurements.
In this talk we will review the most recent global analysis of electroweak data in the Standard Model as obtained in the HEPfit framework (based on arXiv:2112.07274). Moreover, we will discuss the impact of the recent measurements of the top-quark mass (CMS collaboration) and of the W-boson mass (CDF collaboration) on the fit of electroweak data in the Standard Model and beyond with particular emphasis on constraining new physics models with oblique corrections and the dimension-six Standard Model Effective Field Theory (based on arXiv:2204.04204).
We use the Fitmaker tool to incorporate the recent CDF measurement of $m_W$ in a global fit to electroweak, Higgs, and diboson data in the Standard Model Effective Field Theory (SMEFT) including dimension-6 operators at linear order. We find that including any one of the SMEFT operators ${\cal O}_{HWB}$, ${\cal O}_{HD}$, ${\cal O}_{\ell \ell}$ or ${\cal O}_{H \ell}^{(3)}$ with a non-zero coefficient could provide a better fit than the Standard Model, with the strongest pull for ${\cal O}_{HD}$ and no tension with other electroweak precision data. We then analyse which tree-level single-field extensions of the Standard Model could generate such operator coefficients with the appropriate sign, and discuss the masses and couplings of these fields that best fit the CDF measurement and other data. In particular, the global fit favours either a singlet $Z^\prime$ vector boson, a scalar electroweak triplet with zero hypercharge, or a vector electroweak triplet with unit hypercharge, followed by a singlet heavy neutral lepton, all with masses in the multi-TeV range for unit coupling.
The FCC-ee offers powerful opportunities for direct or indirect evidence of physics beyond the Standard Model, via a combination of high-precision measurements and searches for forbidden and rare processes and feebly coupled particles. The precision measurement program benefits from an extraordinary conjunction of circumstances: (i) very clean experimental conditions and excellent centre-of-mass determination at all energies from the Z to above the top-quark pair-production region, and (ii) unprecedented statistics, with in particular $5\cdot 10^{12}$ produced Z bosons, $10^8$ WW events, and more than $10^6$ ZH and ttbar events. This will allow a huge leap in precision for the ElectroWeak Precision Observables, in both neutral and charged currents, as well as for direct measurements of other fundamental SM parameters such as $\alpha_{QED}(m_Z)$, $\alpha_{S}(m_Z)$, $m_{top}$, and $m_H$. Examples will be shown of the steady work that is ongoing to understand how to improve the detector, analyses, and theory calculations in order to reduce systematic errors towards the statistical ones. Consequences for decoupling, non-decoupling, and mixing new physics will be briefly discussed.
The Circular Electron Positron Collider (CEPC) project aims to build a circular electron-positron collider capable of precision physics measurements. The CEPC offers the possibility of dedicated low-energy runs at the Z pole and at the WW threshold with high instantaneous luminosity. The expected integrated luminosity for the CEPC Z-pole runs (WW-threshold runs) is 100 ${\rm ab}^{-1}$ (6 ${\rm ab}^{-1}$), corresponding to $3 \times 10^{12}$ Z bosons ($1 \times 10^{8}$ W-boson pairs). With such a large integrated luminosity, the CEPC will reach a new level of precision in measurements of the properties of the $W$ and $Z$ bosons. Precise measurements of the $W$ and $Z$ boson masses, widths, and couplings are critical to test the consistency of the Standard Model. An overview is presented of the potential of the CEPC to advance precision studies of electroweak physics, with an emphasis on the opportunities in W and Z physics.
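The quoted yields follow from N = sigma x L once the units are made consistent; the Z-pole cross-section used below is an approximate, assumed value for illustration:

# Event yield N = sigma * L, with luminosity converted to consistent units.
AB_TO_NB = 1e9            # 1 ab^-1 = 10^9 nb^-1
lumi_z = 100 * AB_TO_NB   # 100 ab^-1 at the Z pole, in nb^-1
sigma_z = 30.0            # assumed ~30 nb Z production cross-section

print(f"N_Z ~ {lumi_z * sigma_z:.0e}")   # ~3e12, matching the abstract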
The Particle Physics Community Planning Exercise (a.k.a. "Snowmass") is the form in which the entire particle physics community organizes its regular discussions, every 6 to 8 years, to develop a scientific vision for the future of particle physics in the U.S. and among its international partners. The Snowmass'21 Accelerator Frontier activities include discussions on high-energy hadron and lepton colliders, high-intensity beams for neutrino research and for "Physics Beyond Colliders", accelerator technologies, science, education and outreach, as well as the progress of core accelerator technology, including RF, magnets, targets, and sources. A summary of the Snowmass discussions on future HEP facilities in the US will be presented.
This contribution presents the status of the HL-LHC project, the draft schedule and the associated operational scenarios. The contribution will focus on expected beam parameters, machine optics and cycles, and performance estimates.
Following the recommendations of the 2020 update of the European Strategy for Particle Physics (ESPP), CERN, in collaboration with many institutes around the globe, is investigating the feasibility of a 100 TeV centre-of-mass hadron collider with an electron-positron collider as a pre-stage.
This study builds upon the conceptual design reports delivered by the Future Circular Collider (FCC) study in 2019, which address the general design and performance expectations of both machines.
Over the course of five years, and in time for the 2025 update of the ESPP, the FCC Feasibility study is now closely investigating a potential implementation of such accelerators in a 100 km long tunnel in the Geneva basin.
Amongst other objectives, the study focuses on the demonstration of the feasibility of building the tunnel and surface areas, optimisation of the collider and injector designs, and supporting R&D for key components of the machines.
In this presentation, an overview of the design of the electron-positron collider FCC-ee will be given.
The FCC-ee aims to deliver electron-positron collisions at centre-of-mass energies ranging from 88 GeV to 365 GeV and with record luminosity.
Achieving the targeted performance relies on a number of key concepts, such as the use of top-up injection and a crab waist collision scheme.
Progress on the design of these and other systems and changes implemented since the publication of the conceptual design report will be covered in this presentation.
The International Linear Collider (ILC) is an electron-positron collider with a total length of around 20 km in its initial configuration as a 250 GeV centre-of-mass-energy Higgs factory.
Key technologies at the ILC are superconducting RF (SRF) acceleration in the main linacs and nano-beam technology at the interaction point (IP). A total of about 8,000 superconducting niobium cavities will be installed in the main linacs. The e-/e+ beams are focused to around 8 nm (vertical) at the interaction region.
One of the advantages of the linear collider is its energy scalability: in addition to the 250 GeV collision energy as the Higgs factory, the accelerator tunnel can be extended to reach collision energies of more than 1 TeV, and possibly well above with further progress in SRF technology.
In addition, a sustainable implementation of the ILC has been pursued under the heading "Green ILC" for about 10 years. The superconducting accelerator itself is highly energy-efficient, and the luminosity per unit AC power has been increased using nano-beam technology. The use of waste heat, renewable energies and integration in the local communities are part of these studies.
In June 2021, the ILC International Development Team (IDT) released its proposal for the ILC Preparatory Laboratory (Pre-lab). This document outlines the roles of the Pre-lab, which will be established prior to the construction of the ILC, as well as the technical preparations related to the ILC accelerator. In July 2021, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) in Japan convened the ILC advisory panel, and the panel released its recommendations on February 14. In the recommendations, the Pre-lab was still considered premature; on the other hand, a strengthening of the accelerator-related R&D efforts was recommended. Currently, the IDT is working on organizing the particularly important topics to be addressed, including increasing the international cooperation for their execution. The IDT aims to start these R&D efforts in 2023.
Recently the muon collider has been recognised as an important option to be considered for the future of particle physics. It is part of the European Accelerator R&D Roadmap developed in 2021 and approved by the CERN Council. Interest is also rising in the Americas and in Asia, as demonstrated for example by the ongoing Snowmass process. The presentation will give an introduction to the muon collider concept and the identified challenges. It will also describe the R&D progress and plans.
The LHC-forward experiment (LHCf), located at the Large Hadron Collider (LHC), is designed to measure the production cross section of neutral particles in the very-forward region, covering the pseudorapidity region above 8.4 (up to zero-degree particles). By measuring very-forward particle production at the highest energy available at an accelerator, LHCf provides fundamental information to improve the phenomenological hadronic-interaction models used in the simulation of air showers induced by ultra-high-energy cosmic rays in the atmosphere. The experiment consists of two small independent detectors placed 140 metres from the ATLAS interaction point (IP1), on opposite sides. Each detector is made of two sampling and position-sensitive calorimeters.
This contribution will focus on the Run II physics results of LHCf in proton-proton collisions at 13 TeV. First, the photon energy spectrum will be presented and compared with the predictions of several hadronic interaction models. The advantages of the ATLAS-LHCf combined analysis will then be discussed, and the preliminary spectrum of very-forward photons produced in diffractive collisions (tagged by ATLAS) will be shown together with model predictions. The preliminary Feynman-x and transverse-momentum spectra of $\pi^0$, and the Feynman-x spectrum of $\eta$, will also be presented. The photon and $\pi^0$ production cross sections provide important information about the electromagnetic component of an air shower, while the $\eta$ measurements give the possibility to probe the strange-quark-related contribution. Finally, the neutron energy spectrum measured in several pseudorapidity regions will be shown and compared with the predictions of various hadronic interaction models. From these measurements the average inelasticity of the collisions, which strongly affects the development of an air shower, has also been extracted.
Neutron stars (NS) as dark matter (DM) probes have gained broad attention recently, either through heating due to DM annihilation or through their stability in the presence of DM. In this work, we investigate spin-$1/2$ fermionic DM $\chi$ charged under a $U(1)_{X}$ in the dark sector. The massive gauge boson $V$ of the $U(1)_{X}$ gauge group can be produced in NS via DM annihilation. The produced gauge boson can decay into Standard Model (SM) particles before it exits the NS, despite its tiny couplings to SM particles. Thus, we perform a systematic study of $\chi\bar{\chi}\to2V\to4\,{\rm SM}$ as a new heating mechanism for NS, in addition to $\chi\bar{\chi}\to2\,{\rm SM}$ and kinetic heating from DM-baryon scattering. The self-trapping due to $\chi V$ scattering is also considered. We assume a general framework in which both kinetic- and mass-mixing terms between $V$ and the SM gauge bosons are present. This allows both vector and axial-vector couplings between $V$ and SM fermions even for $m_V\ll m_Z$. Notably, the contribution from the axial-vector coupling is not negligible when particles scatter relativistically. We point out that the above approaches to DM-induced NS heating have not yet been adopted in recent analyses. The detectability of the aforementioned effects in the NS surface temperature by future telescopes is discussed as well.
Physics beyond the Standard Model is required to explain both the baryon asymmetry of the universe and the dark matter relic density. In this talk we discuss a setup wherein both problems could possibly be solved within a unified framework. In particular, we consider a new scalar particle, which interacts with the Higgs boson and is charged under the SM gauge groups, that can trigger a first-order phase transition, as required for electroweak baryogenesis, and couples to a dark state consisting of a Majorana fermion.
We link state-of-the-art perturbative assessments of the phase-transition thermodynamics with the extraction of the dark matter energy density. On the one hand, resummation at two-loop order is needed; on the other hand, the inclusion of Sommerfeld enhancement and bound-state formation for the co-annihilating scalar particle is considered in the context of freeze-out dark matter. We also discuss the alternative production mechanism via freeze-in for the dark matter Majorana fermion.
We compare the model parameter space that reproduces the observed dark matter energy density with the one triggering a first-order phase transition, and find that there is a substantial overlap in some regions of the parameter space. We explore the impact of the various couplings on the electroweak phase transition and highlight the trends in the strength of the transition. Finally, we comment on the relation between a strong phase transition and the production of gravitational waves, and we determine the regions of the parameter space that are likely to produce a gravitational-wave background within the reach of the LISA interferometer sensitivity.
The commonly known Boltzmann suppression is the key ingredient for creating a chemical imbalance for thermal dark matter. In a degenerate/quasi-degenerate dark sector, a chemical imbalance can also be generated by a different mechanism, analogous to the radioactive decay law, known as co-decaying dark matter. In this work, we have studied the dynamics of a multicomponent, thermally decoupled, degenerate dark sector in a hidden $U(1)_X$ extension of the Standard Model. We compute the relic density and the temperature ($T^\prime$) evolution of the hidden sector by considering all possible $2\rightarrow2$ and $3\rightarrow2$ processes. We find that the production of energetic particles from $3\rightarrow2$ processes increases the temperature of the dark sector, whereas the rate of growth of the temperature is decelerated by the presence of $2\rightarrow2$ processes and the expansion of the Universe. We also study the prospects of detecting neutrino and $\gamma$-ray signals from DM annihilation via one-step cascade processes. We find that in the present scenario all existing indirect-detection constraints, arising from the measured fluxes of atmospheric neutrinos by Super-Kamiokande and of diffuse $\gamma$-rays by EGRET, Fermi-LAT, and INTEGRAL, can easily be evaded for the degenerate dark sector. For the quasi-degenerate scenario, however, the constraints are significant.
We analyze the possibility that the dark matter candidate arises from an approximate scale symmetry of a hidden scalar sector. The study includes the warm dark matter scenario and Bose-Einstein condensation, which may lead to massive dark scalar boson stars giving rise to direct detection through the observation of primary (direct) photons. The dynamical system of the scalar particles, the dilatons, at finite temperature and chemical potential is considered. The fluctuation of the particle density increases sharply with increasing temperature. As the phase transition approaches, the fluctuation of the particle density rises non-monotonically as the ground state of the relative chemical potential tends to the one with an infinite number of particles. Our results suggest that the phase transition in the boson star may be identified through the fluctuation in the yield of primary photons induced directly by the conformal anomaly. The fluctuation rate of the primary photons grows intensively in the infrared, becoming very large at the phase transition.
Self-interaction of particulate dark matter may help thermalise the galactic centre and drive core formation. The core radius is expected to be sensitive to the self-interaction strength of dark matter. In this work we study the feasibility of constraining dark matter self-interaction from the distribution of core radii in isolated haloes. We perform systematic $N$-body simulations of isolated galactic haloes in the mass range of $10^{10}$-$10^{15}M_{\odot}$, incorporating the impact of dark matter self-interaction with an interaction strength $\sigma/m$ in the range of $(0-10)\,\rm cm^{2}/\rm g$, with zero scattering cross-section signifying the collisionless cold dark matter scenario. Comparing the simulated dark matter density profiles with observational data from dwarf galaxies, low-surface-brightness galaxies and galaxy clusters, we provide a conservative upper limit on the self-interaction cross-section, $\sigma/m < 9.8\ \rm cm^2/\rm g$ at $95\%$ confidence. We also report a significant dependence of the derived bounds on the galactic density-distribution models assumed for the analysis.
The dark photon is one of the cold dark matter (CDM) candidates. It is predicted in the context of high-scale inflation models and in parts of string theory. However, experimental constraints on its mass range around $O$(10--100)$\,\mu\mathrm{eV}/c^2$ are not yet tight. The dark photon CDM is predicted to convert to a photon with a weak coupling constant ($\chi$). The frequency of the conversion photon corresponds to the CDM mass because of energy conservation ($h\nu \simeq mc^2$); e.g., a signal at 24.2 GHz corresponds to a mass of $100\,\mu\mathrm{eV}/c^2$.
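The quoted mass-frequency correspondence is a direct application of $h\nu \simeq mc^2$, as this short check with the standard value of Planck's constant shows:

H_EV_S = 4.135667e-15   # Planck constant in eV*s

mass_ev = 100e-6        # dark photon mass of 100 ueV (in eV/c^2)
freq_ghz = mass_ev / H_EV_S / 1e9
print(f"conversion photon frequency ~ {freq_ghz:.1f} GHz")   # ~24.2 GHz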
DOSUE-RR (Dark-matter Observing System for Un-Explored Radio-Range) is a series of experiments searching for the conversion photons based on technologies matured in observations of the cosmic microwave background. Hence, $O$(10--100)$\,\mathrm{GHz}$ is our target frequency range for the conversion photon. We developed a cryogenically cooled setup for 18--26.5 GHz observations, and we succeeded in improving the experimental sensitivity compared to previous studies.
We performed our first search for the dark photon CDM in the mass range of 74--110$\,\mu\mathrm{eV}/c^2$ with the world's best sensitivity. In this presentation, we will present our first results. Even assuming that no significant signal is found, our constraint on $\chi$ will be tighter than that given by cosmological observations: $\chi < (4$--$10) \times 10^{-10}$.
AtlFast3 is the next generation of high-precision fast simulation in ATLAS; it is being deployed by the collaboration and will replace AtlFastII, the fast simulation tool successfully used until now. AtlFast3 combines a parametrization-based Fast Calorimeter Simulation with a new machine-learning-based Fast Calorimeter Simulation built on Generative Adversarial Networks (GANs). The new fast simulation improves the accuracy of simulating objects used in analyses when compared to Geant4, with a focus on those that were poorly modelled in AtlFastII. In particular, the simulation of jets reconstructed with large radii, and the detailed description of their substructure, are significantly improved in AtlFast3. Additionally, the agreement between AtlFast3 and Geant4 is improved for high-momentum $\tau$-leptons. The modelling and performance are evaluated on events produced at 13 TeV centre-of-mass energy in the Run-2 data-taking conditions.
The reconstruction of electrons and photons in CMS depends on the topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material causes electrons and photons to start showering before reaching the ECAL. This effect, combined with the 3.8 T CMS magnetic field, leads to energy being spread over several clusters around the primary one. It is essential to recover the energy contained in these satellite clusters to achieve the best possible energy resolution. Historically, satellite clusters have been associated with the primary cluster using a purely topological algorithm which does not attempt to remove spurious energy deposits from additional pileup interactions (PU). The performance of this algorithm is expected to degrade during LHC Run 3 (2022+) because of the larger average PU levels and the increasing levels of noise due to the ageing of the ECAL detector. New methods are being investigated that exploit state-of-the-art deep learning architectures such as Graph Neural Networks (GNN) and self-attention algorithms. These more sophisticated models improve the energy collection and are more resilient to PU and noise. This talk will cover the challenges of training the models and the opportunities that this new approach offers.
While simulation is a crucial cornerstone of modern high energy physics, it places a heavy burden on the available computing resources. These computing pressures are expected to become a major bottleneck for the upcoming high luminosity phase of the LHC and for future colliders, motivating a concerted effort to develop computationally efficient solutions. Methods based on generative machine learning models hold promise to alleviate the computational strain produced by simulation, while providing the physical accuracy required of a surrogate simulator.
This contribution provides an overview of a growing body of work focused on simulating showers in highly granular calorimeters, which is making significant strides towards realising fast simulation tools based on deep generative models. Progress on the simulation of both electromagnetic and hadronic showers will be reported, with a focus on the high degree of physical fidelity and computational performance achieved. Additional steps taken to address the challenges faced when broadening the scope of these simulators, such as those posed by multi-parameter conditioning, will also be discussed.
During Run 2 of the Large Hadron Collider at CERN, the LHCb experiment spent more than 80% of its pledged CPU time producing simulated data samples. The upcoming upgraded version of the experiment will be able to collect larger data samples, requiring many more simulated events to analyze the data to be collected in Run 3. Simulation is a key necessity for analysis, to separate signal from background and to measure efficiencies. The needed simulation will far exceed the pledged resources, requiring an evolution in technologies and techniques to produce these simulated samples. In this contribution, we discuss Lamarr, a Gaudi-based framework to deliver simulated samples by parametrizing both the detector response and the reconstruction algorithms. Generative models powered by several algorithms and strategies are employed to effectively parametrize the high-level response of the individual components of the LHCb detector, encoding within neural networks the experimental errors and uncertainties introduced in the detection and reconstruction process. Where possible, models are trained directly on real data, resulting in a simulation process completely independent of the detailed simulation used to date.
Tau leptons are a key ingredient in many Standard Model measurements and searches for new physics at the LHC. The CMS experiment has released a new algorithm to discriminate hadronic tau-lepton decays against jets, electrons, and muons. The algorithm is based on a deep neural network combining fully connected and convolutional layers. It combines information from all individual reconstructed particles near the tau axis with information about the reconstructed tau candidate and other high-level variables. Many CMS Run 2 analyses have already benefitted from the improvement in performance. The algorithm is presented together with its measured performance in CMS Run 2 data.
The LHCb experiment is currently undergoing its Upgrade I, which will allow it to collect data at a five-times larger instantaneous luminosity. A decade from now, Upgrade II of LHCb will prepare the experiment to face another ten-fold increase in instantaneous luminosity. Such an increase in event complexity will pose unprecedented challenges to the online trigger system, for which a solution needs to be found. On the one hand, the current algorithms would be too slow to deal with the high level of particle combinatorics. On the other hand, the event size will become too large to afford the persistence of all the objects in the event for offline processing. This will oblige the experiment to make a very accurate selection of the interesting parts of each event for all the possible channels, which constitutes a gargantuan task. In addition to the challenges for the trigger, the new conditions will also bring a large increase in background levels for many of the offline data analyses, due to the enlarged particle combinatorics.
We propose a combined solution to the previous problems that has never been attempted before at the LHCb experiment due to its complexity: the substitution of the current signal-based trigger approach by a Deep-learning based Full Event Interpretation (DFEI) method. Specifically, we propose a new algorithm that would process in real time the final-state particles of each event, identifying which of them come from the decay of a beauty or charm heavy hadron and reconstructing the hierarchical decay chain through which they were produced. This high-level reconstruction would make it possible to automatically and accurately identify the part of the event which is interesting for physics analysis, allowing the rest of the event to be safely discarded. Complementarily, it would provide an automated and powerful way to suppress the background in many future LHCb analyses. All in all, a DFEI approach could revolutionise the way event reconstruction is performed at LHCb and pave the way for a step change in its physics reach.
In this talk, we present the conceptualisation, construction, training and performance of the first prototype of the DFEI algorithm, specialised for charged particles produced in beauty-hadron decays. The algorithm is based on a composition of Graph Neural Network (GNN) models, designed to handle the complexity of high-multiplicity events in a computationally efficient way. To be processed, each collision event is transformed into a graph, where the final-state particles are represented as nodes and the relations between them are represented as edges. A first GNN model has the goal of removing a fraction of the nodes that have not been produced in the decay of any beauty hadron. The output of that model is passed as input to a second one, whose aim is to remove a fraction of the edges between particles that don't share the same beauty-hadron ancestor. Finally, a third GNN model takes the output of the previous algorithm and aims at inferring the so-called "lowest common ancestor" (LCA) of each edge (a technique similar to the recently proposed LCA-matrix reconstruction for the Belle II experiment). The output of the DFEI processing chain can be directly translated into a set of filtered final-state particles and their inferred ancestors, with the predicted hierarchical relations amongst them.
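As a purely schematic sketch of this staged pruning idea (the stand-in scoring functions below are trivial placeholders for the trained GNN models, and the event is invented), an event can be represented as nodes and edges and filtered in two passes, with the third pass left to assign the LCA:

import itertools

# Hypothetical five-particle event: id -> True if the particle descends from
# a beauty hadron (in the real algorithm this is what the GNNs must infer).
particles = {1: True, 2: True, 3: False, 4: False, 5: True}

def node_score(pid):
    """Placeholder for the first GNN: probability of beauty ancestry."""
    return 0.9 if particles[pid] else 0.1

def edge_score(a, b):
    """Placeholder for the second GNN: same-beauty-ancestor probability."""
    return 0.9 if particles[a] and particles[b] else 0.1

# Stage 1: prune nodes unlikely to come from any beauty decay.
kept = [p for p in particles if node_score(p) > 0.5]

# Stage 2: prune edges between particles without a common beauty ancestor.
edges = [(a, b) for a, b in itertools.combinations(kept, 2)
         if edge_score(a, b) > 0.5]

# Stage 3 (not sketched): infer the lowest common ancestor for each edge.
print("kept particles:", kept)
print("kept edges:", edges)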
The algorithm has been trained on simulated events containing at least one beauty hadron (inclusive decay), obtained with a PYTHIA-based simulation in which the particle-collision conditions expected for the LHC Run 3 are replicated, and an approximate emulation of the LHCb detection and reconstruction effects is applied. Only particles in the LHCb geometric acceptance are considered, which leads to an average of around 150 charged particles per event, out of which typically fewer than 10 have been produced in the decay of (up to several) beauty hadrons. Graphics Processing Units (GPUs) are used as hardware accelerators to reduce training times. The final algorithm shows very good performance when evaluated on the described simulation dataset, with negligible overtraining visible. These first results give promising prospects towards an eventual usage of the algorithm in LHCb, and open the door to future developments and extensions.
The super-weak model is a particle physics model which extends the Standard Model (SM) by a new U(1) gauge symmetry. In addition to the new mediator $Z'$, a scalar particle $\chi$ is added to deal with the meta-stability of the SM vacuum, and right-handed neutrinos are introduced to account for the non-vanishing neutrino masses. In this talk, we investigate the cosmological implications of such an extension with our main focus being on dark matter production. We find that a light -- mass of $\mathcal{O}(10)$ MeV -- sterile neutrino can play the role of dark matter with a non-vanishing parameter space. We investigate present experimental bounds on the model parameters, both from particle physics experiments as well as from astrophysical observations.
We perform an analysis of leptogenesis in the context of a simple extension of the Standard Model with two fermions, one charged ($\chi$) and one neutral ($\psi$), in addition to three right-handed neutrinos, interacting through a charged gauge-singlet scalar $S$. The dark sector ($\chi$, $\psi$ and $S$) interacts feebly and produces a relic density consistent with existing data. The right-handed neutrinos decay into the charged scalar $S$ and a lepton, providing an additional source of CP asymmetry, along with contributing through the virtual exchange of $S$ in the standard decay channel. The advantage of this scenario is that it can naturally generate the observed baryon asymmetry of the universe, even for right-handed neutrino masses in the 10 TeV region, without requiring the neutrinos to be degenerate.
Present and upcoming neutrino experiments can be used to probe Dark Sectors (DS). We consider light DS interacting with the SM through well-motivated irrelevant portals. In our model-independent approach, the DS is characterized by only two scales: the cut-off scale $\Lambda_{\rm UV}$ of the irrelevant portals, and the mass gap $\Lambda_{\rm IR}$ of the DS, identified with the mass of its lightest particle (LDSP). If the energy of the production interactions is well separated from the two DS scales, the theory is approximately scale invariant, allowing the production rates to be computed in a model-independent fashion. DS production happens mainly through decays of mesons produced in interactions between beam protons and target nuclei, partonic production through the Drell-Yan process between nucleons, and bremsstrahlung from the beam protons. After production, the DS excitations can reach the neutrino detector, placed downstream of the target at a distance of the order of kilometres, and decay inside it back to SM particles through the irrelevant portals. These events are used to place bounds on the parameter space of the DS. In general, neutrino experiments are able to probe new regions of parameter space, inaccessible to high-energy experiments, given the different distance of the detectors compared to the typical length scale of collider detectors. Future neutrino experiments also have the advantage of being able to start collecting data on a fairly short time scale compared to other proposed experiments.
We discovered a chiral enhancement in the production cross-sections of massive spin-2 gravitons below the electroweak symmetry-breaking scale, which makes them ideal dark matter candidates for the freeze-in mechanism. The result is independent of the physics at high scales and points towards masses in the MeV range. The graviton is, therefore, a warm dark matter particle, as favoured by small-scale galaxy structures. We apply the novel calculation to a Randall-Sundrum model with three branes, showing a significant parameter space where the first two massive gravitons saturate the dark matter relic density.
We study a minimal model of pseudo-Dirac dark matter, interacting through transition electric and magnetic dipole moments. Motivated by the fact that xenon experiments can detect electrons down to $\sim$keV recoil energies, we consider O(keV) splittings between the mass eigenstates. We study the production of this dark matter candidate via the freeze-in mechanism. We discuss the direct-detection signatures of the model arising from the down-scattering of the heavier state, produced via Solar up-scattering, finding observable signatures at current and near-future xenon-based direct-detection experiments. We also study complementary constraints on the model from fixed-target experiments, lepton colliders, supernova cooling and cosmology. We show that next-generation xenon experiments can either discover this well-motivated and minimal dark matter candidate, or constrain how strongly inelastic dark matter can interact via the dipole-moment operators.
We study scenarios where Dark Matter is a weakly interacting particle (WIMP) embedded in an ElectroWeak (EW) multiplet. In particular, we consider both real SU(2) representations with hypercharge $Y=0$, which automatically avoid direct-detection constraints from tree-level $Z$-exchange, and complex ones with $Y\neq 0$. In the latter case, the minimal inelastic splitting between the DM and its EW neutral partner allows only multiplets with $Y=1/2$ and $Y=1$. We compute for the first time \emph{all the calculable thermal masses} for scalar and fermionic WIMPs up to the largest multiplets allowed by perturbative unitarity, including Sommerfeld enhancement and bound-state formation at leading order in gauge-boson exchange and emission. We then outline a strategy to probe these scenarios at future experiments. Real candidates and, for the minimal allowed splitting, most of the complex multiplets can be fully probed at future large-exposure direct-detection experiments. In the complex case, direct detection can cover most of the parameter space spanned by mass splittings, except for limited regions falling below the neutrino floor due to accidental cancellations. The existence of these regions represents a major motivation for a future muon collider, which can efficiently probe all EW multiplets up to the 5-plets by means of missing-mass, stub and charged-track searches.
Circular colliders have the advantage of delivering collisions to multiple interaction points (up to 4 IPs for e+e- collisions at the FCC-ee facility), which allows different detector designs to be studied and optimized individually, aiming at complementary physics targets. On the one hand, the detectors must satisfy the constraints imposed by the invasive interaction-region layout. On the other hand, the performance of heavy-flavour tagging, particle identification, tracking, and particle-flow reconstruction, and the lepton, jet, missing-energy, and angular resolutions, need to match the physics program and the exquisite statistical precision offered by FCC-ee. During the FCC feasibility study (2021-2025), benchmark physics processes will be used to determine, via appropriate simulations, the requirements on the detector performance or design that must be satisfied to ensure that the systematic uncertainties of the measurements are commensurate with their statistical precision (which is as low as $10^{-6}$ for the e+e- running at the Z boson pole). Preliminary studies, which are a crucial input to further optimization of the two baseline concepts, IDEA and CLD, and to the development of new concepts, will be presented here.
The Circular Electron Positron Collider (CEPC) is a proposed high-luminosity factory for massive SM particles. It aims to deliver millions of Higgs bosons, trillions of Z bosons and hundreds of millions of W bosons in 10-20 years of data taking, and has the potential to upgrade its center-of-mass energy to 360 GeV, producing decent statistics of t-tbar events. The CEPC is expected to search for New Physics not only via precision Higgs measurements, but also via precise EW, flavor and QCD measurements and direct hunting for New Physics signals.
The high luminosity and multifold observations of the CEPC make it extremely difficult, and extremely important, to design and optimize the CEPC detector system. In this talk, I will summarize the challenges and current status of the CEPC experimentation, as well as the key detector-performance requirements quantified with benchmark physics analyses.
The future circular electron-positron collider (FCCee) is receiving much attention in the context of the FCC Feasibility Study currently in progress in preparation for the next EU strategy update. We present IDEA, a detector concept optimized for FCCee and composed of a vertex detector based on DMAPS, a very light drift chamber, a silicon wrapper, a dual readout calorimeter outside a thin 2 Tesla solenoid and muon chambers inside the magnet yoke. In particular we discuss the physics requirements and the technical solutions chosen to address them. We then describe the detector R&D currently in progress and show the expected performance on some key physics benchmarks.
The IDEA drift chamber is designed to provide efficient tracking, a high-precision momentum measurement, and excellent particle identification by exploiting the cluster-counting technique. The ionization process by charged particles is the primary mechanism used for particle identification (dE/dx). However, the significant uncertainties in the total energy deposition limit the particle-separation capabilities. The cluster-counting technique (dN/dx) takes advantage of the Poisson nature of the primary ionization process and offers a statistically more robust method to infer mass information. A simulation of the ionization-cluster generation is needed to investigate the potential of the cluster-counting technique on physics events. For this purpose, an algorithm which uses the energy-deposit information provided by the Geant4 software tools has been developed to reproduce the cluster-size and cluster-density distributions in a fast and convenient way. The results obtained confirm that the cluster-counting technique allows a resolution two times better than the traditional dE/dx method to be reached. To validate the simulation results, a first beam test, using a 165 GeV/c muon beam on a setup made of different-size drift tubes equipped with different-diameter sense wires, has been performed at CERN, collecting data with two gas mixtures (90% He - 10% iC4H10 and 80% He - 20% iC4H10) at different gas gains and angles between the wire direction and the ionizing tracks. The main goals of the beam test are: to ascertain the Poisson nature of the cluster-counting technique, to establish the most efficient cluster-counting and electron-clustering algorithms among the various ones proposed, and to define the limiting effects for fully efficient cluster counting, such as the cluster dimensions, the space-charge density around the sense wire, and the dependence of the counting efficiency on the impact parameter. The ionization-clustering simulation algorithms and the experimental beam-test results will be presented in this talk.
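A toy Monte Carlo along the following lines illustrates why counting primary clusters (dN/dx) is statistically more robust than summing the deposited energy (dE/dx); the cluster density and the heavy-tailed per-cluster deposit model are invented for illustration:

import numpy as np

rng = np.random.default_rng(42)
n_tracks = 20_000
lam = 20.0          # assumed mean number of primary clusters on the track

# dN/dx: the number of primary ionization clusters is Poisson distributed.
n_clusters = rng.poisson(lam, n_tracks)

# dE/dx: the total deposit has long Landau-like tails; crudely modelled here
# by summing heavy-tailed per-cluster deposits (lognormal is a stand-in).
de_dx = np.array([rng.lognormal(mean=0.0, sigma=1.0, size=n).sum()
                  for n in n_clusters])

res_dndx = n_clusters.std() / n_clusters.mean()
res_dedx = de_dx.std() / de_dx.mean()
print(f"relative resolution: dN/dx {res_dndx:.2f} vs dE/dx {res_dedx:.2f}")

The cluster count fluctuates only as $1/\sqrt{N}$, while the summed deposit inherits the large per-cluster fluctuations, which is the qualitative origin of the factor-of-two improvement quoted above.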
The IDEA detector concept for a future e+e- collider adopts an ultra-low-mass drift chamber as its central tracking system. The He-based ultra-low-mass drift chamber is designed to provide efficient tracking, a high-precision momentum measurement, and excellent particle identification by exploiting the cluster-counting technique. Studies with the Garfield++ simulation confirm that the cluster-counting technique allows a resolution two times better than the traditional dE/dx method to be reached. To study the impact of the cluster-counting technique on physics events, an algorithm which uses the energy-deposit information provided by the Geant4 simulations has been developed to reproduce the cluster-size and cluster-density distributions in a fast and convenient way. This work describes the expected tracking performance, obtained with full simulation, for track reconstruction and particle identification on detailed simulated physics events. Moreover, the details of the drift chamber's construction parameters will be described, including the inspection of new wire materials, new techniques for soldering the wires, the development of an improved drift-cell schema, and the choice of gas mixture.
The Circular Electron Positron Collider (CEPC) is designed to operate at a center-of-mass energy of 240 GeV as a Higgs factory, as well as at the Z pole and at the WW production threshold for electroweak precision measurements and the study of flavor physics. Good particle identification for charged hadrons is essential for flavor physics and jet studies. To meet this requirement, a tracker with a drift chamber between the silicon inner tracker (SIT) and the silicon external tracker (SET) is proposed in the CEPC 4th conceptual detector design. The drift chamber is expected to provide excellent PID with the cluster-counting technique.
In our study, a waveform-based full simulation has been performed, which includes waveform generation with the Garfield++ program, simulation of electronics response and noise effects, as well as waveform analysis utilizing effective peak-finding algorithms. We had previously developed a peak-finding algorithm based on a traditional differential approach; it has been further adapted using realistic noise and electronics-response parameters. We also explore the advantages of neural networks for resolving a time-sequence problem: a recurrent neural network (RNN) algorithm shows great peak-detection ability on MC simulation. Several optimizations in terms of cell size and choice of gas mixtures have been performed to improve the PID capability. Preliminary results show that the K/pi separation power could be more than 2$\sigma$ up to 20 GeV/c. The design of the drift chamber will be optimized based on the simulation study and the prototype test.
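A minimal version of such a derivative-based peak finder on a synthetic waveform is sketched below; the pulse shape, noise level and thresholds are invented and far simpler than the tuned algorithm described above:

import numpy as np

rng = np.random.default_rng(3)

# Synthetic waveform: three ionization-electron pulses plus white noise.
t = np.arange(500)
waveform = rng.normal(0.0, 0.03, t.size)
for t0 in (100, 230, 260):
    waveform += np.exp(-(t - t0) ** 2 / (2 * 4.0 ** 2))

# Light box smoothing to suppress noise-induced zero crossings.
smooth = np.convolve(waveform, np.ones(5) / 5, mode="same")

# Differential peak finding: a peak is a zero crossing of the first
# derivative (positive -> negative) above an amplitude threshold.
deriv = np.diff(smooth)
candidates = np.where((deriv[:-1] > 0) & (deriv[1:] <= 0))[0] + 1
peaks = [int(i) for i in candidates if smooth[i] > 0.5]   # assumed threshold

print("peaks found near samples:", peaks)   # expect ~100, 230, 260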
HEPscape is an escape room about high energy physics. The project has been designed and created by researchers from the National Institute for Nuclear Physics (INFN) in Rome with the support of Sapienza University of Rome.
Escape rooms for learning purposes are increasingly common nowadays. To the authors' knowledge, HEPscape is the first particle physics escape room ever built. An escape room is an activity in which one finds oneself involved in a specific environment: a series of unexpected events and challenges has to be solved in order to get out of the game.
The proposed project, characterized by strong innovativeness and originality, aims to introduce the participants to the topics of high energy physics with a playful and fun approach, in which one learns, by playing and collaborating in a group, what team work means. HEPscape visitors learn, while having fun, notions concerning particle physics and fundamental interactions, the functioning of the Large Hadron Collider (LHC) at CERN in Geneva, and the recent or ongoing research activities of the Compact Muon Solenoid (CMS) and ATLAS, two of the multi-purpose experiments operating at the LHC.
The visitors have the impression of visiting the LHC at CERN and of entering one of the experimental underground control rooms. HEPscape makes use of high-technology projectors, remotely controlled LED lamps and printed posters which replicate the control-room environment. Through clues kept in lockers hidden in the room, the visitors discover how particle accelerators and high energy physics experiments are built and how they work. The project aims at a varied audience, from elementary school students to high school students to adults, with attention also to people with special needs.
The riddles can be tuned to the age of the group, resulting in a fun and affordable experience for everyone. HEPscape is made of portable equipment that can be transported and assembled in less than two hours. This allows it to be used in science fairs and exhibitions. In addition, it can be brought on demand to high schools, including in remote places. The possibility of creating the escape room in different languages is also envisaged.
So far HEPscape has registered more than 1000 visitors. Feedback and experiences from two science fairs held in Italy in 2021 will also be presented in the talk.
This contribution presents "The Hidden Force. Women Scientists in Physics and in History", where the lives of four twentieth-century female scientists, who overcame the stereotypes of their era, invite us to discover the importance of women's contributions to the advancement of knowledge. This is actually more than a theatre play for science outreach: it is a project to nurture a dialogue with the public on Physics and History, and to tell and celebrate women who have always played a crucial, but less recognized, role in Science and Society. The pièce was created by a team of women researchers in physics, technological innovation, history and theatre, who combined their skills and experiences to talk about Science through the poetic word, the song, and the scenic space: this work demonstrates that Art can complement and enrich the STEM (Science, Technology, Engineering and Mathematics) know-how, to create a truly STEAM project in Education.
The show offers a view of the complexity of the twentieth century through the eyes of four women physicists, who were only partly credited protagonists, in spite of their important discoveries and their ingenuity: the nuclear physicist Marietta Blau, the particle physicists Chien-Shiung Wu and Milla Baldo Ceolin and the astronomer Vera Cooper Rubin. Their work ranged from innovative methods to reveal the essence of nuclear processes to experiments on their hidden symmetries, from the elusive nature of neutrinos to the observation of distant galaxies. Their scientific and personal lives were intertwined with the social and historical changes of an international context characterized by great upheavals. Their stories reveal a common fabric of strong intellectual and human value, talent and determination, which led them to achieve fundamental scientific results towards a deeper understanding of nature.
Today we know that there are four fundamental forces that govern every process in the Universe, from the infinitely large distances of the cosmos to the most intimate structure of matter and radiation: the gravitational force, the electromagnetic force and the weak and strong nuclear forces. The theoretical and experimental study of these interactions through quantum processes between elementary particles, astrophysics, nuclear physics and gravitational phenomena, with their effects and their technological applications, is the core of the mission of research institutions such as the National Institute for Nuclear Physics (INFN). The INFN, born with Enrico Fermi and the boys of Via Panisperna, today celebrates its 70th anniversary and drives scientific progress alongside Italian universities, in its laboratories and on the international scene.
After the Second World War, the European Council for Nuclear Research (CERN), at present the largest particle physics laboratory in the world, was created with a dual mandate: to provide excellent science and to bring nations together, i.e. to guarantee science for peace. Today it is a strong voice for both fundamental physics and international cooperation, also with respect to the current humanitarian crises.
The century of the greatest discoveries in Science brought huge progress to human society, while it was also the century of horrors like the Holocaust and the atomic bomb. In the limited space and time of a theatre play, we were able to address all these essential themes, framing the passion and the roots of fundamental physics discoveries within the real lives of four exemplary women.
In the context of (Physics) Education, The Hidden Force aims to encourage all young people, and especially women, through the strong emotion that art can evoke, to follow with determination their interests, their talent and their heart in the choice of their path of study and life.
In a broader panorama, the show is an excellent occasion to rekindle in each spectator, man or woman, young or not, the desire to seek and recognize the seeds of that Hidden Force which drives us to love Science as a place of respect and civil coexistence.
The play is directed by Gabriella Bordin and performed by Elena Ruzza and the soprano Fé Avouglan, accompanied at the piano by Diego Mingolla. The text was written and published jointly by the scientific and artistic team. The music was mostly chosen by Fé Avouglan, while some pieces were created expressly by the musician Ale Bavo. It has been performed since October 2020 in 12 theaters and academic institutions, and it will be staged in the near future in Genova, Milano, Reggio Calabria and Rovereto.
More information can be found on the web site http://laforzanascosta.to.infn.it.
Vera Cooper Rubin (1928-2016) was an American astronomer who made fundamental observations on the orbits of stars around the center of their galaxy and on the distribution of galaxies in the Universe, establishing their organization in clusters. She was responsible for the discovery of the anomaly in the motion of stars in galaxies, experimental evidence in support of the theory of dark matter formulated by Fritz Zwicky in the 1930s.
Marietta Blau (1894-1970) was an Austrian nuclear physicist who pioneered the detection and study of processes between elementary particles by means of photographic emulsions, establishing a method that was the basis of Nuclear Physics in the twentieth century. She explored the properties of cosmic rays and high-energy particles, discovering the phenomenon of disintegration stars in nuclear spallation.
Chien-Shiung Wu (1912-1997) was a Chinese nuclear physicist who moved to the United States before the Second World War and became a reference figure in the study of beta decay and nuclear physics. She designed and carried out a famous experiment that demonstrated the violation of parity symmetry in processes dominated by weak interactions, opening new scenarios in Physics and the way to the Nobel Prize for Lee and Yang.
Milla Baldo Ceolin (1924-2011), an Italian particle physicist, cultured and multifaceted, was the first woman to obtain a full professorship at the University of Padua in 1963, where she had graduated in 1952. Her research on weak interactions ranged from the study of K mesons in cosmic rays, to neutrinos and their oscillations, to the stability of matter. She experienced the transition from the "small science" of the study of particles using nuclear emulsions to the "big science" of large accelerators.
The Extreme Energy Events Project (EEE) has represented, since its starting phases in 2005, a breakthrough in outreach activities in High Energy Physics. The innovative idea of EEE is a strong and direct involvement of high school students in the construction and operation of an experiment to measure Extensive Atmospheric Showers (EAS) at the Earth's surface.
The EEE Project is based on an array of muon telescopes, each one consisting of three high-performance Multigap Resistive Plate Chambers; the EEE chambers were built by students and teachers at CERN, then transported and installed inside Italian school buildings, where local teams help to monitor and operate the detectors and contribute to data analysis. About 60 EEE telescopes are presently installed in Italy. Since 2014, coordinated data taking periods have been performed each year, and more than 100 billion candidate muon tracks have been collected and used for many analyses. Every year about 100 schools participate in the EEE Project, half of them without a telescope but included in the online operations, with hundreds of students and teachers involved in activities directly related to EEE, but also touching different physics topics. The COVID-19 pandemic has strongly affected the experimental activities of the EEE Project; however, in the last two years the online activities were strengthened, with an intense program of collaboration meetings, masterclasses and topical seminars organized with enormous success.
A general overview of the EEE outreach activities and future plans will be presented.
GRaffa is a project designed and carried out by the Associazione Giovanile Le Scie Fisiche, funded by the Lazio Region, with the aim of sharing the excitement of science and increasing appreciation for it, mainly through daily experiences of physical phenomena. It was conceived by young physics and philosophy researchers (mainly postdocs and PhD students) to spread physics and scientific culture using a simple and fun language dedicated to the general public, especially school students. The main goal of the activities realized within the project is to let people understand that physics is not just the study of distant galaxies, particle accelerators or complex mathematical formulas. When we use the phone, or ask ourselves when the right moment is to salt the water for pasta, we are already doing science!
Approaching science too often generates an "allergic reaction". This cultural problem is also frequently encountered among young students who are in the learning phase and who are hostile to scientific proposals, considering them too complicated. We are strongly convinced that this prejudice towards the world of science makes a spontaneous approach more difficult. Scientific dissemination work is essential to tear down this wall, particularly inside schools. It is necessary to make children understand that science is no more complicated than literature or history, but that it simply uses a different language: mathematics.
Several activities have been planned and realized within GRaffa to spread these messages as widely as possible: a scientific column on social networks about everyday life science and curiosities; a YouTube channel with popular-science clips and short videos; discussions and hands-on laboratories in schools; popular talks; and scientific laboratories for the general public during popular events, such as the European Researchers' Night.
In the talk, we will present the results achieved with GRaffa, showing the activities performed online and in situ. For primary and middle school students, we designed simple, yet interesting, experiments able to convey, in an enjoyable manner, how science permeates our everyday life. For high school students, we prepared presentations on the structure of university courses in physics, the various theoretical and experimental branches of this fascinating discipline, as well as different job prospects.
These activities were very successful, as many students showed interest and asked several questions. Since the main goal of our association is to intrigue very young people, bridging the gap between their insecurities and their actual capabilities is crucial. The objective of the workshops, seminars and experiments was therefore to break this "allergic reaction" and bring students, from primary school to high school, closer to the study of scientific disciplines. What we found to be particularly relevant for the effectiveness of these activities is that they were designed and promoted by a group of young researchers. This is probably because the generation gap is rather narrow, which allows the younger generation to feel close to the researchers and thus to fully enjoy their presentations.
Gravity is, by far, one of the scientific themes that have most piqued the curiosity of scientists and philosophers over the centuries. From Aristotle to Einstein, from Hawking to the present day, scientists have always put creative effort into solving the main puzzles in the understanding of our universe: why things move, the birth of the cosmos, dark matter and dark energy are just a few examples of gravity-related problems. Philosophers are interested in this field too, and whenever they have met physicists' needs, a new conceptual revolution has started. However, since Einstein's relativistic theories and the subsequent advent of quantum mechanics, physicists and philosophers have taken different paths, each absorbed by the intrinsic conceptual and mathematical difficulties inherited from their own studies. A question arises: is it possible to restore a unitary vision of knowledge, overcoming the scientific-humanistic dichotomy that has become established over time? The answer is certainly not trivial, but we can start from school to experience a new vision of unified knowledge. From this need, the "Gravitas" project was born. "Gravitas" is a multidisciplinary outreach and educational program devoted to high school students (17-19 years old) that mixes contemporary physics and the philosophy of science. Coordinated by the Cagliari Section of the National Institute of Nuclear Physics, in Italy, "Gravitas" started in December 2021 with an unconventional online format: two researchers coming from different fields of research (physics vs philosophy, history of science, scientific communication) meet a moderator and informally discuss gravity and related phenomena. The public can chat and indirectly interact with them during the YouTube live stream using Mentimeter. The project involves 250 students from 16 high schools in Sardinia, Italy. Students have also been involved in the creation of posts for social media platforms whose content is based on the seminars they attended during the project. In this talk, we present the project and discuss its possible outcomes concerning the introduction of a multidisciplinary approach to teaching physics, philosophy, and the history of contemporary physics in high schools.
In 2016 INFN created a National Committee to coordinate third mission activities. One of its aims was to improve the quality of INFN's local outreach activities. The INFN results for the VQR 2011-2014 (the Italian national exercise to evaluate research quality) showed a clear difference in quality between the outreach activities organized by the Communication Office and, despite the enthusiasm of the people involved, those organized by local units. We will describe how this issue was addressed in a holistic way: providing training, support and resources to the researchers involved. We will also present some examples. The results of the new VQR (covering the period up to 2019), just announced on April 13, show that this effort was successful.
We discuss the phase diagram of QCD in the presence of a strong magnetic background field, providing numerical evidence, based on lattice simulations of QCD with 2+1 flavours and physical quark masses, that the QCD crossover turns into a first order phase transition for large enough magnetic field, with a critical endpoint located between $eB = 4$ GeV$^2$ (where we found an analytic crossover at a pseudo-critical temperature $T_c = (98\pm3)$ MeV) and $eB = 9$ GeV$^2$ (where the measured critical temperature is $T_c = (63\pm5)$ MeV).
In light-front holographic QCD, a Schr\"{o}dinger-like equation determines the transverse mode in the chiral limit. In the supersymmetric formulation of holographic QCD, each baryon has two supersymmetric partners, a meson and a tetraquark. The mass degeneracy of these partner states is lifted by the combination of two mechanisms: chiral symmetry breaking and longitudinal confinement. In this talk, we show that when the 't Hooft equation determines the longitudinal mode, a good global description of the full hadron spectrum is obtained.
Scattering amplitudes are the fundamental building blocks of collider observables. Comparing high precision measurements to theory predictions requires computing them to high perturbative order. The growth in the number of loops significantly increases the complexity of the problem. Novel mathematical methods have made it possible to compute QCD corrections to four-point massless processes at the state-of-the-art three-loop order. We will describe these modern tools and show their application to the recently published diphoton production in the gluon-fusion channel. This particular process is a leading background for Higgs production in the discovery channel. The analytic amplitude which we have computed can be used to derive the fully differential NNLO hadronic cross section, since the subtraction schemes are already available. Because of the interference with the signal, it can put stronger bounds on the Higgs width.
QCD topological observables are essential inputs for theoretical predictions of axion phenomenology, which are of utmost importance for current and future experimental searches for this particle. Among them, we find the topological susceptibility, related to the axion mass.
We present lattice results for the topological susceptibility in QCD at high temperatures, obtained by discretizing this observable via spectral projectors on eigenmodes of the staggered Dirac operator, and we compare them with those obtained with the standard gluonic definition. The adoption of the spectral discretization is motivated by the large lattice artifacts affecting the standard gluonic susceptibility, related to the choice of non-chiral fermions in the lattice action.
We calculate the relativistic six-meson scattering amplitude at low energy within the framework of QCD-like theories with $n$ degenerate quark flavors at next-to-leading order in the chiral counting. We discuss the cases of complex, real and pseudo-real representations, i.e. with global symmetry and breaking patterns $\text{SU}(n)\times\text{SU}(n)/\text{SU}(n)$ (extending the QCD case), $\text{SU}(2n)/\text{SO}(2n)$, and $\text{SU}(2n)/\text{Sp}(2n)$. In case of the one-particle-irreducible part, we obtain analytical expressions in terms of nine six-meson subamplitudes based on the flavor and group structures. We extend on our previous results [PRD 104 (2021):054046] obtained within the framework of $\text{O}(N+1)/\text{O}(N)$ non-linear sigma model, with $N$ being the number of meson flavors. This work allows for studying a number of properties of six-particle amplitudes at one-loop level. It also serves as a first step in comparing with lattice-QCD results on three-pion scattering.
In the past decade, antenna subtraction has been used to compute NNLO QCD corrections to a series of phenomenologically relevant processes. However, as for other subtraction schemes at NNLO, the application of this method has proceeded in a process-dependent way, with each new calculation requiring a significant amount of work. In this talk we present an improved version of antenna subtraction which aims at achieving an automated and process-independent generation of the subtraction terms required for an NNLO calculation, as well as at overcoming some intrinsic limitations of the traditional formulation. In this new approach, a set of integrated dipoles is used to reproduce the known infrared singularity structure of one- and two-loop amplitudes in colour space. The real-virtual and double-real subtraction terms are subsequently generated by inferring their structure from the corresponding integrated subtraction terms. We demonstrate the applicability of this method by computing the full-colour NNLO correction to hadronic three-jet production in the gluons-only approximation.
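As a rough schematic (our sketch of the standard NNLO subtraction bookkeeping that antenna subtraction realizes, not of the new colour-space formulation presented in the talk), each partonic channel is assembled from three separately finite phase-space integrals,
\[
d\hat\sigma_{\mathrm{NNLO}} = \int_{\Phi_{n+2}} \left( d\hat\sigma^{RR} - d\hat\sigma^{S} \right)
+ \int_{\Phi_{n+1}} \left( d\hat\sigma^{RV} - d\hat\sigma^{T} \right)
+ \int_{\Phi_{n}} \left( d\hat\sigma^{VV} - d\hat\sigma^{U} \right),
\]
where $d\hat\sigma^{S}$, $d\hat\sigma^{T}$ and $d\hat\sigma^{U}$ denote the double-real, real-virtual and double-virtual subtraction terms, constructed so that their integrated sum cancels and the infrared poles of the virtual amplitudes are reproduced.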
We present searches for additional neutral and charged Higgs bosons with the CMS detector using the full Run 2 dataset. The presented searches also include decays of heavy Higgs bosons into other Higgs bosons (both the 125 GeV Higgs boson and additional Higgs bosons).
We present searches for rare and beyond-the-standard-model decays of the Higgs boson with the CMS detector using the full Run 2 dataset. Amongst others, Higgs boson decays to two (pseudo-)scalars and to invisible particles are discussed.
Precision studies of the properties of the Higgs and gauge bosons may provide a unique window for the discovery of new physics at the LHC. New phenomena can in particular be revealed in the search for lepton-flavor-violating or exotic decays of the Higgs and Z bosons, as well as in their possible couplings to hidden-sector states that do not interact under Standard Model gauge transformations. This talk presents recent searches by the ATLAS experiment for decays of the Higgs and Z bosons to new particles, using collision data at $\sqrt{s}$ = 13 TeV collected during the LHC Run 2.
The discovery of the Higgs boson with a mass of about 125 GeV completed the particle content predicted by the Standard Model. Even though this model is well established and consistent with many measurements, it is not capable of explaining some observations on its own. Many extensions of the Standard Model addressing such shortcomings introduce additional charged and neutral Higgs-like bosons. The current status of searches for additional low- and high-mass charged and neutral Higgs bosons, based on the full LHC Run 2 dataset of the ATLAS experiment at 13 TeV, is presented.
In 2018 CMS reported an excess in the light Higgs-boson search in the diphoton decay mode at about 95 GeV, based on Run 1 and first-year Run 2 data. The combined local significance of the excess was $2.8\,\sigma$. The excess is compatible with the limits obtained in the ATLAS searches in the diphoton channel. Recently, CMS reported another local excess with a significance of $3.1\,\sigma$ in the light Higgs-boson search in the di-tau final state, which is compatible with the interpretation of a Higgs boson with a mass of about 95 GeV. We show that the observed results can be interpreted as manifestations of a Higgs boson in the Two-Higgs Doublet Model with an additional real singlet (N2HDM). We find that the lightest Higgs boson of the N2HDM can fit both excesses simultaneously, while the second-lightest state satisfies the Higgs-boson measurements at 125 GeV, and the full Higgs-boson sector is compatible with all Higgs exclusion bounds from the searches at LEP, the Tevatron and the LHC, as well as with other theoretical and experimental constraints. Finally, we demonstrate that it is furthermore possible to accommodate the excesses observed by CMS in the two search channels together with a local $2.3\,\sigma$ excess in the $b \bar b$ final state observed at LEP in the same mass range.
Motivated by results recently reported by the CMS Collaboration about an excess in the di-photon spectrum at about 96 GeV, especially when combined with another long-standing anomaly at the same value in the $b\bar b$ invariant mass spectrum in four-jet events collected at LEP, we show that a possible explanation of both phenomena can be found at the 1$\sigma$ level in a generic 2-Higgs Doublet Model (2HDM) of Type-III in the presence of a specific Yukawa texture, wherein Lepton Flavour Violating (LFV) (neutral) currents are induced at tree level. Bounds from Higgs data play a major role in limiting the parameter space of this scenario, yet we find solutions with $m_H = 125$ GeV and $m_h = 96$ GeV consistent with current theoretical and experimental bounds.
The extension of the Standard Model (SM) with two Higgs triplets offers an appealing way to account for both tiny Majorana neutrino masses via the type-II seesaw mechanism and the cosmological matter-antimatter asymmetry via the triplet leptogenesis. In this paper, we classify all possible accidental symmetries in the scalar potential of the two-Higgs-triplet model (2HTM). Based on the bilinear-field formalism, we show that the maximal symmetry group of the 2HTM potential is ${\rm SO(4)}$ and eight types of accidental symmetries in total can be identified. Furthermore, we examine the impact of the couplings between the SM Higgs doublet and the Higgs triplets on the accidental symmetries. The bounded-from-below conditions on the scalar potential with specific accidental symmetries are also derived. Taking the ${\rm SO(4)}$-invariant scalar potential as an example, we investigate the vacuum structures and the scalar mass spectra of the 2HTM.
One of the main physics goals of the MicroBooNE experiment at Fermilab is to perform high-statistics measurements of neutrino-argon interaction cross sections. These measurements will be essential for future neutrino oscillation experiments, including the Short-Baseline Neutrino program and the Deep Underground Neutrino Experiment (DUNE), to achieve an unprecedented level of precision. Inclusive cross-section data provide an important overall benchmark for the interaction modeling needed for these future efforts, and exclusive measurements of neutrino-induced pion production provide insight into the dominant reaction mode at the neutrino energies relevant for DUNE. In this talk, we present some of the latest neutrino-argon cross-section measurements in MicroBooNE, including new results for charged-current inclusive neutrino cross sections and pion-containing final states.
The MicroBooNE detector is a liquid argon time projection chamber (LArTPC) with an 85 ton active mass that receives flux from the Booster Neutrino Beam and the Neutrinos from the Main Injector (NuMI) beam, providing excellent spatial resolution of the reconstructed final state particles. Since 2015 MicroBooNE has accumulated many neutrino and anti-neutrino scattering events on argon nuclei, enabling searches for rare interaction channels.
The Cabibbo suppressed production of hyperons in anti-neutrino-nucleus interactions provides sensitivity to a range of effects, including second class currents, SU(3) symmetry violations and reinteractions between the hyperon and the nuclear remnant. This channel exclusively involves anti-neutrinos, offering an unambiguous constraint on wrong sign contamination. The effects of nucleon structure and final state interactions are distinct from those affecting the quasielastic channel and modify the $\Lambda$ and $\Sigma$ production cross sections in different ways, providing new information that could help to break their degeneracy. Few measurements of this channel have been made, primarily in older experiments such as Gargamelle [1,2].
We present the measurement of the cross section for direct (Cabibbo suppressed) $\Lambda$ production in muon anti-neutrino interactions with argon nuclei in the MicroBooNE detector, using neutrinos from the off-axis NuMI beam. The event selection and treatment of systematic uncertainties will also be described.
[1] O. Erriquez et al., Nucl. Phys. B140, 123 (1978)
[2] O. Erriquez et al., Phys. Lett. B 70, 383 (1977)
The MicroBooNE collaboration recently released a series of measurements aimed at investigating the nature of the excess of low-energy electromagnetic shower events observed by the MiniBooNE collaboration. In this talk, we will present the latest results from a search for single photons in MicroBooNE, as well as from a series of three independent analyses, leveraging different reconstruction paradigms, which look for an anomalous excess of electron neutrino events. We will additionally highlight new results that use these well-understood selections to perform a search for an eV-scale sterile neutrino in the 3+1 oscillation framework. Constraints are presented for regions of sterile neutrino oscillation parameter space relevant to the Gallium/Reactor $\nu_e$ disappearance anomaly and the LSND/MiniBooNE $\nu_e$ appearance anomalies.
The aim of this presentation is to introduce a dark extension of the SM that communicates with it through three portals (neutrino, vector and scalar mixing), by which it could be possible to explain the low-energy excess (LEE) at MiniBooNE. In the model, heavy neutral leptons (HNLs) with masses around 10 MeV to 2 GeV are produced by upscattering via a dark photon and subsequently decay into an electron-positron pair and neutrinos. If sufficiently collimated or asymmetric in energy, these events can be detected as a single shower and explain the MiniBooNE LEE. We show that the model can reproduce the observed energy spectrum well. We consider two cases: 3$\nu$ + 1 HNL and 3$\nu$ + 2 HNLs.
The Short-Baseline Near Detector (SBND) will be one of three liquid argon time projection chamber (LArTPC) neutrino detectors positioned along the axis of the Booster Neutrino Beam (BNB) at Fermilab, as part of the Short-Baseline Neutrino (SBN) Program. The detector is currently in the construction phase and is anticipated to begin operation in the first half of 2023. SBND is characterised by superb imaging capabilities and will record over a million neutrino interactions per year. Thanks to its unique combination of measurement resolution and statistics, SBND will carry out a rich program of neutrino interaction measurements and novel searches for physics beyond the Standard Model (BSM). It will enable the full potential of the overall SBN sterile neutrino program by performing a precise characterisation of the unoscillated event rate, and by constraining BNB flux and neutrino-argon cross-section systematic uncertainties. In this talk, the physics reach, current status, and future prospects of SBND are discussed.
The Short Baseline Near Detector (SBND), a 112 ton liquid argon time projection chamber, is the near detector of the Short-Baseline Neutrino program at Fermilab. SBND has the unique characteristic of being remarkably close (110 m) to the neutrino source while not being perfectly aligned with the neutrino beamline, in such a way that the detector is traversed by neutrinos arriving at different angles with respect to the beam axis. This is known as the PRISM feature of SBND, which allows multiple neutrino fluxes to be sampled with the same detector. SBND-PRISM can be utilized to study distinctive neutrino-nucleus interaction and exotic physics signals.
New results of the DANSS experiment on searches for sterile neutrinos are presented. They are based on more than 6 million inverse beta decay (IBD) events collected at 10.9, 11.9, and 12.9 meters from the 3.1 GW reactor core of the Kalinin Nuclear Power Plant in Russia. A new, more robust method of energy calibration is used. Different statistical approaches are compared. The dependence of the neutrino spectrum on the 239Pu fission fraction is presented. The reactor power was measured using the IBD event rate during 5.5 years, with a statistical accuracy of 1.5% in 2 days and a relative systematic uncertainty of about 0.5%. The status of the DANSS upgrade will be presented. This upgrade should allow DANSS to test the Neutrino-4 claim of observation of sterile neutrinos and to scrutinize an even larger fraction of the sterile neutrino parameter space preferred by the recent BEST results. The dependence of the cosmic muon flux on temperature and pressure is also presented.
ALICE has undergone a major upgrade in preparation for LHC Run 3 (2022-2025). The new Inner Tracking System is completely based on Monolithic Active Pixel Sensors, the Time Projection Chamber was equipped with GEM-based readout chambers, and the muon system was upgraded and extended by the Muon Forward Tracker. New trigger detectors were also installed to allow the clean identification of interactions. In addition, the readout of all detectors was upgraded to make use of the increased luminosity expected from the LHC. Furthermore, the computing infrastructure and software stack have been redesigned for continuous readout, including a synchronous reconstruction stage making use of 2000 GPUs to achieve the required computing performance. In this presentation, we will report on the installation of the upgraded detectors and computing farm and the first experience from operation with pp collisions.
During Long Shutdown 2, the ALICE experiment undertook major detector and software upgrades, bringing a paradigm shift in the operation and performance of the new detector.
Run 3 started at the end of October 2021 with the first colliding proton-proton beams, the so-called "pilot beam". On this occasion, the ALICE experiment successfully recorded pp collisions at 900 GeV, proving its readiness for future data taking campaigns. In addition to validating the data reconstruction and calibration procedure, the data collected were processed with the new offline analysis framework, pioneering the physics analyses for the whole of Run 3. In this contribution, we report on the pilot beam results, focusing particularly on the physics performance and discussing the repercussions in terms of the physics outcome of light-flavour analyses ($\pi$/K/p and V0s), compared to what was achieved during LHC Run 2.
The Large Hadron Collider (LHC) recently completed its Run-2 operation period (2015-2018), which delivered an integrated luminosity of 156 fb-1 at the centre-of-mass $pp$ collision energy of 13~TeV. This marked 10 years of successful operation by the ATLAS Semiconductor Tracker (SCT), which operated during Run-2 with instantaneous luminosity and pileup conditions that were far in excess of what the SCT was originally designed to meet. The first significant effects of radiation damage in the SCT were also observed during Run-2. The SCT operations, performance and radiation damage studies were published as a paper [1]. This talk summarises the operational experience, challenges and performance of the SCT during Run-2, and Run-3 operation prospects, with a focus on the impact and mitigation of radiation damage effects.
The tracking performance of the ATLAS detector relies critically on its 4-layer Pixel Detector. As the closest detector component to the interaction point, this detector is subjected to a significant amount of radiation over its lifetime. By the end of the LHC proton-proton collision Run 2 in 2018, the innermost layer IBL, consisting of planar and 3D pixel sensors, had received an integrated fluence of approximately $\Phi = 9 \times 10^{14}$ 1 MeV n$_{\mathrm{eq}}$/cm$^2$.
The ATLAS collaboration is continually evaluating the impact of radiation on the Pixel Detector. During the LHC Long Shutdown 2 (LS2), dedicated cosmic-ray data taking was carried out for this purpose. In this talk the key status and performance metrics of the ATLAS Pixel Detector are summarised, and the operational experience and requirements to ensure optimum data quality and data taking efficiency will be described, with special emphasis on the radiation damage experience. A quantitative analysis of charge collection, dE/dx, occupancy reduction with integrated luminosity, under-depletion effects and annealing effects will be presented and discussed, as well as the operational issues and mitigation techniques adopted during LHC Run 2 and those foreseen for Run 3.
The Compact Muon Solenoid (CMS) is a general purpose experiment to explore the physics of the TeV scale in $pp$ collisions provided by the CERN LHC. Muons constitute an important signature of new physics, and their detection, triggering, reconstruction and identification are guaranteed by various subdetectors using different detection systems: Drift Tubes (DT) and Resistive Plate Chambers (RPC) in the central region, and Cathode Strip Chambers (CSC) and RPC in the endcap. During Run 2 the higher instantaneous luminosity led to a substantial background in the muon system. In this contribution we will describe the various methods used to measure these backgrounds in the different muon subdetectors, and we will report the observed particle rates. The analysis is based on data collected in 2018 $pp$ collisions at $\sqrt{s} = 13$ TeV with instantaneous luminosities up to $2.2 \times 10^{34}$ cm$^{-2}$s$^{-1}$. Thorough understanding of the background rates provides the basis for the upgrade of the muon detectors for the High-Luminosity LHC, where the instantaneous luminosity will reach $5-7.5 \times 10^{34}$ cm$^{-2}$s$^{-1}$, resulting in 140-200 simultaneous $pp$ collisions. We will discuss in detail the origin and characteristics of the background introduced by the $pp$ collisions, analyze the response of the various detectors, and illustrate the dependence of the background on the instantaneous luminosity and the LHC fill scheme. We will show that it is possible to estimate the contribution from long-lived background rates separately from the promptly induced background. Finally, we will look forward to the expected background at the High-Luminosity LHC.
The LHCb experiment is in the commissioning phase of an ambitious upgrade project that will allow improved sensitivity to interesting beauty and charm decays with a combination of higher luminosity and the deployment of a purely software trigger. A key element of the trigger is a fast-tracking algorithm based on the vertex detector and a tracking system located in front of the LHCb magnet, the Upstream Tracker (UT). This silicon microstrip detector features finer granularity, larger coverage and a smaller material budget compared to the Tracker Turicensis it replaces. It comprises four layers of silicon sensors, mounted on both sides of carbon fiber structures (staves) that provide mechanical support and embedded CO2 cooling. The charge signals from the sensors are processed by novel front-end ASICs that feature fast signal processing, digitization, common mode subtraction and zero suppression to meet the requirement of real-time data processing. The qualification of the detector and associated electronics aims at achieving the desired speed and efficiency in the trigger algorithm. We will discuss the design of the detector as well as its current installation and commissioning status.
Lepton flavor violation in the charged lepton sector (cLFV) is expected to be unobservably small in the Standard Model (SM). On the other hand, many new physics theories predict rates of cLFV near the sensitivity of current experiments. Hence, this is a very sensitive probe for physics beyond the SM, and the evidence for such new physics would be unambiguous if a positive observation were made. The MEG II experiment is searching for the cLFV decay $\mu \to e \gamma$ with a sensitivity below $10^{-13}$ on its branching ratio, a factor 10 better than the phase-1 MEG experiment. The construction and commissioning of MEG II have been completed and the first physics data were collected in 2021. In this talk I will discuss the performance of the experiment, the status of the analysis of the 2021 data and the perspectives for the upcoming years. A recent result of the search for the $\mu \to e\gamma\gamma$ decay in the MEG dataset will also be reviewed, and the perspectives for similar exotic searches at MEG II will be briefly discussed.
The Mu3e experiment at the Paul Scherrer Institut searches for the charged lepton flavour violating decay $\mu^+\rightarrow e^+e^-e^+$. This decay mode is extremely suppressed in the Standard Model, such that any observation would be a clear signature of new physics at play. The experiment will be conducted in two phases. In Phase I, a single-event sensitivity of $2 \times 10^{-15}$ is projected to be reached using the Compact Muon Beamline at PSI. To reach the ultimate sensitivity of $10^{-16}$ in Phase II, an upgrade of the detector as well as a higher-intensity muon beamline will be required.
The detector system has to provide excellent tracking efficiency as well as momentum, vertex and time resolutions to reach the experimental goals. An unprecedentedly thin silicon pixel tracking detector using HV-MAPS, ultra-light services and a gaseous Helium cooling system is being constructed. It is complemented by timing detectors consisting of scintillating fibres and tiles. The full detector is placed inside a solenoidal magnetic field of 1 T.
A first run integrating several subdetector prototypes was successfully conducted at PSI in 2021, with another run being planned for this year. While the design of the final detector components is being completed, the Mu3e experiment is entering the production stage. Commissioning of the final Phase I detector is planned to start in 2023. In this talk, the status of the Mu3e experiment will be presented.
Precision measurements of flavour-violating processes are a sensitive tool to search for signals beyond the Standard Model (SM). In this context, the MEG II and Mu3e experiments at the Paul Scherrer Institut (PSI) search for the two muon decays $\mu^+ \to e^+\gamma$ and $\mu^+ \to e^+e^-e^+$, respectively. In addition to their main channels, both experiments are competitive in searching for more exotic processes, in which lepton flavour violation (LFV) occurs in the presence of an invisible axion-like particle (ALP), denoted by $X$. A suitable candidate is the two-body decay $\mu^+ \to e^+ X$, whose only signature is a monochromatic positron close to the kinematic endpoint of the $\mu^+ \to e^+ \nu_e \bar\nu_\mu$ background. Since the higher-order radiative corrections in this region are greatly enhanced by the emission of soft photons, the experimental hunt for such an elusive signal requires extremely accurate theoretical predictions.
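For orientation (a textbook two-body kinematics relation, not part of the abstract), the signal positron in $\mu^+ \to e^+ X$ is monochromatic with energy
\[
E_e = \frac{m_\mu^2 + m_e^2 - m_X^2}{2 m_\mu},
\]
which for $m_X \to 0$ approaches the Michel endpoint $(m_\mu^2 + m_e^2)/(2 m_\mu) \simeq 52.83$ MeV, so a light ALP sits directly under the steeply falling edge of the $\mu^+ \to e^+ \nu_e \bar\nu_\mu$ spectrum.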
This is one of the problems that led to the development of McMule, an acronym for Monte Carlo for MUons and other LEptons. McMule is a generic framework for the numerical computation of fully-differential QED corrections for low-energy processes involving leptons, based on the FKS$^2$ subtraction scheme. In addition to muon and tau decays, the code includes scattering processes such as $ee\to ee$, $ee\to \gamma\gamma$ and $ep \to ep$ with next-to-next-to-leading order (NNLO) accuracy, while $e\mu \to e\mu$ and $ee \to \mu\mu$ are currently in development at the same precision.
In this contribution, we focus on the implementation of $\mu \to e X$ and $\mu \to e \nu \bar\nu$ in the McMule framework. In both cases the muons are assumed to be polarised and the electron mass is kept at its physical value. The signal $\mu \to e X$ is computed using an effective field theory approach, including the QED corrections at next-to-leading order (NLO). The background $\mu \to e \nu \bar\nu$ includes weak corrections at NLO, hadronic contributions at LO, exact QED corrections at NNLO, and logarithmically approximated corrections at N$^3$LO and N$^4$LO. Going beyond fixed-order calculations, an analytical resummation of soft emissions is also included. This results in a theoretical error on the positron energy spectrum below $10^{-6}$, the smallest achieved so far for the polarised muon decay.
As a preliminary study, we assume the nominal performance of the MEG II and Mu3e experiments to estimate their sensitivity to $\mu \to e X$ for different masses and couplings of the ALP. The branching ratio of this hypothetical process has been limited to $5.8\cdot10^{-5}$ by the TWIST experiment. We show that MEG II can provide an independent measurement close to this limit, while Mu3e can improve it by up to three orders of magnitude. In this regard, the McMule predictions are currently being implemented in the experimental analysis codes for more detailed studies.
The low-background environment of electron-positron collisions, along with the large expected sample size and a hermetic detector, makes Belle II the premier experiment for studying tau-lepton physics. This talk presents recent world-leading physics results from Belle II searches for tau decays into a scalar non-SM particle and a lepton. Perspectives on tests of lepton (flavor) universality and other searches for non-SM physics in tau decays are also outlined.
We report a partial wave analysis of $\tau^{-} \to \pi^{-}\pi^{-}\pi^{+}\nu$ decays, in which the new resonance $\text{a}_1(1420)$ observed by the COMPASS experiment is expected to be accessible, and which also provides a detailed study of the $\text{a}_1(1260)$. In addition, we report searches for New Physics in $\tau$ decays, especially decays involving a heavy neutral lepton. We also cover recent searches for lepton-flavor-violating decays of the tau lepton and a measurement of its electric dipole moment. The results are based on the full data sample collected with the Belle detector at the KEKB asymmetric-energy $e^{+}e^{-}$ collider.
The spontaneous muonium-to-antimuonium conversion is one of the interesting charged lepton flavor violation processes. It serves as a clear indication of new physics and plays an important role in constraining the parameter space beyond the Standard Model. MACE is a proposed experiment to probe such a phenomenon and is expected to enhance the sensitivity to the conversion probability by more than two orders of magnitude compared with the current best upper limit, obtained by the PSI experiment two decades ago. Recent progress in the conceptual design study will be reported, focusing on the beamline requirements, high-efficiency muonium formation in the target, an optimized design of the magnetic spectrometer, and the tracking algorithm to discriminate signal from background.
At BESIII, the R value is measured at a total of 14 data points, with the corresponding c.m. energies ranging from 2.2324 to 3.6710 GeV. The statistical uncertainty of the measured R is less than 0.6%. Two different simulation models, LUARLW and a new hybrid generator, are used and give consistent detection efficiencies and initial-state-radiation corrections. An accuracy of better than 2.6% below 3.1 GeV and 3.0% above is achieved in the R values.
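For reference (the standard definition, not spelled out in the abstract), the R value is the ratio of bare cross sections
\[
R(s) = \frac{\sigma^{0}(e^+e^- \to \mathrm{hadrons})}{\sigma^{0}(e^+e^- \to \mu^+\mu^-)}, \qquad \sigma^{0}(e^+e^- \to \mu^+\mu^-) = \frac{4\pi\alpha^2}{3s},
\]
where the measured hadronic cross section is corrected for initial-state radiation and vacuum polarization before forming the ratio.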
We present a study of the process $e^+e^- \to \pi^+\pi^-\pi^0$ at $BABAR$ using the initial-state radiation technique. The analysis is based on the full $BABAR$ data set, 469 fb$^{-1}$, recorded at and near the $\Upsilon(4{\mathrm{S}})$ resonance. From the fit to the measured $3\pi$ mass spectrum we determine the products $\Gamma(V\to e^+e^-){\cal{B}}(V\to 3\pi)$ for the omega and phi resonances, and ${\cal{B}}(\rho\to 3\pi)$. The latter isospin-breaking decay is observed with $6\sigma$ significance. The $e^+e^- \to \pi^+\pi^-\pi^0$ cross section is measured from 0.62 GeV to 3.5 GeV. The measured cross section is used to calculate the leading-order hadronic contribution to the muon magnetic anomaly from this exclusive final state with improved accuracy.
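The quoted hadronic contribution follows from the standard dispersive (time-like) master integral, reproduced here for context under the usual conventions:
\[
a_\mu^{\mathrm{HVP,LO}} = \frac{1}{4\pi^3} \int_{s_{\mathrm{th}}}^{\infty} ds\, K(s)\, \sigma^{0}_{e^+e^-\to\mathrm{had}}(s),
\]
with $K(s)$ the known QED kernel, behaving roughly as $m_\mu^2/(3s)$ at large $s$; restricting the integrand to the measured $\pi^+\pi^-\pi^0$ cross section yields the $3\pi$ contribution.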
We present new lattice results of the ETM Collaboration, obtained from extensive simulations of lattice QCD with dynamical up, down, strange and charm quarks at physical mass values, different volumes and lattice spacings, concerning the SM prediction for the so-called intermediate window (W) and short-distance (SD) contributions to the leading order hadronic vacuum polarization (LOHVP) term of the muon anomalous magnetic moment, $a_\mu$. Results for $a_{\mu,LOHVP}^W$ and $a_{\mu,LOHVP}^{SD}$, besides representing a step forward towards a complete lattice computation of $a_{\mu,LOHVP}$ and a useful benchmark among lattice groups, are compared here to their dispersive counterparts based on experimental data for $e^+e^-$ into hadrons. The comparison confirms the tension in $a_{\mu,LOHVP}^W$, already noted in 2020 by the BMW Collaboration, while showing no tension in $a_{\mu,LOHVP}^{SD}$.
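For context, the window observables referred to above are conventionally defined (in the RBC/UKQCD convention, which we assume here) by weighting the Euclidean-time vector correlator $G(t)$ with smooth step functions,
\[
a_{\mu}^{W} = \left(\frac{\alpha}{\pi}\right)^{2} \int_{0}^{\infty} dt\, \widetilde{K}(t)\, G(t) \left[\Theta(t,t_0,\Delta) - \Theta(t,t_1,\Delta)\right], \qquad \Theta(t,t',\Delta) = \frac{1}{2}\left[1 + \tanh\frac{t-t'}{\Delta}\right],
\]
with $t_0 = 0.4$ fm, $t_1 = 1.0$ fm and $\Delta = 0.15$ fm for the intermediate window; the short-distance contribution corresponds to the complementary weight below $t_0$.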
The anomalous magnetic moment of the muon $a_\mu = (g-2)_\mu/2$ has been measured at the Brookhaven National Laboratory in 2001 and recently at the Fermilab Muon $g - 2$ Experiment. The results deviate by 4.2 $\sigma$ from the Standard Model predictions, where the most dominant source of theoretical error comes from the Hadronic Leading Order (HLO) contribution $a_\mu^{\mathrm{HLO}}$. MUonE is a proposed experiment at CERN whose purpose is to provide a new and independent determination of $a_\mu^{\mathrm{HLO}}$ via elastic muon-electron scattering at low momentum transfer. To achieve a precision that is comparable to the standard timelike estimation of $a_\mu^{\mathrm{HLO}}$, the experiment must reach an accuracy of about 10 parts per million on the differential cross section. This requires a similar level of accuracy also from the theoretical point of view: a precise calculation of the muon-electron scattering cross section with all the relevant radiative corrections as well as quantitative estimates of all possible background processes are needed. In this talk the theoretical formulation for the NNLO photonic corrections as well as NNLO real and virtual lepton pair contributions are described and numerical results obtained with a Monte Carlo event generator are presented. These contributions are crucial to reach the precision aim of MUonE.
The latest measurement of the muon g-2 announced at Fermilab exhibits a 4.2$\sigma$ discrepancy from the currently accepted Standard Model prediction. The main source of uncertainty on the theoretical value is represented by the leading order hadronic contribution $a_{\mu}^{HLO}$, which is traditionally determined through a data-driven dispersive approach. A recent calculation of $a_{\mu}^{HLO}$ based on lattice QCD is in tension with the dispersive evaluation, and reduces the discrepancy between theory and experiment to 1.5$\sigma$. An independent evaluation of $a_{\mu}^{HLO}$ is therefore required to solve this tension and consolidate the theoretical prediction.
The MUonE experiment proposes a novel approach to determine $a_{\mu}^{HLO}$ by measuring the running of the electromagnetic coupling constant in the space-like region, via $\mu-e$ elastic scattering. The measurement will be performed by scattering a 160 GeV muon beam, currently available at CERN's North Area, on the atomic electrons of a low-Z target. A Test Run on a reduced detector is planned to validate this proposal. The status of the experiment in view of the Test Run and the future plans will be presented.
The leading-order (LO) hadronic vacuum polarization (HVP) contribution to the muon $g$-$2$, $a_{\mu}^{\rm HVP}(\rm LO)$, is traditionally computed via dispersive "time-like" integrals using measurements of the hadronic production cross-section in $e^{-}e^{+}$ annihilations. An alternative method is provided by lattice QCD. At LO, simple "space-like" formulas are well-known and form the basis for the lattice QCD calculation as well as for the determination of $a_{\mu}^{\rm HVP}(\rm LO)$ expected from the proposed MUonE experiment at CERN.
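The well-known space-like formula referred to here reads (in standard conventions)
\[
a_{\mu}^{\rm HVP}(\mathrm{LO}) = \frac{\alpha}{\pi} \int_{0}^{1} dx\, (1-x)\, \Delta\alpha_{\mathrm{had}}\!\left[t(x)\right], \qquad t(x) = -\frac{x^2 m_\mu^2}{1-x} < 0,
\]
so that a measurement of the hadronic running $\Delta\alpha_{\mathrm{had}}$ at space-like momentum transfer, as proposed by MUonE, determines the LO contribution directly.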
In this talk, we describe the results of a joint work of Elisa Balzani, Stefano Laporta, and Massimo Passera (see arXiv:2112.05704). We present simple exact analytic formulas for the space-like determination of $a_{\mu}^{\rm HVP}$ at next-to-leading order (NLO) and a mix of exact and approximated formulas at next-to-next-to-leading order (NNLO).
First, we review the well-known simple space-like integral formula for the LO contribution $a_{\mu}^{\rm HVP}(\rm LO)$. Then, we consider the NLO contributions. Using the results of Barbieri and Remiddi (1975), we obtain the exact time-like integral formulas for the HVP contributions of all three classes of NLO diagrams. In particular, we obtain the exact two-loop space-like kernel ${\kappa}^{(4)}(x)$. We also analyze some approximations to ${\kappa}^{(4)}(x)$ considered in the literature in the past, and show that these approximations give rise to an error of $\sim 6 \%$ of the total NLO contribution. This error can be eliminated using our exact expression for ${\kappa}^{(4)}(x)$.
Finally, we consider the NNLO contributions. For the diagrams composed of one- or two-loop vertices and one or more HVP insertions on the same photon line, we are able to obtain exact space-like integral formulas. For the diagrams containing genuine three-loop vertices, such as light-by-light diagrams, by using the large-$s$ expansions of the time-like kernels provided in the literature we are able to find very good approximations of the space-like kernels.
In the case of the problematic class of diagrams containing a two-loop vertex and HVP insertions on both photon lines, bidimensional space-like kernels are required. For this class of diagrams, we are able to obtain a good approximation of the bidimensional space-like kernel from the bidimensional expansions of the time-like one.
The results presented in this talk can be employed in lattice QCD calculations of $a_{\mu}^{\rm HVP}$, as well as in space-like determinations based on scattering data, like that expected from the proposed MUonE experiment at CERN. They will allow precise and consistent comparisons, through NNLO, with results already obtained via time-like data.
The energy dependence (running) of the strength of electromagnetic interactions, $\alpha$, and of the mixing with weak interactions, $\sin^2\theta_{\mathrm{W}}$, plays an important role in precision tests of the SM. The running of the former to the $Z$ pole is an input quantity for global electroweak fits, while the running of the mixing angle is susceptible to the effects of BSM physics, particularly at low energies.
We present a computation of the hadronic vacuum polarization (HVP) contribution to the running of these electroweak couplings at the non-perturbative level in lattice QCD, in the space-like regime up to $Q^2$ momentum transfers of $7\,\mathrm{GeV}^2$. This quantity is also closely related to the HVP contribution to the muon $g-2$.
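In the conventions we assume here (sign conventions differ in the literature), the running coupling is obtained from the subtracted HVP function $\bar\Pi(-Q^2) = \Pi(-Q^2) - \Pi(0)$ via
\[
\alpha(-Q^2) = \frac{\alpha}{1 - \Delta\alpha(-Q^2)}, \qquad \Delta\alpha_{\mathrm{had}}^{(5)}(-Q^2) = 4\pi\alpha\, \bar\Pi^{(5)}_{\mathrm{had}}(-Q^2),
\]
which is the quantity computed on the lattice at space-like $Q^2$ and compared below with $R$-ratio-based estimates.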
We observe a tension of up to $3.5$ standard deviations between our lattice results for $\Delta\alpha^{(5)}_{\mathrm{had}}(-Q^2)$ and estimates based on the $R$-ratio for $Q^2$ in the $3$ to $7\,\mathrm{GeV}^2$ range. The tension is, however, strongly diminished when translating our result to the $Z$ pole, by employing the Euclidean split technique and perturbative QCD, which yields $\Delta\alpha^{(5)}_{\mathrm{had}}(M_Z^2)=0.027\,73(15)$. This value agrees with results based on the $R$-ratio within the quoted uncertainties, and can be used as an alternative to the latter in global electroweak fits.
Moreover, the ability to perform an exact flavor decomposition allows us to present the most precise determination to date of the $\mathrm{SU}(3)$-flavor-suppressed HVP function that enters the running of $\sin^2\theta_{\mathrm{W}}$.
The Standard Model (SM) of particle physics is a very successful theory that can explain the fundamental particles and their interactions. However, there are theoretical and experimental motivations for studying physics beyond the Standard Model. I will discuss the possibility of probing beyond-Standard-Model physics through light particles like axions and light gauge bosons. We also obtain bounds on the properties of these particles from astrophysical observations. The constraints on ultralight axions are derived from indirect evidence of gravitational waves, birefringence phenomena, gravitational light bending, and Shapiro time delay. These ultralight axions can also be a promising candidate for dark matter. We also explore the contribution of ultralight gauge bosons to the orbital period loss of compact binary systems and to the perihelion precession of planets, and obtain constraints on the mass and coupling strength of these particles. Such observations can also constrain several particle physics models, as will be discussed.
Flavor-violating axion couplings can be in action before recombination, and they can fill the early universe with an additional radiation component. Working within a model-independent framework, we consider an effective field theory for the axion field and quantify axion production. Current cosmological data already exclude a fraction of the available parameter space, and the bounds will improve significantly with future CMB-S4 surveys. Remarkably, we find that future cosmological bounds will be comparable to, or even stronger than, the ones obtained in terrestrial laboratories.
The International Axion Observatory (IAXO) is a large-scale axion helioscope that will look for axions and axion-like particles (ALPs) produced in the Sun. It is conceived to reach a sensitivity to the axion-photon coupling in the range of 10$^{-12}$ GeV$^{-1}$.
On the way to IAXO, an intermediate experiment, BabyIAXO, is already in the construction phase. BabyIAXO will be important to test all IAXO subsystems (magnet, optics and detectors) and, at the same time, as a fully-fledged helioscope, will reach a sensitivity to the axion-photon coupling of 1.5 $\times$ 10$^{-11}$ GeV$^{-1}$ for masses up to 0.25 eV, covering a very interesting region of the parameter space.
Important milestones have been reached in the past years in the development of the different components of the experiment, such as low-background X-ray detectors and X-ray optics, as well as in the design of the large magnet hosting two 10 m long bores with a diameter of 0.7 m for axion-to-photon conversion. The design of the mechanical infrastructure, allowing the Sun to be tracked for half of each day, has also been defined.
We report on the recent characterization of the BabyIAXO subsystems and discuss how the achieved results compare to the requirements. We finally discuss the schedule for the construction of the BabyIAXO helioscope.
The axion, a hypothetical particle originally emerging from a proposed solution to the strong CP problem of particle physics, is one of the most favored candidates to address the dark matter puzzle. As part of the efforts within the Center for Axion and Precision Physics Research (CAPP) of the Institute for Basic Science (IBS), we are searching for axion dark matter using the haloscope method, sensitive to masses around 24.5 µeV at KSVZ sensitivity. A unique 8-cell cavity, used for the first time in a search for KSVZ axions, is cooled down to 40 mK within a background magnetic field of 8 T. The expected axion signal, resonating with the TM010-like mode of the cavity, is picked up by an antenna and transferred to the readout chain. Implementing a flux-pumped Josephson Parametric Amplifier with 20 dB gain at the first stage of amplification, the background, expressed as a system noise temperature, was estimated to be 450 mK, corresponding to 1.6 photons. In this talk, we present our results from the physics data taken since December 2021, covering approximately 100 MHz.
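As a consistency check of the quoted numbers (our arithmetic, not the collaboration's), an axion mass of 24.5 µeV corresponds to a resonance frequency
\[
\nu = \frac{m_a c^2}{h} \approx \frac{24.5\ \mu\mathrm{eV}}{4.136\times10^{-15}\ \mathrm{eV\,s}} \approx 5.9\ \mathrm{GHz},
\]
so a system noise temperature of $T_{\mathrm{sys}} = 450$ mK, i.e. $k_B T_{\mathrm{sys}} \approx 38.8\ \mu\mathrm{eV}$, indeed corresponds to a photon occupation $\bar n = k_B T_{\mathrm{sys}}/h\nu \approx 38.8/24.5 \approx 1.6$.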
The multiple-cell cavity design developed at IBS-CAPP was successfully demonstrated as an efficient approach for high-mass axion searches by conducting an axion experiment using a double-cell cavity. Using cavities with higher cell multiplicities, we are currently running parallel experiments for axion searches near 6 GHz and 7 GHz with KSVZ sensitivity, relying on dilution refrigerators with superconducting magnets and quantum-noise-limited amplifiers. We also design metamaterial cavities made of dielectric rods with a practical frequency tuning mechanism, which are expected to be suitable for even higher-mass axions. In addition, an attempt to develop a microwave photon detector based on Rydberg atom technology is being made in order to improve signal detection performance at high masses. We discuss the experimental strategies for high-mass axion searches at IBS-CAPP.
The BREAD Collaboration proposes an ambitious program of broadband searches for terahertz axion dark matter. Its experimental hallmark is a cylindrical metal barrel converting axions to photons that are focused by a parabolic reflector onto an ultralow-noise photosensor. Practically, this novel dish antenna geometry enables enclosure inside standard cryostats and high-field magnets. We present an overview of the BREAD conceptual design and science program, from the dark photon pilot planned at Fermilab to the large-scale experiment. BREAD is projected to open multiple decades of unexplored coupling sensitivity across the meV to eV mass range that has long eluded existing resonant-cavity haloscopes. Based on Phys. Rev. Lett. 128 (2022) 131801.
The MAgnetized Disc And Mirror Axion eXperiment (MADMAX) intends to search for dark matter axions in the mass range of 40 to 400 µeV, a range previously inaccessible to other experiments. This mass range is favored by models in which the Peccei-Quinn symmetry is broken after cosmic inflation. MADMAX will apply the concept of the dielectric haloscope: multiple movable dielectric disks in a strong magnetic field. The experiment will be located at DESY Hamburg and is currently entering its prototype phase.
In this presentation, the concept of MADMAX will be described, both with respect to the simulated sensitivity of the experiment and to laboratory-based setups demonstrating its feasibility. Results from a small-scale closed system will be presented, showing good agreement between simulation and measurements. The advanced design of the planned prototype of the experiment will also be discussed.
The latest results of searches for supersymmetry in hadronic and photonic final states with the CMS experiment will be presented. The analyses are based on the full dataset of proton-proton collisions collected during Run 2 of the LHC at a center-of-mass energy of 13 TeV.
Supersymmetry (SUSY) provides elegant solutions to several problems in the Standard Model, and searches for SUSY particles are an important component of the LHC physics program. Naturalness arguments for weak-scale supersymmetry favour supersymmetric partners of the gluons and third-generation quarks with masses light enough to be produced at the LHC. This talk will present the latest results of searches conducted by the ATLAS experiment targeting gluino and squark production, including stops and sbottoms, in a variety of decay modes. It covers both R-parity-conserving models, which predict dark matter candidates, and R-parity-violating models, which typically lead to high-multiplicity final states without large missing transverse momentum.
The direct pair production of the tau-lepton superpartner, the stau, is one of the most interesting channels in which to search for SUSY. First, the stau is, with high probability, the lightest of the scalar leptons. Second, the signature of stau pair-production events is one of the most difficult ones, yielding the 'worst', and hence most model-independent, scenario for the searches. The current model-independent stau limits come from analyses performed at LEP, but they suffer from the low energy of that facility. The LHC exclusion reach extends to higher masses for large mass differences, but only under strong model assumptions.
The ILC, a future electron-positron collider with energy up to 1 TeV, is ideally suited for SUSY searches. The capability of the ILC to determine exclusion/discovery limits for the stau in a model-independent way is shown in this contribution, together with an overview of the current state of the art. A detailed study of the 'worst' scenario for stau exclusion/discovery is presented, taking into account the effect of stau mixing on the stau production cross-section and the selection efficiency. For selected benchmarks, the prospects for measuring masses and polarised cross-sections will be shown. The studies were performed on events passed through the full detector simulation and reconstruction procedures of the International Large Detector (ILD) concept at the ILC. The simulation included all SM backgrounds, as well as the machine-induced ones.
Results from the CMS experiment are presented for searches for supersymmetric partners of the top and bottom quarks and of tau leptons. A wide range of final-state decays is considered in order to maximize sensitivity to different possible supersymmetric particle spectra. The searches use proton-proton collision data with an integrated luminosity of up to 138 fb$^{-1}$ recorded by the CMS detector at a center-of-mass energy of 13 TeV during LHC Run 2.
The latest results from combinations of multiple searches targeting the electroweak production of supersymmetric particles and top squarks will be presented. The analyses are based on the full dataset of proton-proton collisions collected during Run 2 of the LHC at a center-of-mass energy of 13 TeV.
The ALICE Collaboration has just completed a major detector upgrade which increases the data-taking rate capability by two orders of magnitude and will allow unprecedented data samples to be collected. For example, the analysis input for one month of Pb-Pb collisions amounts to about 5 PB. In order to enable analysis of such large data samples, the ALICE distributed infrastructure was revised and dedicated tools for Run 3 analysis were created. These are, firstly, the O2 analysis framework, implemented in C++, which builds on a multi-process architecture exchanging a flat data format through shared memory; and secondly, the Hyperloop train system for distributed analysis on the Grid and on dedicated analysis facilities, implemented in Java/JavaScript/React. These systems have been commissioned with converted Run 2 data and with the recent LHC pilot beam, and are ready for data analysis at the start of Run 3. The talk will discuss the requirements and the concepts used, providing details of the actual implementation. The status of operations, in particular with the LHC pilot beam, will also be discussed.
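To illustrate the flat-table-in-shared-memory idea described above, here is a minimal Python sketch (the real O2 framework is implemented in C++; the block name and array layout here are invented for illustration):

```python
# Illustrative sketch only: one process publishes a flat numeric table in
# shared memory and another attaches to it without copying or serializing.
import numpy as np
from multiprocessing import shared_memory

# "Producer": allocate a named block and fill it with a flat table.
shm = shared_memory.SharedMemory(create=True, size=8 * 1024, name="flat_table")
table = np.ndarray((1024,), dtype=np.float64, buffer=shm.buf)
table[:] = np.arange(1024.0)

# "Consumer": attach to the same block by name and read it in place.
shm2 = shared_memory.SharedMemory(name="flat_table")
view = np.ndarray((1024,), dtype=np.float64, buffer=shm2.buf)
assert view[42] == 42.0

del table, view  # drop the views before releasing the block
shm2.close()
shm.close()
shm.unlink()
```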
The sPHENIX detector is a next-generation experiment under construction at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. Starting next year it will collect high-statistics data sets from ultra-relativistic Au+Au, p+p and p+Au collisions. The readout is a combination of triggered readout for the calorimeters and streaming readout for the silicon pixel/strip detectors and the time projection chamber (TPC). sPHENIX does not employ higher-level triggers; only a small subset of events is built online, for monitoring purposes, which makes it unique among NP/HEP experiments. Events are assembled from multiple input streams as part of a multi-pass reconstruction which includes calibration and space-charge distortion corrections for the TPC data. This reconstruction will run near real time, within a fixed latency with respect to when the data were taken. To meet its physics requirements sPHENIX has developed state-of-the-art reconstruction software based on the "A Common Tracking Software" (ACTS) package, which was adapted to reconstruct the TPC data. The raw data will be processed at the Tier 0 for the RHIC experiments, the Scientific Data Computing Center (SDCC) at BNL. The Production and Distributed Analysis (PanDA) system was chosen as the workload management system to handle the complexities of our workflow.
In this talk the details of the data processing for the sPHENIX experiment will be described.
The intelligent Data Delivery Service (iDDS) has been developed to cope with the huge increase in computing and storage resource usage in the coming LHC data taking. It has been designed to intelligently orchestrate workflow and data management systems, decoupling data pre-processing, delivery, and primary processing in large-scale workflows. It is an experiment-agnostic service that has been deployed to serve data carousel (orchestrating efficient processing of tape-resident data), ML hyperparameter optimization, active learning, and other complex multi-stage workflows defined via DAG, CWL and other descriptions, including a growing number of analysis workflows. We will present the motivation for iDDS, its architecture, use cases and the status of its production use in ATLAS and the Rubin Observatory, together with plans for the future.
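As a toy illustration of the kind of multi-stage, DAG-defined workflow that iDDS orchestrates (the stage names below are invented; this is not the iDDS API), dependency-ordered execution can be sketched as:

```python
# Toy DAG scheduler: run each stage only once its dependencies are done.
from graphlib import TopologicalSorter

# Map each stage to the set of stages it depends on (all names invented).
workflow = {
    "deliver": {"pre_process"},   # data delivery needs pre-processing
    "process": {"deliver"},       # primary processing needs delivered data
    "hyperopt": {"process"},      # e.g. an ML hyperparameter-scan stage
}

ts = TopologicalSorter(workflow)
ts.prepare()
while ts.is_active():
    for stage in ts.get_ready():  # stages whose dependencies are satisfied
        print("running stage:", stage)
        ts.done(stage)            # mark finished, unlocking successors
```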
The Muon $g-2$ Experiment at Fermilab aims to measure the muon anomalous magnetic moment with the unprecedented precision of 140 parts-per-billion (ppb). In April 2021 the collaboration published the first measurement, based on the first year of data taking. The result confirmed the previous measurement at Brookhaven National Laboratory (BNL) and increased the long-standing tension with the Standard Model prediction to 4.2 $\sigma$. By July 2022 the experiment is expected to conclude the acquisition of positive-muon data, with a total of $\sim$18 times the statistics of the BNL experiment. A collaboration-wide effort is now in place to help produce the multi-petabyte data sets, a challenge typically faced by much bigger experiments. A quick production turnaround time is of critical importance in order to achieve the planned publication schedule. In this talk, I will describe the Muon $g-2$ production workflow, the former and current challenges, the tools involved, and the future prospects.
The LHCb experiment has undergone a comprehensive upgrade in preparation for data taking in 2022 and beyond. The offline computing model has been completely redesigned in order to process the much higher data volumes originating from the detector and to meet the associated demand for simulated samples of ever-increasing size. This contribution presents the evolution of the data-processing model, with a focus on the various applications that have been developed to prepare LHC Run 3 data for analysis, from centralised processing to user data analysis.
Cherenkov Differential counters with Achromatic Ring focus (CEDARs) in the COMPASS experiment beamline were designed to identify particles in limited-intensity beams with divergence below 65 µrad. However, in the 2018 data taking, a beam with a 15 times higher intensity and a divergence of up to 300 µrad was used, so the standard data analysis method could not be applied. A machine learning approach using neural networks was therefore developed and examined on multiple Monte Carlo simulations. Different types of network were tested and their configurations optimized using a genetic algorithm, with the best-performing model being integrated into the current data analysis software written in C++.
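As a sketch of the genetic-algorithm idea behind the network optimization (the gene layout, ranges and fitness function below are invented for illustration and are not the COMPASS implementation):

```python
# Toy genetic algorithm over network hyperparameters: keep the fittest,
# mutate them, repeat. The fitness is a stand-in for the figure of merit
# a real analysis would obtain by training on Monte Carlo.
import random

def random_genome():
    return {"layers": random.randint(1, 5),
            "units": random.choice([16, 32, 64, 128]),
            "lr": 10 ** random.uniform(-4, -2)}

def fitness(genome):
    # Placeholder figure of merit, peaking at 3 layers of 64 units.
    return -abs(genome["layers"] - 3) - abs(genome["units"] - 64) / 64

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))
    child[key] = random_genome()[key]  # re-draw one hyperparameter
    return child

population = [random_genome() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]  # selection: keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]
print("best configuration:", max(population, key=fitness))
```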
The Circular Electron Positron Collider (CEPC) has been proposed as a Higgs/Z factory in China. The baseline detector concept includes a tracking system whose main tracking device is a large-volume Time Projection Chamber (TPC) with high spatial precision (about 100 $\mu$m). The tracking system has demanding performance requirements but no power-pulsing, which leads to additional constraints on the detector specifications, especially when the machine operates at the Z-pole (91 GeV) with higher luminosity. The CEPC TPC requires a longitudinal (z) time resolution of about 100 ns, and the physics goals require a dE/dx resolution good enough for particle identification, with cluster counting under consideration. A number of TPC R&D tasks remain, including full simulations of the TPC performance in this background environment, further design of the low-power-consumption readout electronics, UV-laser calibration methods and cooling options.
In this talk, studies of a TPC prototype for the future circular e$^+$e$^-$ collider, performed with a Nd:YAG ultraviolet laser with a wavelength of 266 nm at the Institute of High Energy Physics, CAS (IHEP), are presented. A small prototype TPC with a drift length of 500 mm has been set up. The UV laser is coupled directly into the setup via mirrors rather than via a fiber. To keep the laser tracks stable, the setup has to be stabilized against vibrations: it is placed on an anti-vibration pneumatic optical platform, where a central spring, a pendulum bar and an auto-inflation system damp any vibration down to amplitudes of less than 1 $\mu$m. The TPC detector, gaseous chamber, high-voltage field cage, FEE electronics and DAQ with 1280 readout channels have been developed and measured at IHEP. Updated commissioning results on the low-power-consumption FEE ASIC readout (<5 mW/ch), the spatial resolution (<100 $\mu$m $\sigma_x$), the gain, the laser track reconstruction and the $dE/dx$ performance will be reported.
A large, worldwide community of physicists is working to realise an exceptional physics program of energy-frontier, electron-positron collisions with the International Linear Collider (ILC). The International Large Detector (ILD) is one of the proposed detector concepts at the ILC. The ILD tracking system consists of a Si vertex detector, forward tracking disks and a large-volume Time Projection Chamber (TPC) embedded in a 3.5 T solenoidal field. The TPC is designed to provide up to 220 three-dimensional points for continuous tracking with a single-hit resolution better than 100 µm in $r\varphi$ and about 1 mm in $z$. An extensive research and development program for a TPC has been carried out within the framework of the LCTPC collaboration. A Large Prototype TPC in a 1 T magnetic field, which can accommodate up to seven identical Micro-Pattern Gaseous Detector (MPGD) readout modules of the near-final design proposed for ILD, has been built as a demonstrator at the 5 GeV electron test beam at DESY. Three MPGD concepts are being developed for the TPC: Gas Electron Multiplier, Micromegas and GridPix. Successful test beam campaigns with the different technologies were carried out in 2018-2021. Fundamental parameters such as the transverse and longitudinal spatial resolution and the drift velocity have been measured. In parallel, a new gating device based on large-aperture GEMs has been produced and studied in the laboratory. Recent R&D has also led to the design of a Micromegas module with a monolithic 3D-printed cooling plate and two-phase CO2 cooling. In this talk, we will review the track reconstruction performance results and summarize the next steps towards the TPC construction for the ILD detector.
The $\mu$-RWELL is a single amplification stage resistive MPGD. The amplification stage is realized with a copper-clad polyimide foil patterned with a micro-well matrix coupled with the readout PCB through a DLC resistive film (10$\div$100 M$\Omega$/square).
The detector is proposed for several applications in HEP that require fast and efficient triggering in harsh environments (LHCb muon upgrade), low-mass fine tracking (FCC-ee, CepC, SCTF) or high-granularity imaging for hadron calorimeter applications (muon collider).
For the phase-2 upgrade of the LHCb experiment, proposed for LHC Run 5, the excellent performance of the current muon detector will need to be maintained at a pile-up level 40 times that experienced during Run 2. The requirements are challenging for the innermost regions of the muon stations, where detectors with a rate capability of a few MHz/cm$^2$, capable of withstanding an integrated charge of up to $\sim$10 C/cm$^2$, are needed.
In this framework an intense optimization program for the $\mu$-RWELL has been launched in the last year, together with a technology transfer to industry operating in the PCB field.
In order to fulfill these requirements, a new layout of the detector with a very dense current-evacuation grid for the DLC has been designed.
The detector, co-produced by the CERN-EP-DT-MPT Workshop and the ELTOS company, has been characterized in terms of rate capability by exploiting a high-intensity 5.9 keV X-ray gun with a spot size (10$\div$50 mm diameter) larger than the DLC grounding pitch. A rate capability exceeding 10 MHz/cm$^2$ has been achieved, in agreement with previous results obtained with m.i.p.s at PSI.
A long-term stability test is ongoing: a charge of about 100 mC/cm$^2$ has been integrated over a period of about 80 days. The test will continue with the goal of integrating about 1 C/cm$^2$ in one year, while a slice test of the detector is under preparation.
In view of the construction of a circular e+e- collider such as the FCC-ee, the RD_FCC scientific community is conceiving the IDEA apparatus: the Innovative Detector for Electron-positron Accelerator.
The detector is composed, from the innermost region going outward, of a central tracker, the magnet, the pre-shower, the calorimeter and the muon system.
The micro-Resistive WELL technology has been proposed for the realization of the pre-shower and the muon counters, with the detector parameters tuned to the different requirements of the two systems. In particular, the readout strip pitch will be 400 $\mu$m for the pre-shower and 1 mm for the muon stations. This is made possible in part by the industrialization of the production process, undertaken to make the technology cost-effective. A key role in this effort is played by the choice of a modular design for the apparatus systems.
Other requirements on the detectors are a spatial resolution of the order of 100 $\mu$m for the pre-shower and a reasonable total number of front-end channels for the muon system.
The optimization of the surface resistivity and of the strip pitch proceeded through the construction of two sets of prototypes, 5 detectors for the pre-shower and 3 for the muon system, with an active area of 16$\times$40 cm$^2$ and 40 cm long strips. For the pre-shower prototypes the resistive stage has a surface resistivity $\rho_S$ ranging from 10 to 200 M$\Omega$/square, while for the muon ones $\rho_S$ is about 20 M$\Omega$/square. All these detectors were exposed in October 2021 to a muon/pion beam at the CERN SPS. The very positive results obtained open the way to a completely new and competitive MPGD tracking device for high-energy physics experiments. Preliminary results from a long detector stability measurement will also be presented.
The future of HEP foresees upgrades of the current accelerators (HL-LHC) and the design of new high-energy, very high-intensity particle accelerators (FCC-ee/hh, EIC, Muon Collider). This opens new challenges to develop cost-effective, high-efficiency particle detectors operating in high-background, high-radiation environments.
An R&D project is ongoing to consolidate the MPGD technology for particle fluxes up to 10 MHz/cm$^2$ with a high-granularity, low-occupancy readout on pads with dimensions of the order of a few mm$^2$. The radiation hardness and the feasibility of building large-scale detectors for future high-energy physics experiments are also among the main objectives of the project.
Various prototypes of small-pad resistive Micromegas with different configurations and construction techniques have been built, tested and characterized. In particular, the resistive schemes explored are based either on embedded resistors or on uniform Diamond-Like Carbon (DLC) resistive foils. The most recent results in terms of rate capability, gain, and energy, spatial and time resolutions will be presented.
A long-term irradiation and longevity test was conducted on two bulk-Micromegas detectors with screen-printed resistive strips, operated with an Ar:CO$_2$ gas mixture, at the CERN GIF++ facility between 2015 and 2018.
The results have been presented at previous conferences and are being prepared for publication. In that test the detectors integrated a total charge of about 0.3 C/cm$^2$.
One of the detectors irradiated at GIF++ is currently undergoing an ageing test with X-rays from Cu, this time with an Ar:CO$_2$:iC$_4$H$_{10}$ mixture, to study the effect of hydrocarbons such as isobutane on the detector longevity.
The resistive Micromegas under test has so far accumulated a total charge exceeding 10 C/cm$^2$, corresponding to about 100 years of equivalent irradiation of muon detectors in the High-Luminosity era of the LHC and to more than 20 years at future colliders such as the FCC-hh.
The detector is continuously irradiated with an X-ray beam of variable intensity, and the current, the gas temperature and humidity, as well as the pressure and the environmental parameters, are continuously measured and recorded. A second detector with identical construction and characteristics, operated at the same voltages and with the same gas but not irradiated, is used as a reference chamber. Charge spectra with an $^{55}$Fe source are acquired at regular intervals for both detectors to monitor the evolution of the gain and energy resolution.
The test is still ongoing and will be followed by a thorough inspection of the detector at the end of the irradiation period.
The talk will describe the experimental setup and test operation, and will focus on the main results and their interpretation. In particular, we observe that the energy resolution stays largely unchanged, while the gain slowly decreases as the accumulated charge increases, an effect never measured before in resistive Micromegas.
A special readout chain was developed for the data acquisition of an innovative cylindrical gas-electron multiplier (CGEM) [1], which is being built to replace the inner drift chamber of the BESIII [2] experiment.
The whole system [3] was designed with modularity, versatility and scalability in mind and can be used to test other innovative micro-pattern gaseous detectors.
Signals from the detector strips are processed by TIGER, a custom 64-channel ASIC that provides an analog charge readout via a fully digital output. TIGER continuously transmits over-threshold data in triggerless mode, has a linear charge readout up to about 50 fC, less than 3 ns jitter, and a power dissipation below 12 mW per channel.
The ASICs can operate in Sample-and-Hold (SH) and Time-over-Threshold (ToT) modes. In SH mode, the output of the discriminator connected to the fast shaper triggers the control logic, which generates a sampling pulse with a delay suitable to capture the slow-shaper output around its maximum. The delay between the trigger and the sampling pulse can be fine-tuned. In ToT mode, the leading and trailing edges of the discriminator output are sensed by the TDCs, and the charge is derived from the measured pulse duration. The ToT readout allows the charge sensitivity to be extended beyond the saturation point of the front-end amplifier. In principle, either discriminator can be chosen, although the one following the fast shaper provides more accurate timing information and is therefore the default choice.
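The ToT principle can be summarized in a small sketch: the charge is recovered from the pulse duration through a calibration curve (the near-linear model and the constants below are illustrative, not the measured TIGER response):

```python
# Hedged sketch of Time-over-Threshold charge reconstruction: the charge
# is inferred from how long the shaped pulse stays above threshold.
def tot_to_charge(tot_ns, slope_fc_per_ns=0.4, offset_fc=2.0):
    """Invert an assumed near-linear calibration Q = a*ToT + b."""
    return slope_fc_per_ns * tot_ns + offset_fc

leading_ns, trailing_ns = 112.0, 187.5  # discriminator edges from the TDCs
tot = trailing_ns - leading_ns          # pulse duration over threshold
print(f"ToT = {tot:.1f} ns -> Q ~ {tot_to_charge(tot):.1f} fC")
```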
An FPGA-based off-detector module (GEMROC) was developed specifically to interface with TIGER. The module configures the ASICs and organizes the incoming data, assembling event packets when a trigger arrives. The GEMROC can operate in triggerless or triggered mode. In the first mode, the data received from the enabled TIGERs are merged and transmitted through the Ethernet output port using the UDP protocol. In the second mode, incoming data from each TIGER pair are stored in a circular latency buffer, organized in pages, until a trigger signal is received.
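The triggered-mode buffering can be pictured with a toy model (the buffer depth and matching window below are invented, not the GEMROC firmware values): hits are written continuously into a circular latency buffer, and a trigger selects only those inside the match window:

```python
# Toy circular latency buffer: hits are stored continuously; on a trigger,
# only hits inside the latency window are packed into an event.
from collections import deque

LATENCY_SLOTS = 128                    # buffer depth (assumed)
buffer = deque(maxlen=LATENCY_SLOTS)   # oldest entries overwritten when full

def store(hit_time, data):
    buffer.append((hit_time, data))

def on_trigger(trigger_time, min_lat=400, max_lat=500):
    lo, hi = trigger_time - max_lat, trigger_time - min_lat
    return [d for (t, d) in buffer if lo <= t <= hi]  # event packet

for t in range(0, 1000, 25):  # simulated stream of time-stamped hits
    store(t, f"hit@{t}")
print(on_trigger(1000))       # hits 500-600 ns before the trigger
```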
Control software was developed to characterize, debug, and test the system prior to installation. This software (Graphical User Frontend Interface, GUFI), written in Python, controls the electronic operations and acquisitions in real time. The software is coded with an object-oriented approach and structured in classes to be easily scalable and extensible. Data, along with environmental and detector metrics, are continuously collected in a database and queried via a GRAFANA [4] dashboard to ensure operational reliability and verify data quality.
Fast analysis software was also developed to provide rapid, real-time feedback on data collection. CIVETTA (Complete Interactive VErsatile Test Tool Analysis), written in Python, is fully parallelized at the subrun level to exploit all CPUs on the machine and integrates all the steps needed to obtain complete metrics of detector performance.
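The subrun-level parallelism can be sketched as follows (the analysis body and file layout are placeholders, not the CIVETTA code): since each subrun is independent, a process pool keeps all CPUs busy:

```python
# Minimal sketch of subrun-level parallel analysis with a process pool.
import os
from multiprocessing import Pool

def analyze_subrun(subrun_id):
    # Placeholder for decoding one subrun and extracting its metrics.
    return subrun_id, {"hits": 1000 + subrun_id}

if __name__ == "__main__":
    subruns = range(32)
    with Pool(os.cpu_count()) as pool:  # one worker per CPU
        results = dict(pool.map(analyze_subrun, subruns))
    print(f"processed {len(results)} subruns")
```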
The entire readout system was tested during a test beam at CERN in July 2021.
This presentation will introduce the TIGER/GEMROC system, with particular attention to the test beam results and an outlook on its use for other innovative micro-pattern detectors.
[1] M. Alexeev et al., Triple GEM performance in magnetic field, JINST 14, P08018, DOI:10.1088/1748-0221/14/08/P08018 (2019).
[2] M. Ablikim et al., Design and construction of the BESIII detector, NIM A 614, 345 (2010).
[3] A. Amoroso et al., The CGEM-IT readout chain, JINST 16, P08065, DOI:10.1088/1748-0221/16/08/P08065 (2021).
[4] Grafana Labs, Grafana, https://grafana.com/
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and public engagement. The primary methodology adopted by IPPOG employs the direct involvement of scientists active in current research with education and communication specialists, in order to effectively develop and share best practices in outreach. IPPOG member activities include the International Particle Physics Masterclass programme, the International Day of Women and Girls in Science, Worldwide Data Day, International Muon Week and International Cosmic Day, as well as participation in activities ranging from public talks, festivals, exhibitions, teacher training and student competitions to open days at local institutions. These independent activities, often carried out in a variety of languages for audiences with a variety of backgrounds, all serve to gain the public's trust and to improve worldwide understanding and support of science. We present our vision of IPPOG as a key component of particle physics, fundamental research and critical thought around the world.
Since 1984 the Italian groups of the Istituto Nazionale di Fisica Nucleare (INFN) and Italian universities, collaborating with the DOE laboratory Fermilab (US), have been running a two-month summer training program for Italian university students. While in the first year the program involved only four physics students of the University of Pisa, in the following years it was extended to engineering students. This extension was very successful, and the engineering students have since been extremely well received by the Fermilab Technical, Accelerator and Scientific Computing Division groups. Over the many years of its existence, this program has proven to be the most effective way to engage new students in Fermilab endeavors. Many students have extended their collaboration with Fermilab through their Master's theses and PhDs.
Since 2004 the program has been supported in part by DOE in the frame of an exchange agreement with INFN. An additional agreement for sharing the support of engineers from the School of Advanced Studies of S. Anna (SSSA) of Pisa was established in 2007 between SSSA and Fermilab; within this program four SSSA students are supported each year. Over its 35 years of history, the program has grown in scope and size and has involved more than 500 Italian students from more than 20 Italian universities. Since the program does not exclude appropriately selected non-Italian students, a handful of students from European and non-European universities have also been accepted over the years.
Each intern is supervised by a Fermilab mentor responsible for carrying out the training program. Training programs have spanned Tevatron, CMS, Muon (g-2), Mu2e and SBN design and experimental data analysis; development of particle detectors (silicon trackers, calorimeters, drift chambers, neutrino and dark matter detectors); design of electronic and accelerator components; development of infrastructure and software for tera-data handling; research on superconducting elements and on accelerating cavities; and the theory of particle accelerators.
Since 2010, within an extended program supported by the Italian Space Agency and the Italian National Institute of Astrophysics, a total of 30 students in physics, astrophysics and engineering have been hosted for two months each summer at US space-science research institutes and laboratories.
In 2015 the University of Pisa included these programs within its own educational programs. Accordingly, Summer School students are enrolled at the University of Pisa for the duration of the internship and are identified and insured as such. At the end of the internship the students are required to write summary reports on their achievements. After a positive evaluation by a University Examining Board, interns are awarded 6 ECTS credits for their Diploma Supplement.
Information on student recruiting methods, on the training programs of recent years and on the final evaluation process of the students at Fermilab and at the University of Pisa will be given in the presentation.
Mexico has participated in CMS masterclasses since 2014. These masterclasses have grown since then, albeit with some interruption due to the recent pandemic. The authors discuss the experience of CMS masterclasses in Mexico, the practices that enabled them to thrive, and the U.S.-Mexico collaboration that has aided their success. They also examine how students and teachers have benefited, along with lessons learned. Finally, they chart the path for CMS masterclasses going forward.
Particle physics is a field which is full of striking visuals: from Feynman diagrams to event displays, there is no shortage of colourful high-contrast shapes and designs to capture the imagination. Can these visuals be used to reach out to budding scientists from their very earliest days? This talk will describe the development of the "Particle Physics for Babies" children's book, a concept imagined by a first-time dad/physicist who wanted to find a way to communicate his physics passion to his newborn daughter. The book was co-developed with the ATLAS outreach team and the International Particle Physics Outreach Group, and has grown to include downloadable captions which allow parents to explain the images in the book to their children or grandchildren with confidence, allowing science to be part of a new child's universe from day 0.
INFN Kids is a national outreach initiative carried out within INFN's third-mission activities. It is dedicated to children of primary and middle school age in both formal and informal contexts. It aims to create engagement and stimulate curiosity about physics in young children, not only in schools, with the support and mediation of their teachers, but also in everyday life. Different approaches and media are thus exploited: hands-on activities in classrooms and at festivals, videos and storytelling on social media, games and comics.
Regarding this latter aspect, we have developed a series of comics with two kids, Leo and Alice, as the leading characters. Leo and Alice are very curious and brave; they love technological challenges and adventures. In each of their stories, a common and familiar situation typically turns into an imaginary one when they meet Standard Model particles and discover physical processes. Here we present the first stories of Leo and Alice, discussing the idea behind their creation and their potential for engaging kids, even of younger ages.
The ATLAS Collaboration has developed a variety of printables for education and outreach activities. We present two ATLAS Colouring Books, the ATLAS Fact Sheets, the ATLAS Physics Cheat Sheets, and the ATLAS Activity Sheets. These materials are intended to cover key topics of the work done by the ATLAS Collaboration and the physics behind the experiment for a broad audience of all ages and levels of experience. In addition, there is ongoing work to translate these documents into different languages, with one of the colouring books already available in 18 languages. These printables complement the information found on all ATLAS digital channels and are particularly useful at outreach events and in the classroom. We present these resources, our experience in creating them, their use, the feedback received, and plans for the future.
In the last two years various existing public outreach activities in ALICE have been adapted to an online format; this includes, for example, the well-established particle physics masterclasses but also virtual visits to ALICE. In this context an online workshop took place in 2021 with the goal of designing a LEGO model of the ALICE detector in minifigure scale (ca. 1:40) and of motivating young people for a long-term online collaboration in a particle physics project. The design stage of the model took six months, with regular online design sessions accompanied by input on 3D construction, detector technology, the physics questions of ALICE, virtual ALICE visits, and particle physics masterclasses. This stage provided first-hand experience of the dynamics of working on different sub-projects in a research collaboration and resulted in a model with more than 16,000 parts that was assembled by the participants in one weekend at an in-person workshop. The experience gained during this real construction has been used further by the young designers to optimise the model. The full ALICE model instructions will be released soon for use in further workshops and exhibits. The model is an ideal addition to the various existing modules for public outreach in ALICE and to those provided in the framework of the International Particle Physics Outreach Group.
The exact computation of partition functions of four-dimensional theories with extended supersymmetry by means of localization techniques hinges on the existence of a Lagrangian description. On the one hand, it is known that in many cases such a description is only accurate in certain regions of moduli space, such as weak-coupling phases. On the other hand, there are also many examples of non-Lagrangian quantum field theories, which cannot be studied through localization. This raises the question of what replaces the definition of the partition function of a quantum field theory in the broader setting, and how such an object may be computed. I will discuss a geometric definition of instanton partition functions based on the notion of quantum curves associated to certain quantum field theories. I will argue that this definition encompasses the standard one at weak coupling, but also extends to strong coupling, where it is amenable to direct computation.
In this talk we discuss a large class of solvable lattice models based on the data of conformal field theory; such models can be constructed from any conformal field theory. The talk is based on the results of a work titled "The crossing multiplier for solvable lattice models". We consider the lattice models based on affine algebras described by Jimbo et al. for the affine algebras A, B, C, D, and by Kuniba et al. for G2. We find a general formula for the crossing multipliers of these models and show that these crossing multipliers are also given by the principally specialized characters of the model in question. We therefore conjecture that the crossing multipliers in this large class of solvable interaction-round-the-face lattice models are given by the characters of the conformal field theory on which they are based. We use this result to study the local state probabilities of these models and show that, in regime III, they are given by the branching rule.
If we start from certain functional relations as the definition of a quantum integrable theory, we can derive from them a linear integral equation. By introducing dynamical variables, this can be extended to an equation of Marchenko form. From the latter we then naturally derive a classical Lax pair problem. We exemplify our method by focusing on the massive version of the ODE/IM (Ordinary Differential Equations/Integrable Models) correspondence involving the classical sinh-Gordon (ShG) equation with many moduli/masses, which describes supersymmetric gauge theories and the $AdS_3$ strong coupling of scattering amplitudes/Wilson loops. Yet, we present it in a way which reveals its generality of application. In fact, we give some hints of how it works for spin chains.
While colour-kinematics duality and double copy are a well established paradigm at tree level, their loop level generalisation remained for a long time an unsolved problem. Lifting the on-shell, scattering amplitude-based description to an action-based approach, we show that a theory that exhibits tree level colour-kinematics duality can be reformulated in a way such that its loop integrands manifest colour-kinematics duality.
After a review of Batalin-Vilkovisky formalism and homotopy algebras, we discuss how these structures emerge in quantum field theory and gravity. We focus then on the application of these sophisticated mathematical tools to colour-kinematics duality and double copy, introducing an adequate notion of colour-kinematics factorisation.
In this talk, I will show a new connection between quantum integrable models and black-hole perturbation theory. After an introduction to quasinormal modes and their role in gravitational-wave observations, I will connect their mathematically precise definition with the integrability structures derived from the ordinary differential equation associated with the black-hole perturbation. More precisely, I will derive the full system of functional and nonlinear integral equations (Thermodynamic Bethe Ansatz) typical of quantum integrability and prove that the quasinormal modes satisfy different equivalent exact quantization conditions. As a consequence, a new, simple and effective method follows for numerically computing quasinormal modes, which I will compare with other methods. I will also give a mathematical explanation of the recently found connection between quasinormal modes and N=2 supersymmetric gauge theories, through the connection of the latter to quantum integrable models which we previously established. All this I will show for a generalization of extremal Reissner-Nordström (charged) black holes, but at the end I will explain how it should be possible to generalize the approach to many other black holes, branes, fuzzballs, etc., and thus provide a new effective tool for the study of quantum gravity and gravitational waves.
The renormalization group (RG) beta function describes the running of the renormalized coupling and connects the ultraviolet and infrared regimes of quantum field theories. Performing numerical lattice field theory simulations, we use gradient flow measurements to determine the RG $\beta$ function nonperturbatively for SU(3) gauge systems with $N_f$ = 2, 4 and 6 flavors in the fundamental representation. In addition we obtain the anomalous dimension as a function of the running coupling. Surprisingly, both the beta function and the anomalous dimension follow approximately the 1-loop perturbative predictions.
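For reference, the perturbative prediction against which such nonperturbative results are compared can be evaluated directly; a minimal sketch for SU(3) with $N_f$ fundamental flavors, using the universal 1- and 2-loop coefficients in the convention $\beta(g^2) = {\rm d}g^2/{\rm d}\ln\mu^2$:

```python
# Perturbative SU(3) beta function with N_f fundamental flavors, using the
# universal coefficients b0 = 11 - 2*Nf/3 and b1 = 102 - 38*Nf/3, in the
# convention beta(g^2) = d g^2 / d ln(mu^2).
import math

def beta(g2, nf):
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    return (-(b0 / (16 * math.pi**2)) * g2**2
            - (b1 / (16 * math.pi**2) ** 2) * g2**3)

for nf in (2, 4, 6):
    print(f"N_f = {nf}: beta(g^2 = 1) = {beta(1.0, nf):+.5f}")
```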
We present a reinterpretation of existing results from the CMS Collaboration, specifically searches for light BSM Higgs pairs produced in the chain decay $pp\to H_{\rm SM}\to hh(AA)$ into a variety of final states, in the context of the CP-conserving 2-Higgs Doublet Model (2HDM) Type-I. Through this, we test the LHC sensitivity to a possible new signature, $pp\to H_{\rm SM}\to ZA\to ZZh$, with $ZZ\to jj\mu^+\mu^-$ and $h\to b\bar b$. We perform a systematic scan over the 2HDM Type-I parameter space, taking into account all available theoretical and experimental constraints, in order to find a region with a potentially visible signal. We investigate its significance through a full Monte Carlo simulation down to the detector level. We show that such a signal is a promising alternative channel to standard four-body searches for light BSM Higgses at the LHC with an integrated luminosity of $L = 300/$fb.
In this talk, I will present several possible anomaly-free implementations of the Branco-Grimus-Lavoura (BGL) model with two Higgs doublets and one singlet scalar. The model also includes three generations of massive neutrinos that get their mass via a type-I seesaw mechanism. A particular anomaly-free realization, which we dub the $\nu$BGL-1 scenario, is subjected to a complete analysis, where valid regions of the parameter space are identified, taking into account existing electroweak precision, Higgs and flavour physics observables.
With the discovery of the Higgs boson at the CERN Large Hadron Collider (LHC), the particle spectrum of the Standard Model (SM) is complete. The next target at the energy frontier will be to study the Higgs properties and to search for the next scale beyond the SM. Experimentally, the $H\to c \bar{c}$ channel would be extremely difficult to extract because of both the weak Yukawa coupling and the daunting SM di-jet background. We propose to test the charm-quark Yukawa coupling at the LHC and future hadron colliders with the Higgs boson decay to $J/\psi$ via charm-quark fragmentation. Using non-relativistic quantum chromodynamics (NRQCD), we study the Higgs decay channel $H \to c \ \bar{c} + J/\psi$ (or $\eta_c$), where both the color-singlet and color-octet contributions are considered. Our result opens another door to improving LHC determinations of the Higgs Yukawa couplings: the final state from this decay mode is quite distinctive, with $J/\psi\to e^+e^-,\, \mu^+\mu^-$, and the branching fraction is logarithmically enhanced by the charm-quark fragmentation mechanism.
In the quest for new physics (NP), given the lack of any direct evidence, the framework of effective field theory (EFT) provides an indirect and consistent way to parametrise NP effects in terms of higher-dimensional operators. Among the observables with the potential to reveal NP signatures, the Electroweak Precision Observables (EWPO) and those from Higgs production and decay play an important role. In this talk, I will discuss the modifications induced by the Standard Model Effective Field Theory (SMEFT) Warsaw-basis dimension-6 operators on different observables related to the electroweak sector. I will present the model-independent constraints obtained from a global fit performed using the EWPO, single- and di-Higgs data, as well as distributions from the di-boson production channels. In addition, I will discuss the constraints imposed on BSM extensions by the considered data via SMEFT matching.
In this presentation, I discuss collider signatures, with a focus on new physics scenarios that are predicted in various classes of multi-Higgs doublet models. A thorough analysis of one of these signatures is conducted in the context of the Large Hadron Collider, based on a topology with two charged leptons and four jets arising from first/second-generation chiral quarks. I discuss how the kinematics of the scalar fields can be used to efficiently separate the signal from the dominant backgrounds, and the implications for future runs of the LHC.
The ICARUS collaboration employed the 760-ton T600 detector in a successful three-year physics run at the underground LNGS laboratories, studying neutrino oscillations with the CNGS neutrino beam from CERN and searching for atmospheric neutrino interactions. ICARUS performed a sensitive search for LSND-like anomalous $\nu_e$ appearance in the CNGS beam, which contributed to constraining the allowed parameters to a narrow region around 1 eV$^2$, where all the experimental results can be coherently accommodated at 90% C.L. After a significant overhaul at CERN, the T600 detector has been installed at Fermilab. In 2020 cryogenic commissioning began with detector cool-down, liquid-argon filling and recirculation. ICARUS has started operations and is presently in its commissioning phase, collecting the first neutrino events from the Booster Neutrino Beam and the off-axis NuMI beam. The main goal of the first year of ICARUS data taking will then be the definitive verification of the recent claim by the NEUTRINO-4 short-baseline reactor experiment, both in the $\nu_\mu$ channel with the BNB and in the $\nu_e$ channel with NuMI. After the first year of operations, ICARUS will commence its search for evidence of a sterile neutrino jointly with the SBND near detector, within the Short Baseline Neutrino (SBN) program. The ICARUS exposure to the NuMI beam will also give the possibility of other physics studies, such as light dark matter searches and neutrino-argon cross-section measurements. The proposed contribution will address the ICARUS achievements, its status and plans for the new run at Fermilab, and the ongoing development of the analysis tools needed to fulfill its physics program.
The goal of the Short Baseline Neutrino (SBN) experiment at Fermilab is to confirm, or definitively rule out, the existence of sterile neutrinos at the eV$^2$ mass scale. SBN searches for both $\nu_e$ appearance and $\nu_\mu$ disappearance signals from the oscillation $\nu_\mu \rightarrow \nu_e$ in the Booster Neutrino Beamline. For this purpose neutrino interactions will be observed by two liquid-argon TPC detectors at near (100 m) and far (600 m) positions from the neutrino source. The Far Detector (ICARUS T600) is a high-granularity, uniform, self-triggering detector with 3D imaging and calorimetric capabilities, allowing the reconstruction of ionizing events with complex topology. ICARUS T600 is located at shallow depth; therefore, in order to mitigate the background from cosmic muons, a Cosmic Ray Tagger (CRT) system ensuring 4$\pi$ detector coverage was installed and integrated into the experiment's data acquisition. The CRT system aims at tagging particles with a time resolution of a few ns during the 1 ms drift-time window of the TPC, to disentangle cosmic rays from tracks that originated in an interaction inside the detector. In this talk an overview of the CRT system, its role as a tagger and its performance will be presented.
STEREO is a segmented, Gd-loaded liquid scintillator calorimeter that studied anti-neutrinos produced by the compact, highly $^{235}$U-enriched reactor core of the Institut Laue-Langevin in Grenoble (France). The experiment ran from 2016 to 2020 and was designed to test the light sterile neutrino explanation of the Reactor Antineutrino Anomaly (RAA) by comparing the neutrino energy spectra recorded by its six detector cells, located between 9 and 11 m away from the centre of the reactor core.
In this talk we present results of the search for short-baseline oscillations driven by a sterile state using STEREO's full dataset. We exclude a large fraction of the RAA-favoured region in the $\Delta m^2_{41}$-$\sin^2(2\theta_{ee})$ parameter space, including the RAA best-fit point. We also discuss our latest and most precise measurement of the $^{235}$U reactor antineutrino energy spectrum. We confirm other experiments' findings and observe at high confidence level ($> 4 \sigma$) an excess around 5 MeV when comparing to a normalised Huber-Mueller prediction. Finally, we also give our latest measurement of the reactor antineutrino flux, among the world's most precise for a HEU reactor.
The SoLid experiment is currently taking physics data close to the BR2 reactor core (SCK·CEN, Belgium), exploring very short baseline antineutrino oscillations. It aims to provide a unique and complementary test of the reactor antineutrino anomaly by measuring both the antineutrino rate and the energy spectrum.
The 1.6-ton detector uses an innovative antineutrino detection technique based on a highly segmented target volume made of PVT cubes and LiF:ZnS phosphor screens. The combination of scintillator signals provides a unique signature in space and time to localise and identify the products of inverse beta decay, in order to cope with the high-background environment imposed by operating at less than 10 meters from the reactor core without significant overburden.
In this contribution we will discuss the technology choices that were made to construct the SoLid experiment, the experience gained from its commissioning and calibration, and the detector performance characteristics during three years of non-stop operation. These years of detailed detector characterisation now allow the use of more sophisticated reconstruction methods that better take into account the detector's specificities. They will be presented alongside a new calibration procedure, in which the reconstruction of muons allows all relevant detector parameters to be measured, and in which the energy reconstruction and the energy scale are constrained at all relevant scales by a set of calibration sources and control samples from real data (BiPo and $^{12}$B), up to high energies with muons. The phase II upgrade of the SoLid experiment will also be presented.
The liquid-argon time projection chamber (LArTPC) is a scalable tracking calorimeter that features rich event-topology information. It provides the core detector technology for many current and next-generation large-scale neutrino experiments, e.g., DUNE and the SBN program. For neutrino experiments, the LArTPC faces many challenges in both hardware and software to achieve its optimum performance. On the software side, the main challenge is two-fold: first, deep domain knowledge still needs to be accumulated; second, the number of degrees of freedom per event is high, due to the large detector scale and the uncertainties in the initial neutrino-argon interactions. With LArTPC R&D as one of its main goals, MicroBooNE has made major advances in building LArTPC reconstruction paradigms. Multiple fully-automated event reconstruction paradigms have been established. With the publication of the initial results of the search for an electron-neutrino low-energy anomaly, the effectiveness of these reconstruction paradigms has been validated with real experimental data. This talk presents the Wire-Cell LArTPC reconstruction paradigm, with particular highlights on how conventional and machine learning algorithms benefit from each other and fit different tasks.
In the canonical seesaw framework flavor mixing and CP violation in weak charged-current interactions of {\it light} and {\it heavy} Majorana neutrinos are correlated with each other and described respectively by the $3\times 3$ matrices $U$ and $R$. We show that the very possibility of $\big|U^{}_{\mu i}\big| = \big|U^{}_{\tau i}\big|$ (for $i = 1, 2, 3$), which is strongly indicated by current neutrino oscillation data, automatically leads to a novel prediction $\big|R^{}_{\mu i}\big| = \big|R^{}_{\tau i}\big|$ (for $i = 1, 2, 3$). We prove that behind these two sets of equalities and the experimental evidence for leptonic CP violation lies a minimal flavor symmetry --- the overall neutrino mass term keeps invariant when the left-handed neutrino fields transform as $\nu^{}_{e \rm L} \to (\nu^{}_{e \rm L})^c$, $\nu^{}_{\mu \rm L} \to (\nu^{}_{\tau \rm L})^c$, $\nu^{}_{\tau \rm L} \to (\nu^{}_{\mu \rm L})^c$ and the right-handed neutrino fields undergo an arbitrary unitary CP transformation. Such a generalized $\mu$-$\tau$ reflection symmetry can greatly help constrain the flavor textures of active and sterile neutrinos and hence enhance predictability and testability of the seesaw mechanism.
In Run 3 the LHCb experiment operates at an instantaneous luminosity a factor of five higher than in the previous runs, with sensitive parts of the upgraded detector as close as 5 mm to the beam. Hence, radiation and background levels must be carefully monitored to protect the experiment from effects ranging from poor data quality to instantaneous damage. To this end, LHCb is equipped with a Radiation Monitoring System (RMS) and a Beam Conditions Monitor (BCM). The RMS is a system of metal-foil detectors dedicated to monitoring the radiation load and background stability, while the BCM continuously provides the beam permit to the LHC depending on the rate of losses measured every 40 µs by two stations of polycrystalline diamond sensors on either side of the interaction point. In this talk, the hardware and software developments of the BCM and RMS in preparation for Run 3 data taking, as well as their performance during the first weeks of operation in the new environment, are presented.
Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high-resolution electron and photon energy measurements. Excellent energy resolution is crucial for studies of Higgs boson decays with electromagnetic particles in the final state, as well as for searches for very high mass resonances decaying to energetic photons or electrons. The CMS electromagnetic calorimeter (ECAL) is a fundamental component of these analyses, and its energy resolution is central to the Higgs boson mass measurement. It also provides a measurement of the electromagnetic component of jets and contributes to the measurement of calorimeter energy sums, both of which are important for a wide range of CMS physics analyses.
Recently the energy response of the calorimeter has been precisely calibrated exploiting the full Run 2 (2015-18) dataset, and this calibration has been used for the legacy reprocessing of the data. A dedicated calibration of each detector channel has been performed with physics events, exploiting electrons from W and Z boson decays, photons from pi0/eta decays, and the azimuthally symmetric energy distribution of minimum-bias events. This talk presents the calibration strategies that have been implemented and the improved ECAL performance achieved with the ultimate calibration of the Run 2 data, in terms of energy-scale stability and energy resolution. The calibration plans currently being developed to achieve and maintain optimum performance during LHC Run 3 (2022-25) will also be discussed.
The Compact Muon Solenoid (CMS) detector is one of the two multi-purpose experiments at the Large Hadron Collider (LHC) and has a broad physics program. Many aspects of this program depend on our ability to trigger on, reconstruct and identify events with final-state electrons, positrons, and photons with excellent efficiency and high resolution.
In this talk we present the full process of electron and photon reconstruction in CMS, starting from tracker hits and energy deposits in the electromagnetic calorimeter; the method used to achieve the ultimate precision in the Run 2 energy measurements; the trigger and identification strategies (based both on a cut-based approach and on multivariate analyses) used to discriminate prompt electrons and photons from background; and the methods used to estimate the associated systematic uncertainties. Finally, the performance on benchmark channels will be shown, together with prospects for Run 3.
The ability to identify jets containing b-hadrons (b-jets) is of essential importance for the scientific program of the ATLAS experiment. Cutting-edge machine learning techniques underpin the design of the algorithms used to identify b-jets. Their performance is measured thoroughly in data, for each jet flavour, and used to correct the simulation. The scope of the algorithms and calibrations is also expanding to cover more energetic and boosted regimes. In this talk, a summary of the recent developments is given. It presents the design and performance of the state-of-the-art ATLAS flavour tagging algorithm, which will be part of the ATLAS Run 3 baseline b-tagger. In addition, new calibration results are shown, including the light-jet mis-tag rate calibration using Z+jets events and the b-tagging efficiency calibration using multijet events.
The CMS-HF calorimeter uses quartz fibers as active elements to measure the energy of particles. Since the CMS-HF detector sits in a high-radiation area, radiation effects degrade the performance of the detector by gradually damaging the active elements. As a consequence, the loss of transparency in the fibers causes a gradual change in the calibration of the detector. Hence, the change in transparency has to be monitored during collisions so that the energy calibration can be corrected. The online radiation-damage monitoring system does this by measuring the ratio of the direct and reflected light pulses in a long fiber in the detector. The existing system was upgraded and commissioned during the last months of the Run II period. In this presentation, the results of the commissioning will be shown and, using these results, possible ways of implementing the system during Run III, especially the implications of the complex behavior of the quartz fibers, will be discussed.
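The ratio method admits a simple reading: if the reflected pulse traverses the monitored fiber length twice while the direct pulse does not, the measured ratio scales with the squared transmission, so the relative transparency follows from a square root of the normalized ratio. A minimal sketch under that assumed geometry (the actual CMS-HF normalization may differ):

```python
# Hedged sketch: reflected/direct ratio ~ T^2 for a doubly traversed fiber,
# so the relative transparency is sqrt(ratio / reference_ratio).
import math

def relative_transparency(ratio_now, ratio_reference):
    return math.sqrt(ratio_now / ratio_reference)

# Example: a ratio dropping to 81% of its reference value would imply ~90%
# remaining transparency over the monitored fiber length.
print(f"transparency = {relative_transparency(0.81, 1.00):.2f}")
```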
The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment. It provides essential information for reconstructing hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as an absorber and scintillating tiles as the active medium. The light produced by the tiles is transmitted by wavelength shifting fibers to photomultiplier tubes (PMTs). PMT signals are then digitized at 40 MHz and stored on the detector, and are only transferred off detector once the first level trigger acceptance has been confirmed (at a rate of 100 kHz at maximum). The readout is segmented into about 5000 cells (longitudinally and transversally), each being read out by two PMTs in parallel. A set of calibration systems is used to calibrate and monitor the stability and performance of each element of the readout chain during the data taking. The TileCal calibration system includes Cesium radioactive sources, laser, charge injection elements and an integrator-based readout system. Combined information from all systems allows monitoring and equalizing the calorimeter response at each stage of the signal production, from scintillation light to digitization.
A large sample of proton-proton collisions was used to study the detector's performance, including the influence of pile-up on the detector noise levels and on the time resolution. Cosmic-ray muons, high-momentum collision muons and isolated hadrons were used as probes to study the response of the calorimeter.
This presentation will cover the operation of TileCal, its performance during Run 2, and its readiness for LHC Run 3 after the extended LHC shutdown.
The muon anomaly, $a_\mu=(g_{\mu}-2)/2$, is a low-energy observable which can be both measured and computed to high precision, making it a sensitive test of the Standard Model and a probe for new physics. The current discrepancy between the Standard Model calculation from the Muon $g-2$ Theory Initiative [T. Aoyama et al., Phys. Rept. 887 (2020), 1-166] and the experimental value is $a_{\mu}^{exp}-a_{\mu}^{SM}=(251\pm59)\cdot10^{-11}$, corresponding to a significance of $4.2\,\sigma$.
The anomaly was measured with a precision of $0.54\,$ppm by the Brookhaven E821 experiment, and the E989 experiment at Fermilab aims for a four-fold improvement in precision to confirm or refute the discrepancy. In Spring 2021, E989 published its first result for $a_{\mu}$, with a precision of $0.46\,$ppm, from the 2018 data-taking campaign. The measurement of the anomalous muon spin precession frequency, $\omega_a$, is based on the arrival-time distribution of high-energy decay positrons observed by $24$ electromagnetic calorimeters placed around the inner circumference of the $g-2$ storage ring. This talk will present the status of the $\omega_a$ analysis performed on the datasets collected during Runs 2 and 3 (the 2019 and 2020 campaigns), with a preliminary analysis of the systematic uncertainties.
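For reference, the underlying relation can be sketched as follows (standard $g-2$ formalism, simplified to the magic momentum, where electric-field effects cancel to first order; this is textbook material rather than a description of the E989 analysis):
$$\omega_a \,=\, \omega_s - \omega_c \,=\, a_\mu\,\frac{e\,B}{m_\mu}\,,$$
so that $a_\mu$ follows from the measured precession frequency $\omega_a$ and the precisely mapped storage-ring field $B$ (in practice expressed through the shielded-proton precession frequency).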
Among the simplest new-physics explanations of the muon g-2 anomaly are scenarios with chirally enhanced contributions. The new particles can be very heavy, even beyond the reach of future colliders, and thus the confirmation of such explanations might rely only on indirect evidence. I will show that these models generically predict correlated signatures, including possible modifications of the muon couplings to the Z and W bosons, correlations with the deviation of $h \to \mu\mu$ from the SM prediction and with the muon EDM, and large rates for di-Higgs and tri-Higgs signals at a muon collider. In specific models some of these correlations are parameter-free, presenting unique signals that can be tested without directly producing the new particles.
The recent measurement of the muon g-2 at Fermilab confirms the previous Brookhaven result. The leading hadronic vacuum polarization (HVP) contribution to the muon g-2 is a crucial ingredient for establishing whether the Standard Model prediction differs from the experimental value. A recent lattice QCD result by the BMW collaboration shows a tension with the low-energy $e^+e^-\to$ hadrons data that are currently used to determine the HVP contribution. We refer to this tension as the new muon g-2 puzzle. In this work we consider the possibility that new physics contributes to the $e^+e^-\to$ hadrons cross-section. This scenario could, in principle, solve the new muon g-2 puzzle. However, we show that this solution is excluded by a number of experimental constraints.
The discrepancy between the Standard Model prediction for the muon anomalous magnetic moment and the experimental result is accompanied by other anomalies. A crucial input for the prediction is the hadronic vacuum polarization inferred from hadronic data. However, the two most accurate determinations, from KLOE and BaBar, disagree by almost 3 sigma. Additionally, the combined data-driven result also disagrees with the most precise lattice determination. We show that all these discrepancies could be accounted for by a new boson produced resonantly around the KLOE centre-of-mass energy and decaying promptly to lepton pairs in the final state. We then present a simple model that can reconcile the KLOE and BaBar results, the data-driven and lattice SM predictions, and eventually the predicted and measured values of the muon anomalous magnetic moment, while complying with all phenomenological constraints.
We reconsider kinetically mixed dark photons as an explanation of the $(g-2)_\mu$ anomaly. While fully visible and fully invisible dark photon decays are excluded, a semi-visible solution can still explain the discrepancy. We explicitly re-evaluate the constraints from B-factories and fixed-target experiments, namely BaBar and NA64, pointing to a solution in terms of dark-sector models featuring fast-decaying dark neutral leptons or co-annihilating dark matter candidates. Several of these models lead to upscattering signatures at neutrino experiments.
SUSY is still a viable solution to the muon $g-2$ anomaly. Focusing on its minimal version (MSSM), I provide an overview of its current status, including constraints from the LHC Run 2, expected limits from future experiments, and connections to the relic density of the lightest SUSY particle as the dark matter.
I will cover all four possible "simplified mass spectra" for the $g-2$ anomaly and discuss which models are attractive in terms of the relic density, which are disfavored by LHC results, which are difficult to probe experimentally, and what we should do for further investigation.
The differential cross section of proton-proton elastic scattering, $d\sigma/dt$, as a function of the magnitude of the four-momentum transfer squared, $|t|$, evolves in a consistent way with $\sqrt{s}$ at LHC energies, the curves for different center-of-mass energies being translated in the ($|t|$, $d\sigma/dt$) plane. This means that the cross sections obey a scaling law in the center-of-mass energy $\sqrt{s}$ and in $|t|$.
These features suggest there are hidden universal properties of elastic scattering. Based on these empirical observations, and taking inspiration from saturation models, we propose a simple scaling law for proton-proton elastic scattering. The differential cross sections measured by TOTEM at $\sqrt{s} = 2.76, 7, 8,$ and $13$ TeV fall on a universal curve when they are mapped to the scaling variables $d\sigma/d|t|\,(s/\text{TeV}^2)^{-0.305}$ versus $(s/\text{TeV}^2)^{0.065} (|t|/\text{GeV}^2)^{0.72}$. In addition, we explore the implications of this scaling law in the impact-parameter picture of the scattering amplitudes.
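As a minimal numerical illustration of this mapping (a sketch with placeholder data points, not TOTEM measurements; only the exponents are those quoted above):

    import numpy as np

    def to_scaling_variables(s_TeV2, t_GeV2, dsdt):
        """Map points (s, |t|, dsigma/d|t|) onto the proposed scaling curve.

        s_TeV2: s in TeV^2; t_GeV2: |t| in GeV^2; dsdt: dsigma/d|t|.
        Returns the scaling variables (x, y) defined in the text."""
        x = (s_TeV2 ** 0.065) * (t_GeV2 ** 0.72)
        y = dsdt * (s_TeV2 ** (-0.305))
        return x, y

    # points at sqrt(s) = 7 and 13 TeV (placeholder values) should fall
    # on a single universal curve after the mapping
    x7, y7 = to_scaling_variables(7.0**2, np.array([0.1, 0.4]), np.array([100.0, 1.0]))
    x13, y13 = to_scaling_variables(13.0**2, np.array([0.1, 0.4]), np.array([120.0, 1.3]))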
The elastic scattering of protons at 13 TeV is measured in a range of proton transverse momenta that gives access to the Coulomb-Nuclear-Interference region. The data were collected with dedicated special LHC optics with beta* = 2.5 km. The total cross section, as well as the rho-parameter, the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are measured and compared to various models and to results from other experiments.
Proton and neutron electric and magnetic form factors are the primary characteristics of their spatial structure and have been studied extensively over the past half-century. One of the recent focal points is their behavior at large values of the momentum transfer $Q^2$, where one expects to observe the transition from nonperturbative to perturbative QCD dynamics and to detect effects of quark orbital angular momenta and diquark correlations. Multiple experiments at JLab and elsewhere are focusing on the momentum region up to $Q^2=18$ GeV$^2$ for the proton and up to 14 GeV$^2$ for the neutron. A theoretical study of these form factors is possible using nonperturbative QCD on the lattice, thanks to a considerable increase in the efficiency of the techniques and in computing resources. I will report our recent lattice QCD calculations of the $G_E$ and $G_M$ nucleon form factors, performed with momenta up to $Q^2=12$ GeV$^2$, pion masses down to the almost-physical $m_\pi=170$ MeV, several lattice spacings down to $a=0.073$ fm, and high, $O(10^5)$, statistics. Specifically, we study the $G_E/G_M$ ratios, the asymptotic behavior of the $F_2/F_1$ ratios, and the flavor dependence of contributions to the form factors. We observe remarkable qualitative agreement of our ab initio theory calculations with experiment. However, one of our intriguing findings is that the proton $G_E/G_M$ ratio does not appear to cross zero near the $Q^2=8$ GeV$^2$ point, contrary to what extrapolated experimental data suggest. Comparison of our calculations with upcoming JLab experimental results will be an important test of nonperturbative QCD methods in the almost-perturbative regime.
KLOE and KLOE-2 data (almost 8 fb$^{-1}$) constitute a unique sample, rich in physics, and the largest dataset ever collected at an electron-positron collider operating at the $\phi$ peak resonance.
In total it corresponds to the production of about 24 billion $\phi$ mesons, whose decays include about 8 billion pairs of neutral K mesons and about 300 million $\eta$ mesons.
A wide hadron physics program, investigating rare meson decays, $\gamma\gamma$ interactions, and dark forces, is thus carried out by the KLOE-2 Collaboration.
The $\eta \to \pi^0 \gamma \gamma$ decay is a test bench for various models and effective theories, such as VMD (Vector Meson Dominance) and ChPT (Chiral Perturbation Theory), which predict branching ratios far from the experimental value. KLOE-2, with its highly pure $\eta$ sample produced in the $\phi \to \eta \gamma$ process, can give a more refined measurement of this branching ratio.
KLOE-2 also continues its tradition of dark-force searches, testing a model complementary to the U boson or "dark photon", in which the dark-force mediator is a hypothetical leptophobic B boson that could show up in the $\phi \to \eta B\to \eta \pi^0\gamma\,, \eta \to \gamma \gamma$ channel. The upper limit on the $\alpha_{\rm B}$ coupling constant will be shown.
A distinctive feature of KLOE-2 is also the possibility to investigate $\pi^0$ production from $\gamma \gamma$ scattering by tagging final-state leptons from $e^+e^- \to \gamma^{(\ast)}\gamma^{(\ast)}e^+e^-\to \pi^0 e^+e^-$ in coincidence with the $\pi^0$ in the barrel calorimeter. The preliminary measurement of the $\gamma^{\ast}\gamma\to \pi^0$ cross section performed with single-tagged events will be reported.
Moreover, searches for the doubly suppressed $\phi\rightarrow \eta\, \pi^+ \pi^-$ decay and the conversion decay $\phi\rightarrow \eta\, \mu^+ \mu^-$ are being performed at KLOE-2 with both $\eta \to \gamma \gamma$ and $\eta \to 3\pi^0$. Clear signals are seen for the first time.
Finally, preliminary and promising results on the $\omega$ cross-section measurement in the $e^+ e^- \to \pi^+ \pi^- \pi^0 \gamma_{\rm ISR}$ channel, obtained using the Initial State Radiation (ISR) method, will also be presented.
Hard Exclusive Meson Production and Deeply Virtual Compton Scattering (DVCS) are widely used reactions to study Generalised Parton Distributions (GPDs). The investigation of GPDs is one of the main goals of the COMPASS-II program. Measurements of the exclusive processes were performed at COMPASS in 2016 and 2017 at the M2 beamline of the CERN SPS, using the 160 GeV/$c$ muon beam scattering off a 2.5 m-long liquid hydrogen target surrounded by a barrel-shaped time-of-flight system to detect the recoiling target proton. The scattered muons and the produced real photons were detected by the COMPASS spectrometer, supplemented by an additional electromagnetic calorimeter for the detection of large-angle photons.
Exclusive $\pi^{0}$ production is the main source of background for the DVCS process, while it also provides complementary information for the parametrisation of GPDs. We will report preliminary results on the exclusive $\pi^{0}$ production cross-section and its dependence on the squared four-momentum transfer $|t|$ and on the azimuthal angle $\phi$ between the scattering plane and the $\pi^{0}$ production plane. The results will provide further input to phenomenological models for constraining GPDs, in particular chiral-odd ("transversity") GPDs.
AMBER is a newly proposed fixed-target experiment at the M2 beam line of the SPS, devoted to a variety of fundamental QCD measurements, with a Proposal recently approved by the CERN Research Board for a Phase-1 program and a Letter of Intent made public for a longer-term program.
Such an unrivaled installation would make the experimental hall EHN2 the site for a great variety of measurements addressing fundamental issues of strong interactions in the medium- and long-term future.
Elastic muon-proton scattering with high-energy muons is proposed as a novel approach to the long-standing puzzle of the proton charge radius. Such a measurement constitutes a highly welcome complementary approach in this area of worldwide activity.
Operating with protons, the largely unknown antiproton production cross section can be measured, which constitutes important input for upcoming activities in the search for Dark Matter.
In particular, the world-unique SPS M2 beam line, when operated with high-energy pions, can be used to shed light on the emergence of hadron masses. How can we explain the emergence of the proton mass and the near masslessness of the pion? The origin of hadron masses is deeply connected to parton dynamics, and to how it differs between protons and mesons.
For a longer-term program, an upgrade of the M2 beam line with radio-frequency separation of high-energy, high-intensity kaon and antiproton beams would offer further unique opportunities to shed new light on light-meson structure and spectroscopy.
The rich physics program planned for the AMBER experiment will be presented. Worldwide competition and possible timelines will be discussed.
I will present our recent study of modeling uncertainties for fiducial signatures of the ttW process in the three-lepton decay channel, comparing various approaches such as full off-shell fixed-order NLO predictions and parton-shower-matched predictions based on on-shell ttW production. Finally, we provide a simple prescription to combine both approaches.
The large integrated luminosity accumulated by the ATLAS detector at the highest proton-proton collision energy provided by the LHC allows the study of rare SM top-quark production processes. The observation of the associated production of top quarks with bosons has provided the first direct measurement of the top-quark EW couplings to neutral gauge bosons and the first access to the four-top-quark interaction vertex. Using the data set collected during Run 2 of the LHC (2015-2018, 140/fb of pp collisions at 13 TeV), the ATLAS experiment has observed ttX production, with X = gamma, Z, H, and single-top-quark production with X = gamma, Z, W. In this contribution, the first differential measurements of the ttZ and ttgamma cross sections are presented, as well as inclusive cross-section measurements for tZq and tgammaq production. The latter is a brand-new result, corresponding to the first observation of this process. Results are also presented from the search for four-top-quark production, where ATLAS has combined searches in several channels to find strong (4.7 sigma) evidence for the existence of this elusive process.
A comprehensive set of measurements of top quark production in association with vector bosons (W, Z, gamma) using data collected by the CMS experiment is presented. Differential cross sections are measured as functions of several kinematic observables from the final state physics objects and compared to standard model predictions.
Since its discovery at the Large Hadron Collider in 2012, the Higgs boson has arguably become the most famous of the Standard Model particles, and many measurements have been performed in order to assess its properties. Among others, these include measurements of the Higgs boson's ${\cal CP}$ state, which is predicted to be ${\cal CP}$-even. Even though a pure ${\cal CP}$-odd state has been ruled out, a possible admixture of a ${\cal CP}$-odd Higgs state has yet to be excluded. In this talk we will present predictions for the associated production of a leptonically decaying top-quark pair and a stable Higgs boson, $p p \to e^+ \nu_e\, \mu^- \bar{\nu}_\mu\,b\bar{b}\,H$, with possible mixing between ${\cal CP}$-even and ${\cal CP}$-odd states, at NLO in QCD for the LHC with $\sqrt{s} = 13$ TeV. Finite top-quark width effects as well as all double-, single- and non-resonant Feynman diagrams, including their interference effects, are taken into account. We compare the behaviour of the ${\cal CP}$-even, -odd and -mixed scenarios for the integrated fiducial cross sections as well as for several key differential distributions. In addition, we show that both NLO corrections and off-shell effects play an important role even at the level of integrated fiducial cross sections, and that they are further enhanced in differential distributions. Even though we focus here on the Standard Model Higgs boson, the calculations could be straightforwardly applied to models that have an extended Higgs sector and predict the existence of ${\cal CP}$-odd Higgs-like particles, such as the two-Higgs-doublet model.
One of the greatest achievements of the LHC has been the discovery of the Higgs boson in 2012. Since then, the properties of this newly discovered particle have been widely tested. For instance, we need to understand how this particle couples to the other fundamental particles. The coupling of the Higgs boson to the heaviest of the quarks, the top quark, has been probed first indirectly and, more recently, directly via the $t\bar{t}H$ process, first observed in 2018. The Higgs boson predominantly decays into a bottom-quark pair, $H\rightarrow b\bar{b}$. Therefore, $t\bar{t}H(H\rightarrow b\bar{b})$ is a prime ingredient for extracting information on the top-Yukawa coupling. Despite the larger statistics obtainable with this process, the large number of jets in the fully decayed final state complicates the picture: this process suffers from a huge background. Top-quark pair production in association with a bottom-quark pair, $t\bar{t}b\bar{b}$, represents an irreducible background, so a correct description of this background is needed to discriminate it from the actual signal. In this talk I will present the latest theoretical results for $pp\rightarrow t\bar{t}b\bar{b}$ at the LHC in the dileptonic decay channel of the top quark. These predictions are NLO-accurate in QCD and include all full off-shell effects. I will also investigate the size of these full off-shell effects by comparing the full off-shell calculation to the one obtained in the Narrow-Width Approximation. The fully decayed final state contains at least four $b$-jets, two coming from the top-quark decays and two mainly from gluon splitting; I will refer to the latter as prompt $b$-jets. Hence, in this talk I will also provide a prescription to distinguish the prompt $b$-jets from those coming from the decays of the top quarks. The importance of this study is again related to the fact that $t\bar{t}b\bar{b}$ has the same final state as $t\bar{t}H$ when the Higgs boson decays into $b\bar{b}$. Therefore, this prescription can also be used for $t\bar{t}H(H\rightarrow b\bar{b})$ to distinguish between the decay products of the Higgs boson and of the top quarks.
In this talk we show our results for the soft-gluon threshold resummation of the four-top production process. This is the first time threshold resummation has been applied to a 2$\rightarrow$4 hadronic process. By matching the next-to-leading-logarithmic (NLL) result to the available next-to-leading-order (NLO) result, we achieve NLO+NLL$'$ precision for the total cross section, where we also take into account $\mathcal{O}(\alpha_s)$ terms that are constant at threshold. The threshold-resummed result shows a reduced scale dependence compared to the fixed-order next-to-leading-order calculation.
The long-standing mismatch between the measured muon magnetic moment and its Standard Model (SM) prediction (the so-called (g-2)$_{\mu}$ anomaly) remains one of the most pressing questions in particle physics. Recently, the Muon g-2 Collaboration at Fermilab reported its latest results on the muon magnetic moment measurement. The combination of this measurement with the previous Brookhaven Muon g-2 experiment results, compared to the updated theoretical value, confirms a discrepancy of 4.2$\sigma$ with respect to the SM prediction. There is an ongoing worldwide experimental and theoretical effort to elucidate this anomaly.
The existence of a sub-GeV Z$'$ boson, arising in an SM extension obtained by gauging the difference of the muon and tau lepton numbers, $L_\mu - L_\tau$, is one of the most appealing New Physics explanations of this anomaly. The g-2 discrepancy can be generated via one-loop Z$'$ contributions. Furthermore, the Z$'$ can mediate a new interaction between the SM and Dark Matter (DM), explaining DM as a thermal freeze-out relic. Such a boson can be produced in the reaction of a high-energy muon scattering off nuclei via dark bremsstrahlung, and searched for in missing-energy events due to Z$'$ decays either to neutrinos or to DM particles. NA64$_\mu$ is a pioneering missing-energy experiment using the unique M2 high-energy, high-intensity muon beam line at the CERN Super Proton Synchrotron. A pilot run of the experiment took place in 2021, testing the feasibility of the technique and measuring for the first time the beam properties, the trigger rate and the reconstructed muon momentum at low beam intensity. In 2022 the experiment will resume data taking. The results from both runs and the future prospects of the experiment to decisively determine whether the existence of a light Z$'$ could explain this anomaly will be discussed in this talk.
Hadronic $\tau$ decays are studied as probes of new physics. We determine the dependence of several inclusive and exclusive $\tau$ observables on the Wilson coefficients of the low-energy effective theory describing charged-current interactions between light quarks and leptons. The analysis includes both strange and non-strange decay channels. The main result is the likelihood function for the Wilson coefficients in the tau sector, based on up-to-date experimental measurements and state-of-the-art theoretical techniques. The likelihood can be readily combined with inputs from other low-energy precision observables. We discuss a combination with nuclear beta, baryon, pion, and kaon decay data. In particular, we provide a comprehensive and model-independent description of the new-physics hints in the combined dataset, which are known under the name of the Cabibbo anomaly.
Metastable pionic helium is a three-body exotic atom composed of a helium nucleus, an electron, and a negatively charged pion occupying a highly excited state with principal and orbital angular momentum quantum numbers $n \sim l+1 \sim 17$ [1,2] and a 7 ns average lifetime. We recently used the 590 MeV ring cyclotron facility of PSI to synthesize pionic helium atoms in a helium target, and induced an infrared pionic transition $(n,l)=(17,16)\to(17,15)$ at a resonance frequency $\nu=183760$ GHz. This laser transition triggered an electromagnetic cascade that resulted in the $\pi^-$ being absorbed into the helium nucleus. By further improving the experimental precision and comparing the atomic frequencies with the results of three-body QED calculations, the pion mass may be determined to high precision. Limits may also be established on exotic forces that arise between pions and nuclei.
In antiprotonic helium atoms, the antiproton occupies a state of $n \sim l+1 \sim 38$. The ASACUSA collaboration at CERN's Antiproton Decelerator facility observed an anomalous narrowing of the laser resonance lines of atoms embedded in superfluid helium, such that a resolution of 2 ppm was achieved despite the fact that the atoms were surrounded by a matrix of helium atoms. This may imply that exotic atoms containing kaons or other negatively charged hadrons may also be studied with high spectral resolution [3]. We intend to improve the precision of these experiments in the future, so that quantum electrodynamics in a hadron-antihadron bound system may be studied to heretofore unprecedented precision [4,5].
[1] M. Hori, H. Aghai-Khozani, A. Sótér, A. Dax, D. Barna, "Laser spectroscopy of pionic helium atoms", Nature 581, 37 (2020).
[2] M. Hori, A. Sótér, V. I. Korobov, "Proposed method for laser spectroscopy of pionic helium atoms to determine the charged-pion mass", Phys. Rev. A 89, 042515 (2014).
[3] A. Sótér, H. Aghai-Khozani, D. Barna, A. Dax, L. Venturelli, M. Hori, "High-resolution laser resonances of antiprotonic helium in superfluid 4He", Nature 603, 411 (2022).
[4] M. Hori et al., "Two-photon laser spectroscopy of antiprotonic helium and the antiproton-to-electron mass ratio", Nature 475, 484 (2011).
[5] M. Hori et al., "Buffer-gas cooling of antiprotonic helium to 1.5 to 1.7 K, and antiproton-to-electron mass ratio", Science 354, 610 (2016).
The Deep Underground Neutrino Experiment (DUNE) is an international project aimed at neutrino physics and astrophysics and at searches for phenomena predicted by theories beyond the Standard Model. The excellent imaging, particle tracking and identification capabilities of the Liquid Argon Time Projection Chamber (LArTPC) technology used in the Far Detector allow the experiment to achieve high sensitivity to various rare processes. Grand Unified Theories (GUTs) predict baryon-number non-conservation effects, such as nucleon decay. Some GUTs, including those based on Supersymmetry (SUSY), favor nucleon decays with a kaon in the final state. Here we discuss the sensitivity of DUNE to some nucleon decay modes. With full event simulation and reconstruction using the LArSoft package, we have investigated the background to nucleon decay events from atmospheric neutrino interactions and particle misidentification, and utilized machine learning techniques to enhance the discrimination between signal and background.
The 2020 Update of the European Strategy for Particle Physics explicitly highlights the need for programs at the so-called intensity frontier which exploit the unique potential of European laboratories. The European Spallation Source (ESS), presently under construction in Lund, Sweden, is a multi-disciplinary international laboratory that will operate the world's most powerful pulsed neutron source. Taking advantage of this, the HIBEAM/NNBAR collaboration [1] has proposed a two-stage program of experiments to perform high-precision searches for neutron conversion in a range of baryon number violation (BNV) channels. This culminates in an ultimate sensitivity increase for neutron $\to$ antineutron oscillations of three orders of magnitude over the previously attained limit, obtained at the Institut Laue-Langevin (ILL). The opportunity for a three-orders-of-magnitude improvement in the test of a global symmetry is rare.
The observation of BNV via free neutron oscillations would be of fundamental significance, with implications for many open questions in modern physics, including the origin of the matter-antimatter asymmetry, dark matter, the possible unification of fundamental forces, and the origin of neutrino mass.
The first stage of this program, HIBEAM (High Intensity Baryon Extraction and Measurement), will exploit the ESS fundamental physics beamline. This stage focuses principally on searches for neutron conversion to sterile neutrons n' belonging to a "dark" sector, and on a search for neutron $\to$ antineutron oscillations via intermediate dark-neutron states.
The second stage, NNBAR, will exploit a large beam port, specifically designed for this experiment in the ESS target station monolith to maximize the neutron flux, and will search directly for neutron $\to$ antineutron oscillations.
The NNBAR experiment would take neutrons produced by the ESS source and reflect and focus them through a magnetic-field-free region towards a distant carbon target. The target is surrounded by a detector to observe the multi-pion state arising from the baryon-number-violating annihilation signal of an antineutron with a nucleon in a carbon nucleus.
Work is ongoing on conceptual design reports for both the HIBEAM and NNBAR stages.
This talk will give an overview of the motivation of the experiment, the experimental techniques, and the current state of the research.
[1] A. Addazi et al., J. Phys. G 48 (2021) 7, 070501
The search for possible violations of the combined charge, parity, and time-reversal (CPT) symmetry is yet another approach to testing New Physics; a bound state of an electron and a positron (positronium), as the lightest matter-antimatter system and at the same time an eigenstate of the C and P operators, is therefore a unique probe in such an endeavour. The test is performed by measuring angular correlations in the annihilations of this lightest leptonic bound system. With the Jagiellonian Positron Emission Tomograph (J-PET) we have collected an unprecedented range of kinematical configurations of exclusively recorded annihilations of the positronium triplet state (ortho-positronium) into three photons. Employing a novel technique for estimating the positronium spin axis on an event-by-event basis, we determined the complete distribution of the angular correlation between the spin and the annihilation plane of ortho-positronium. We present the recently published result for the expectation value of this correlation, determined at the precision level of 10$^{-4}$, an over three-fold improvement on the previous measurement. We also discuss the prospects for reaching the 10$^{-5}$ precision level in the CPT symmetry test with the J-PET detector.
Tests of Lorentz invariance continue to inform and challenge our modern understanding of spacetime symmetries. Using a model-independent framework based on effective field theory, generic perturbations from exact Lorentz invariance, CPT invariance, and other fundamental symmetries can be studied in a wide class of physical systems. Despite the large number of constraints extracted over the past two decades, stringent limits on many quark-sector effects remain relatively scarce. We present preliminary results on several coefficients parametrizing Lorentz- and CPT-violating effects in the quark sector using deep inelastic scattering data collected in 2006 by the ZEUS experiment.
In the past few years, the use of Machine and Deep Learning techniques has become more and more viable, thanks to the availability of tools which allow people without specific knowledge of data science and complex networks to build AIs for a variety of research fields. This has encouraged the adoption of such techniques: in the context of High Energy Physics, new ML-based algorithms are being tested for event selection in trigger operations, end-user physics analysis, computing metadata-based optimizations, and more. Time-critical applications can benefit from implementing algorithms on low-latency hardware such as specifically designed ASICs and the programmable micro-electronics devices known as FPGAs. The latter offer a unique blend of the benefits of both hardware and software. Indeed, they implement circuits just like hardware, providing power, area and performance benefits over software, yet they can be reprogrammed cheaply and easily to implement a wide range of tasks, at the expense of performance with respect to ASICs.
In order to facilitate the translation of ML models into the usual workflow for programming FPGAs, a variety of tools have been developed. One example is the HLS4ML toolkit, developed by the HEP community, which allows the translation of neural networks built using tools like TensorFlow into a High-Level Synthesis description (e.g., C++) in order to implement this kind of ML algorithm on FPGAs.
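As an illustration of this workflow, a minimal hls4ml conversion of a small Keras model might look as follows (a sketch: the model, output directory and FPGA part identifier are placeholders, not the configuration used in this work):

    import hls4ml
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # a small dense network standing in for a trigger-style classifier
    model = Sequential([
        Dense(32, activation='relu', input_shape=(16,)),
        Dense(5, activation='softmax'),
    ])

    # derive an hls4ml configuration (precision, reuse factor) from the model
    config = hls4ml.utils.config_from_keras_model(model, granularity='model')

    # translate the network into an HLS project for a given FPGA part
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir='hls_prj',
        part='xcu250-figd2104-2L-e')

    hls_model.compile()           # build the C-simulation library
    # y = hls_model.predict(x)    # bit-accurate emulation of the firmware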
This paper presents and discusses the activity started at the Physics and Astronomy Department of the University of Bologna and INFN Bologna, devoted to preliminary studies for the trigger systems of the Compact Muon Solenoid (CMS) experiment at the CERN LHC accelerator. A broader-purpose open-source project from Xilinx (a major FPGA producer) called PYNQ is being tested in combination with the HLS4ML toolkit. The purpose of PYNQ is to grant designers the possibility to exploit the benefits of programmable logic and microprocessors using the Python language. This software environment can be deployed on a variety of Xilinx platforms, from IoT devices like the ZYNQ-Z1 board to high-performance ones like Alveo accelerator cards, and on cloud AWS EC2 F1 instances. The use of cloud computing in this work allows us to test the capabilities of this workflow, from the creation and training of a neural network and the creation of an HLS project using HLS4ML, to managing NN inference with custom Python drivers.
The main application explored in this work is in the context of the CMS trigger system, where new reconstruction algorithms are being developed for the High-Luminosity phase of the LHC (HL-LHC) and the related increase in instantaneous luminosity. Indeed, Machine Learning techniques have already shown promising results in the measurement of the transverse momentum of muons at the Level-1 trigger. By implementing such a model on an FPGA, we can take advantage of smaller latencies with respect to traditional inference algorithms running on CPUs.
This work is also part of a project aimed at creating and offering FPGA-as-a-Service in the INFN Cloud service catalogue. In this context, new Alveo boards have been purchased by INFN in order to test the feasibility of the proposal and to compare the performance obtained using FPGAs made available on INFN Cloud with that obtained using third-party cloud computing, in particular Amazon EC2 F1 instances.
The hardware and software set-up, together with performance tests on various baseline models used as benchmarks, will be presented. The presence of overhead causing an increase in latency will be investigated. Finally, the consistency of the NN predictions with respect to a more traditional way of interacting with the FPGA using C++ code will be verified.
After LS3, the LHC will increase its instantaneous luminosity by a factor of 7, leading to the High-Luminosity LHC (HL-LHC). At the HL-LHC, the number of proton-proton collisions in one bunch crossing (called pileup) will increase significantly, putting more stringent requirements on the electronics and real-time data processing capabilities of the LHC detectors.
The ATLAS Liquid Argon (LAr) calorimeter measures the energy of particles produced in LHC collisions. LAr also has trigger capabilities to identify potentially interesting events. In order to enhance the physics discovery potential of the ATLAS detector in the blurred environment created by pileup, an excellent resolution of the deposited energy and an accurate determination of the deposit time are crucial.
The computation of the deposited energy is performed in real time using dedicated data acquisition electronic boards based on FPGAs, chosen for their capacity to process large amounts of data with very low latency. The energy computation is currently done using optimal filtering algorithms that assume a nominal pulse shape of the electronic signal. These filter algorithms are adapted to the ideal situation of very limited pileup and no timing overlap of the electronic pulses in the detector. However, with the increased luminosity and pileup, the performance of the filter algorithms degrades significantly, and no further extension or tuning of these algorithms can recover the lost performance.
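Schematically (standard optimal-filtering formalism, simplified here), the energy and time are reconstructed from $N$ pedestal-subtracted ADC samples $s_i$ as weighted sums,
$$E \,=\, \sum_{i=1}^{N} a_i\, s_i\,, \qquad E\,\tau \,=\, \sum_{i=1}^{N} b_i\, s_i\,,$$
where the coefficients $a_i$ and $b_i$ are computed from the nominal pulse shape and the noise autocorrelation. It is precisely this fixed-pulse-shape assumption that breaks down when pileup pulses overlap in time.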
The back-end electronic boards for the Phase-II upgrade of the LAr calorimeter will use the next high-end generation of INTEL FPGAs, with increased processing power and memory. This is a unique opportunity to develop the tools needed to run more complex algorithms on these boards. We developed several neural networks (NNs) that show significant performance improvements with respect to the optimal filtering algorithms. The main challenge is to implement these NNs efficiently in the dedicated data acquisition electronics. Special effort was dedicated to minimising the required computational power while optimising the NN architectures.
Five NN algorithms based on CNN, RNN, and LSTM architectures will be presented. The improvement in energy resolution and in the accuracy of the deposited time compared to the legacy filter algorithms, especially for overlapping pulses, will be discussed. The implementation of these networks in firmware will be shown. Two implementation categories, in VHDL and in Quartus HLS code, are considered. The implementation results on Stratix 10 INTEL FPGAs, including resource usage, latency, and operating frequency, will be reported. Approximations for the firmware implementations, including the use of fixed-point arithmetic and lookup tables for activation functions, will be discussed. Implementations using time multiplexing to reduce resource usage will be presented. We will show that two of these NN implementations are viable solutions that fit the stringent data-processing requirements on latency (O(100 ns)) and bandwidth (O(1 Tb/s) per FPGA) needed for ATLAS detector operation.
The ATLAS experiment plans to upgrade its Trigger and DAQ system for the HL-LHC. Due to the expected large amount of data, one of the key challenges is how to filter the events in a short time. Part of the filtering is performed based on calorimeter and muon spectrometer information; further event filtering is then done in the Event Filter (EF) system with data that include those from the Inner Tracker (ITk). From the ITk, O(10^5-10^6) clusters are expected per beam crossing. Within those clusters, the EF needs to perform regional tracking at 1 MHz.
In this report, we will introduce one of the proposals for this event filtering, based on an FPGA solution. In this setup, we adopt a Hough transform algorithm on the FPGA to filter the cluster candidates associated with a track. The algorithm has been implemented on a VC709 board, which carries a Virtex-7 FPGA. In order to evaluate its performance, we used simulated hit clusters from single-muon events with 200 pileup events.
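The voting step of such a Hough transform can be sketched in software as follows (a simplified 2D $(\phi_0, q/p_T)$ accumulator with illustrative binning and constants, not the values used in the firmware):

    import numpy as np

    def hough_vote(hits, n_phi=64, n_qpt=32, qpt_max=1.0, A=0.6):
        """Fill a (phi0, q/pT) accumulator from hits given as (r [m], phi [rad]).

        Each hit votes along a line in Hough space via the small-curvature
        helix relation phi0 ~ phi + A*(q/pT)*r; hits from one track pile up
        in a single accumulator cell, which is then kept as a track candidate."""
        acc = np.zeros((n_phi, n_qpt), dtype=np.int32)
        qpt = np.linspace(-qpt_max, qpt_max, n_qpt)  # q/pT bin centres [1/GeV]
        for r, phi in hits:
            phi0 = (phi + A * qpt * r) % (2 * np.pi)
            iphi = (phi0 / (2 * np.pi) * n_phi).astype(int)
            acc[iphi, np.arange(n_qpt)] += 1
        return acc

    # hits of a single muon candidate (r, phi), illustrative values
    hits = [(0.05, 0.10), (0.10, 0.13), (0.20, 0.19), (0.30, 0.25)]
    acc = hough_vote(hits)
    peak = np.unravel_index(acc.argmax(), acc.shape)  # (phi0 bin, q/pT bin)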
We propose a signal-agnostic strategy to reject QCD jets and identify anomalous signatures in a High-Level Trigger (HLT) system at the LHC. Soft unclustered energy patterns (SUEP) could be such a signal: predicted in models with strongly coupled hidden valleys, they are primarily characterized by a nearly spherically symmetric signature with an anomalously large number of soft charged particles, in contrast with the comparatively collimated spray-of-hadrons signature of QCD jets. We target the experimental nightmare scenario, i.e., SUEP in exotic Higgs decays, where all dark hadrons decay promptly to Standard Model hadrons. We design a three-channel convolutional autoencoder, whose channels are the reconstructed energy deposits at the HLT in the eta-phi plane in the inner tracker, the electromagnetic calorimeter, and the hadron calorimeter. By processing raw-event information, this application would be ideal for central online or offline computing workflows. Our study focuses on detecting a SUEP signal; however, the technique can be applied to any new-physics model that predicts signatures anomalous to QCD jets.
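A minimal sketch of such a three-channel convolutional autoencoder (illustrative image size and layer widths; the actual HLT granularity and training setup are not reproduced here):

    from tensorflow.keras import layers, models

    # input: an eta-phi image with three channels
    # (tracker, ECAL and HCAL energy deposits)
    inp = layers.Input(shape=(32, 32, 3))
    x = layers.Conv2D(16, 3, activation='relu', padding='same')(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(8, 3, activation='relu', padding='same')(x)
    latent = layers.MaxPooling2D(2)(x)            # compressed representation
    x = layers.Conv2D(8, 3, activation='relu', padding='same')(latent)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, activation='relu', padding='same')(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(3, 3, activation='relu', padding='same')(x)

    autoencoder = models.Model(inp, out)
    autoencoder.compile(optimizer='adam', loss='mse')
    # trained on standard (QCD) events only: a large reconstruction
    # loss on a new event flags it as anomalous, e.g. SUEP-like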
This submission describes revised plans for Event Filter Tracking in the upgrade of the ATLAS Trigger and Data Acquisition system for the high-pileup environment of the High-Luminosity Large Hadron Collider (HL-LHC). The new Event Filter Tracking system is a flexible, heterogeneous commercial system consisting of CPU cores and possibly accelerators (e.g., FPGAs or GPUs) to perform the compute-intensive Inner Tracker charged-particle reconstruction. Demonstrators based on commodity components have been developed to support the proposed architecture: a software-based fast tracking demonstrator, an FPGA-based demonstrator, and a GPU-based demonstrator. Areas of study are highlighted in view of a final system for HL-LHC running.
The High-Luminosity LHC (HL-LHC) will usher in a new era in high-energy physics. The HL-LHC experimental conditions entail an instantaneous luminosity of up to 75 Hz/nb and up to 200 simultaneous collisions per bunch crossing (pileup). To cope with these conditions, the CMS detector will undergo a series of improvements in what is known as the Phase-2 upgrade. In particular, the upgrade of the Data Acquisition and High-Level Trigger (DAQ-HLT) will have to address a much higher event rate and more complex events. In this talk, we will discuss the aspects of the HLT upgrade, detailing the development of the online reconstruction, the construction, characterisation and timing/rate measurement of a simplified HLT menu, the role of heterogeneous architectures in the HLT, and the plan of work and milestones until the beginning of Phase-2.
We developed a novel free-running data acquisition system for the AMBER experiment. The system is based on a hybrid architecture containing scalable FPGA cards for data collection and conventional distributed computing. The current implementation can sustain a data rate of up to 10 GB/s. Data reduction is performed by a filtration farm that decreases the incoming data rate by a factor of 50, to 100-200 MB/s. The filtration framework implements various optimized filter algorithms for the different physics programmes. These algorithms perform partial data decoding and a time and spatial analysis of the data in order to reach a valid filter decision in a semi-online manner. Our system also exploits a mechanism of continuous and iterative time calibration of the detectors, which is required by a continuously running acquisition system. Additionally, this contribution describes a simulation tool able to calculate detector responses to passing particles and convert them into raw data formatted in the free-running protocol. These artificial data are used for testing and validating the readout chain and the filtration framework. The entire system will be tested with a limited number of detectors this year.
The excess of gamma rays in the data measured by the Fermi Large Area Telescope from the Galactic center region is one of the most intriguing mysteries in Astroparticle Physics. This Galactic center excess (GCE) has been measured with respect to different interstellar emission models, source catalogs, data selections and techniques. Although several interpretations have been proposed in the literature, there are no firm conclusions as to its origin. The main difficulty in solving this puzzle lies in modeling a region of such complexity and thus precisely measuring the characteristics of the GCE. In this presentation I will show the results obtained for the GCE using 11 years of Fermi-LAT data, state-of-the-art interstellar emission models, and the newest 4FGL source catalog to provide precise measurements of the energy spectrum, spatial morphology, position, and sphericity of the GCE. I will also present constraints on the interpretation in terms of dark matter particle interactions using the GCE, a gamma-ray analysis of dwarf spheroidal galaxies with LAT data, and AMS-02 cosmic-ray antiproton and positron flux data.
The detection of line-like TeV gamma-ray features would constitute a smoking gun for the discovery of TeV-scale particle dark matter. We report the first search for dark matter spectral lines in the Galactic Centre region up to gamma-ray energies of 100 TeV with the MAGIC telescopes, located on the Canary island of La Palma (Spain). This region is expected to host the most easily detectable dark matter halo due to its size and proximity and is therefore well suited for this kind of search. Observations at large zenith angles significantly increase the telescopes' collection area and sensitivity for gamma rays in the TeV regime. We present the results obtained with more than 200 hours of large-zenith-angle observations of the Galactic Centre region with MAGIC, which allow us to obtain competitive limits on the dark matter annihilation cross-section at heavy particle masses ($\langle\sigma v\rangle < 5\times10^{-28}\ \mathrm{cm^3 s^{-1}}$ at 1 TeV and $\langle\sigma v\rangle < 1\times10^{-25}\ \mathrm{cm^3 s^{-1}}$ at 100 TeV), improving the best current constraints above 20 TeV. In addition, we also study the impact of an inner cored dark matter halo on the probing of the annihilation cross-section as a conservative scenario. Finally, we use the derived limits to constrain supersymmetric wino models.
Precision measurements of cosmic-ray positrons are presented up to 1.4 TeV, based on 3.4 million positrons collected by the Alpha Magnetic Spectrometer on the International Space Station. The positron flux exhibits a complex energy dependence. Its distinctive properties are: (a) a significant excess starting from 24.2 GeV compared to the lower-energy, power-law trend; (b) a sharp drop-off above 268 GeV; (c) in the entire energy range the positron flux is well described by the sum of a term associated with positrons produced in the collisions of cosmic rays, which dominates at low energies, and a new source term of positrons, which dominates at high energies; and (d) a finite energy cutoff of the source term at 887 GeV, established with a significance of 4.5$\sigma$. These experimental data on cosmic-ray positrons show that, at high energies, they predominantly originate either from dark matter annihilation or from new astrophysical sources.
The fluxes and flux ratios of charged elementary particles in cosmic rays are presented in the absolute rigidity range from 1 up to 2000 GV. In the absolute rigidity range from $\sim$60 to $\sim$500 GV, the antiproton and positron fluxes are found to have nearly identical rigidity dependence. In this presentation, particular emphasis is placed on new observations of the properties of elementary particles in the rigidity range above 500 GV.
Extraterrestrial neutrinos can be used as messengers to probe the presence of dark matter particles in our Galaxy. Indeed, sizable fluxes of high-energy neutrinos are expected from the pair annihilation and decay of dark matter in regions where it accumulates to a high density. Massive celestial bodies such as the Sun and the very large reservoir at the centre of the Milky Way were inside the field of view of the ANTARES neutrino telescope, which was operated underwater in the Mediterranean Sea for 16 years and was recently decommissioned. ANTARES could trace the arrival direction of neutrinos with a precision of half a degree. A search for signatures of Weakly Interacting Massive Particles (WIMPs) has been performed on 14 years of all-flavour neutrino data, yielding competitive upper limits on the strength of WIMP annihilation. Other non-WIMP landscapes, such as models predicting heavy dark matter candidates, have been tested with dedicated searches in ANTARES data. Indirect dark matter searches are being continued with the KM3NeT telescopes, currently under construction in the Mediterranean Sea.
Neutrino detectors, such as the IceCube telescope, can be used to perform indirect dark matter searches. Under the assumption that dark matter is made of Weakly Interacting Massive Particles (WIMPs), Standard Model particles are expected to be created by its annihilation or decay. These Standard Model particles could in turn produce neutrinos detectable by the IceCube neutrino telescope. As our galaxy is believed to be embedded in a halo of dark matter whose density increases towards its centre, the Galactic Centre represents an ideal target for indirect searches, with the strongest dark matter annihilation signal at Earth being expected from this direction. In this contribution, the sensitivities of a low-energy indirect search for dark matter in the Galactic Centre are presented, along with the results of a dark matter search towards higher energies, both using IceCube data. The low-energy dark matter search uses eight years of DeepCore data and probes dark matter masses ranging from 5 GeV to 1 TeV, for annihilation through $\nu_e\bar{\nu}_e, \nu_{\mu}\bar{\nu}_{\mu}, \nu_{\tau}\bar{\nu}_{\tau},{\mu}^+{\mu}^-,{\tau}^+{\tau}^-,W^+W^-$ and $b\bar{b}$. When considering dark matter annihilation into ${\tau}^+{\tau}^-$, the sensitivities to the thermally-averaged WIMP self-annihilation cross-section achieved by this analysis demonstrate an improvement by an order of magnitude over previous searches with IceCube and other neutrino telescopes. For the second analysis included in this contribution, five years of IceCube data are used to search for neutrinos from the annihilation and decay of dark matter particles with masses between 10 GeV and 40 TeV. When considering the $\nu\bar{\nu}$ channel, this analysis provides the best limits on the thermally-averaged self-annihilation cross-section for masses below 1 TeV, as well as the leading lower limits on the dark matter decay lifetime from neutrino experiments.
The Mu2e experiment at Fermi National Accelerator Laboratory will search for the charged-lepton-flavour-violating, neutrino-less conversion of negative muons into electrons in the Coulomb field of an Al nucleus. The conversion electron has a monoenergetic 104.967 MeV signature slightly below the muon mass and will be identified by complementary measurements carried out by a high-resolution tracker and an electromagnetic calorimeter (EMC), reaching a single-event sensitivity of about $3\cdot10^{-17}$, four orders of magnitude beyond the current best limit.
The calorimeter is composed of 1348 pure CsI crystals, each read out by two custom UV-extended SiPMs, arranged in two annular disks. The EMC has high granularity, 10% energy resolution and 500 ps timing resolution for 100 MeV electrons, and will need to maintain extremely high levels of reliability and stability in a harsh operating environment with high vacuum, a 1 T magnetic field and radiation exposures up to 100 krad and $10^{12}\ n_{1 MeV}/cm^2$.
The calorimeter design, along with the custom front-end electronics, cooling and mechanical systems, was validated through an electron beam test on a large-scale 51-crystal prototype (Module-0). Extensive test campaigns were carried out to characterise and verify the performance of the crystals, photodetectors, and analogue and digital electronics, including hardware stress tests and irradiation campaigns with neutrons, protons, and photons. The production and QC phases of all calorimeter components are about to be completed. A full vertical slice test with the final electronics has been carried out on the Module-0 at LNF, along with the implementation and validation of the calibration procedures. Final assembly is due in summer 2022. The status of construction and assembly will be summarised, along with plans for commissioning and first calibration.
The Mu2e experiment at Fermilab will search for the Standard-Model-forbidden conversion, within the field of a nucleus, of a negative muon into an electron. A clean discovery signature is provided by the observation of mono-energetic conversion electrons with an energy of 104.967 MeV. If the conversion is not observed, Mu2e can set a limit on the ratio between the conversion and capture rates below $8\cdot 10^{-17}$ at 90$\%$ confidence level, improving the current limit by five orders of magnitude.
Mu2e consists of a very large solenoidal system for the production and transport of the muon beam, two detectors (a tracker and a calorimeter) for the analysis of the produced particles, and a cosmic ray veto.
The Mu2e calorimeter complements the tracking information, providing track seeding and particle identification to help reconstruct the mono-energetic electron candidates.
To do this, the calorimeter is required to achieve an energy resolution $<10\%$ and a time resolution of the order of 500 ps for 100 MeV electrons, all while operating in vacuum, in a 1 T magnetic field and in a strong radiation environment.
The calorimeter is made of two annular disks, each filled with 674 pure CsI crystals. Each crystal is read out by two custom-made arrays of UV-extended Silicon Photomultipliers (SiPMs). Two SiPMs glued on a copper holder and two independent Front-End Electronics (FEE) boards form a Readout Unit (ROU). To ensure the consistency and reliability of the ROUs, we have designed, assembled and put into operation an automated Quality Control (QC) station to test the O(1500) units to be assembled.
The QC station is located at LNF (Laboratori Nazionali di Frascati) and can test two ROUs at the same time.
The SiPMs see the light of a 420 nm pulsed LED, attenuated by means of an automated nine-position filter wheel. The transmitted light is diffused uniformly over the SiPM surface by a box with sanded glass, which also provides light tightness and allows working in a controlled environment, thus ensuring good reproducibility of the measurements. The ROUs are held in place by an aluminum plate that also serves as a conductive medium for temperature stabilization at 25 °C, obtained with an external chiller.
The ROUs are powered by a low-voltage and a high-voltage power supply, both controlled remotely. The acquisition of the FEE signals is handled by a Mezzanine Board and a Master Board (Dirac), controlled via USB with Python and C++ programs. The data acquisition has been parallelized, and 10000 events per wheel position can be acquired in around one minute.
A scan over different light intensities is performed for each of the selected supply voltages, V$_{i}$, around the SiPM operational voltage, V$_{op}$, allowing the reconstruction of the response, the gain, the photon detection efficiency and their dependence on V$_{i}$-V$_{op}$.
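One standard way to extract the gain from such an intensity scan is the photostatistics (mean-variance) method; the sketch below illustrates the idea with placeholder numbers and is not necessarily the station's actual analysis code:

    import numpy as np

    def gain_from_led_scan(means, variances):
        """Estimate the SiPM gain (ADC counts per photoelectron).

        For Poisson-distributed LED light, the variance of the integrated
        charge grows linearly with its mean, with a slope equal to the gain
        (neglecting excess noise); each filter-wheel position gives one point."""
        slope, _intercept = np.polyfit(means, variances, 1)
        return slope

    # illustrative pedestal-subtracted charge statistics for one ROU
    means = np.array([50.0, 120.0, 300.0, 700.0, 1500.0])
    variances = np.array([260.0, 610.0, 1520.0, 3510.0, 7540.0])
    print("gain ~ %.1f ADC counts/p.e." % gain_from_led_scan(means, variances))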
We will present the first results obtained on a large sample of production ROUs and the achieved reproducibility.
Dual-readout fibre-based calorimeters have been demonstrated to achieve a superior built-in energy resolution, which can be further enhanced by the application of post-processing reconstruction techniques (such as particle flow). A prototype built from capillary tubes as basic elements has been exposed to test beams with energies ranging from 1 to 100 GeV to measure the electron response and shower shape. The talk will discuss the test beam results and their agreement with the full Geant4 simulation, focusing on the implications for the construction of a full-scale calorimeter prototype. Emphasis will be given to the precision measurement of the lateral electromagnetic shower shape, made possible by the high transverse granularity of the calorimeter.
The aim of the LHCb Upgrade II is to operate at a luminosity of 1.5 x 10^34 cm^-2 s^-1 and to collect a data set of 300 fb^-1. This will require a substantial modification of the current LHCb ECAL due to the high radiation doses in the central region and the increased particle densities. A consolidation of the ECAL already during LS3 would reduce the occupancy in the central region and mitigate substantial ageing effects there after Run 3. Several scintillating sampling ECAL technologies are currently being investigated as baseline solutions in an ongoing R&D campaign: a Spaghetti Calorimeter (SpaCal) with garnet scintillating crystals and tungsten absorber, a SpaCal with scintillating plastic fibres and tungsten or lead absorber, and a Shashlik with polystyrene tiles, lead absorber and fast WLS fibres. Timing capabilities with tens-of-picoseconds precision for neutral electromagnetic particles and increased granularity with a denser absorber in the central region are needed for pile-up mitigation. Time resolutions of 15 ps at high energy were recently observed in test beam measurements of prototype SpaCal and Shashlik modules. Energy resolutions with sampling contributions of about 10%/sqrt(E), in line with the requirements, were observed. The presentation will also cover results from detailed simulations to optimise the design and physics performance of the Upgrade II ECAL.
Future electron-positron colliders, or Higgs factories, impose stringent requirements on the energy resolutions for hadrons and jets for the precision physics programs of the Higgs, Z and W bosons and the top quark. Based on the particle-flow paradigm, a novel highly granular crystal electromagnetic calorimeter (ECAL) has been proposed to address the major challenges of jet reconstruction and to achieve, with its homogeneous structure, an optimal electromagnetic energy resolution of around $2-3~\%/\sqrt{E(GeV)}$.
R&D efforts are ongoing to evaluate the requirements on the sensitive detector units and the physics potential of the crystal calorimeter concept within a full detector system. The requirements on crystal options, photon sensors and readout electronics are parameterised and quantified in a full simulation model based on Geant4. Experiments, including characterisations of crystals and silicon photomultipliers (SiPMs), have been performed to validate the simulation results and optimise simulation parameters. A small-scale ECAL module with a crystal matrix and SiPM arrays is under development for future beam tests to study the performance with EM showers.
Physics performance of the crystal ECAL has been studied with some Higgs physics benchmarks using the particle-flow algorithm "ArborPFA". Compared with the sampling structure of the existing high granularity calorimeters, the crystal ECAL option poses extra difficulties for the cluster pattern recognition and thus demands further PFA optimisations. Progress has been made on optimising the ArborPFA algorithm and parameters therein, leading to a significant improvement of the separation efficiency for close-by showers.
For the highly granular crystal ECAL, a new detector layout has been proposed in which long crystal bars are arranged orthogonally to each other in neighbouring layers. It targets maximum longitudinal segmentation, minimal inactive material in between, and a significant reduction of readout channels (two readout channels per bar, one at each end), but it also poses challenges for pattern recognition and the separation of close-by showers. Dedicated reconstruction software is therefore being developed to address these challenges.
This contribution will present the latest results on the novel crystal ECAL, including simulation and reconstruction studies, hardware developments and physics potentials.
The Crilin (CRystal calorImeter with Longitudinal INformation) calorimeter is a semi-homogeneous calorimeter based on Lead Fluoride (PbF$_2$) crystals read out by surface-mount UV-extended Silicon Photomultipliers (SiPMs). It is a proposed solution for the electromagnetic calorimeter of the Muon Collider. In a Muon Collider, timing could be used to remove signals produced by beam-induced background, which is asynchronous with respect to the bunch crossing. The calorimeter energy resolution is also fundamental to measuring the kinematic properties of jets. Moreover, the calorimeter must operate in a very harsh radiation environment, withstanding a yearly neutron flux of 10$^{14}$ n$_{1MeV}$/cm$^2$ and a dose of 100 krad. The proposed Crilin calorimeter is characterized by a modular architecture based on stackable submodules composed of matrices of PbF$_2$ crystals, with each crystal read out by two series of UV-extended SiPMs. To evaluate the effect of this high-radiation environment on the PbF$_2$, two crystals have been irradiated with both photons and neutrons to study the changes in their transmittance. The results of this study will be presented, as well as the first results from a test with electron beams ranging from 20 up to 120 GeV, performed at CERN in August 2021 with a small-scale prototype with 2 crystals and 4 amplification channels. During 2021 an intense effort was made to improve the performance of the electronics. The evolution of this R&D study, which achieved a time resolution $\sigma_t$ < 20 ps, will also be presented.
It is well known that the lattice structure of a scintillating crystal can influence the development of the electromagnetic processes inside it. For electron and photon beams aligned with the symmetry axis of a crystal, if the strong field condition is satisfied, a reduction of the radiation length X$_0$ is expected. However, these effects have been experimentally observed only in the last few years, with crystal samples limited in number, composition and length. The lack of experimental data for these phenomena makes it harder to properly account for them in the design and simulation of innovative radiation detectors and equipment, such as active beam dumps or compact electromagnetic calorimeters. Recent experiments, performed by the STORM and KLEVER collaborations at the CERN SPS extracted beam lines, demonstrated a significant reduction of X$_0$ for electron and photon beams impinging on a crystal within $\sim 0.1^\circ$ from one of its symmetry axes. This contribution will describe such experiments, reporting preliminary results for a variety of scintillating crystals (a 1 X$_0$ PbWO$_4$, a 2 X$_0$ PbWO$_4$ and a 1 X$_0$ pure W) and also, for the first time, for an oriented Cherenkov crystal (a 1 X$_0$ PbF$_2$).
How are we doing? Learning from others - demographics and practice sharing
The ATLAS Collaboration consists of more than 5000 members from over 100 different countries. Regional, age and gender demographics of the collaboration are presented, including their time evolution over the lifetime of the experiment. In particular, the relative fraction of women is discussed, including their share of contributions, recognition and positions of responsibility, and how these depend on other demographic measures.
Established in 2019, the ALICE Diversity Office monitors issues of diversity, promotes initiatives regarding diversity and inclusion within the ALICE Collaboration, and serves as a direct liaison with the diversity offices at CERN and the other experiments at the LHC. In addition, the ALICE Diversity Office collects yearly statistics used to investigate the time evolution of membership trends at ALICE across various categorical demographic and geographical variables, such as gender, career status, and region of affiliation. This talk will provide a summary of the activities promoted and organized by the ALICE Diversity Office since its inception. Moreover, the time evolution of these statistics, including data from our most recent survey, will be shown, particularly focusing on the level and amount of responsibilities held by ALICE members among differing demographic groups.
LHCb is a collaboration of about 1500 members from 87 institutions based in 19 countries, and representing many more nationalities. We aim to work together on experimental high energy physics, and to do so in the best and most collaborative conditions. The Early Career, Gender & Diversity (ECGD) office exists to support this goal, and in particular has a mandate to work towards gender equality, and support diversity in the collaboration. The office also supports early-career physicists, and since 2020 it includes two early-career representatives. The ECGD officers advise the LHCb management and act as LHCb contacts for all matters related to ECGD. They are available for listening to and advising - in a confidential manner - colleagues who have witnessed or have been subject to harassment, discrimination or other inappropriate behaviour. They help raise awareness in the collaboration for topics related to ECGD. In this talk we briefly introduce the ECGD office, discuss what we have learnt from analysis of the collaboration's demographics, share the conclusions from discussions on different topics debated in dedicated collaboration-meeting sessions, and the early career initiatives that are carried out.
INFN (Istituto Nazionale di Fisica Nucleare) is the Italian research agency dedicated to the study of the fundamental constituents of matter and the laws that govern them. INFN employs 2000 staff (scientists, technicians and administrative personnel) and has about 4000 associated members.
In the last 20 years, gender parity has been monitored and affirmative actions have been proposed. The statistics and the actions will be presented.
We review recent CMS results on hard probes of heavy ion collisions, including jet and electroweak boson production.
Jets are generated in hard interactions in high energy nuclear collisions, propagating through the quark-gluon plasma (QGP) as the jet shower evolves. The interaction of jet shower components with the QGP, known as jet quenching, generates several observable phenomena that provide incisive probes of the structure and dynamics of the QGP. In particular, measurements of the medium-induced modification of jet substructure observables may be sensitive to effects such as color coherence or differences in quark and gluon energy loss due to their different Casimir factors. By utilizing jet grooming techniques to select particular regions of phase space, we can further focus on the most pertinent hard splittings. ALICE is particularly well suited for such substructure measurements due to its precise charged-particle tracking, which enables high-efficiency measurements of jets down to low $p_{\rm T}$ and of narrow splittings. In this talk, we report several recent jet substructure measurements in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, both ungroomed and groomed with the Soft Drop and Dynamical Grooming algorithms. We report measurements of the groomed jet radius, $\Theta_g\equiv R_g/R$, the groomed jet momentum fraction, $z_g$, and the transverse momentum of the groomed splitting, $k_{\rm T,g}$. These measurements show direct evidence of the modification of the angular structure of jets in the QGP, and provide new constraints on the search for large-angle scattering of jets off of quasi-particles in the QGP. New measurements of sub-jet fragmentation, generalized jet angularities, and the variation of the jet axis with jet definition will also be presented, providing insight into the angular and momentum structure of modified jets. Comparisons with model calculations will also be discussed.
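As a reminder of how the grooming step works, the sketch below applies the Soft Drop condition to a toy list of primary Cambridge/Aachen declusterings; in a real analysis these come from re-clustering the jet constituents, and the z_cut and beta values here are only examples.

```python
# Soft Drop keeps the first primary declustering satisfying
#   z > z_cut * (DeltaR / R)**beta
# and discards the softer branches before it; the surviving splitting
# defines the groomed observables z_g and Theta_g = R_g / R.

def soft_drop(declusterings, z_cut=0.2, beta=0.0, R=0.4):
    """declusterings: (z, DeltaR) pairs, ordered from wide to narrow."""
    for z, dR in declusterings:
        if z > z_cut * (dR / R) ** beta:
            return z, dR / R          # (z_g, Theta_g)
    return None                       # jet entirely groomed away

history = [(0.04, 0.35), (0.28, 0.12), (0.45, 0.05)]
print(soft_drop(history))             # -> (0.28, 0.3)
```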
The energy loss of jets (jet quenching) is one of the most important signatures of the deconfined state of quarks and gluons (quark-gluon plasma) created in Pb-Pb collisions at the LHC. The measurement of jets recoiling from a trigger hadron uniquely enables the exploration of medium-induced modification of jet production. Jet deflection via multiple soft scatterings with the medium constituents may result in a broadening of the overall azimuthal correlation between the trigger hadron and the recoiling jet. In addition, the tail of this azimuthal correlation is sensitive to single-hard Molière scatterings off quasi-particles in the medium. The overall yield and R-dependence of the recoil jets also offers important information about jet energy loss and intra-jet broadening.
This contribution presents a measurement of charged-particle jets recoiling from a trigger hadron in pp and Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV. Techniques are employed which allow for a precise data-driven subtraction of the large uncorrelated background contaminating the measurement in Pb-Pb collisions, enabling the exploration of medium-induced modification of jet production and acoplanarity over a wide phase space, including the low jet $p_\mathrm{T}$ region for large jet resolution parameter $R$.
We report high-statistics measurements of semi-inclusive distributions of charged jets recoiling from high-$E_{\text{T}}$ direct photon ($\gamma_{\text{dir}}$) and $\pi^{0}$ triggers in $p$+$p$ and central Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV. In a semi-inclusive approach, event bias is induced solely by the choice of trigger; separately utilizing $\gamma_{\text{dir}}$ and $\pi^{0}$ triggers therefore provides direct comparison of effects due to jet quenching - the suppression of energetic partons due to the energy loss in the Quark-Gluon Plasma (QGP) - for jet populations with different quark/gluon fractions and different in-medium path length distributions. Jets are reconstructed from charged particles using the anti-$\text{k}_{\text{T}}$ algorithm with jet resolution parameters $R_{\text{jet}} = 0.2$ and 0.5. The large uncorrelated background in central Au+Au collisions is removed statistically using a mixed event technique. This enables a jet measurement with well-controlled systematic uncertainties extending to low jet transverse momentum ($p_{\text{T}}$) and large $R_{\text{jet}}$, which are of particular importance in searching for large-angle jet scattering. We report recoil jet yield and trigger-jet acoplanarity distributions for jets with $p_{\text{T}} > 5$ GeV/$c$. The comparison of recoil yields in Au+Au and $p$+$p$ collisions at fixed $R_{\text{jet}}$ probes energy loss in heavy-ion collisions, while the comparison of recoil yields for different $R_{\text{jet}}$ in Au+Au and $p$+$p$ collisions probes intra-jet broadening due to jet quenching. The modification of trigger-jet acoplanarity distributions in central Au+Au collisions relative to $p$+$p$ collisions is sensitive to QGP transport parameters, and can be used to search for evidence of large-angle scattering of jets off of quasi-particles in the QGP. The measured recoil yields and acoplanarity distributions are compared to theoretical calculations.
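The core of the mixed-event technique is a bin-by-bin subtraction of the uncorrelated recoil yield; the minimal sketch below uses invented per-trigger yields purely to illustrate the bookkeeping.

```python
import numpy as np

# Per-trigger recoil-jet yields, 1/N_trig dN/dpT: the same-event spectrum
# contains correlated recoil jets plus combinatorial background; the
# mixed-event spectrum estimates the latter and is subtracted bin by bin,
# with statistical errors added in quadrature. All numbers are toy inputs.

pt_lo   = np.arange(0.0, 30.0, 5.0)                        # bin edges, GeV/c
y_same  = np.array([3.10, 1.05, 0.310, 0.085, 0.0240, 0.0071])
y_mixed = np.array([2.95, 0.90, 0.205, 0.021, 0.0012, 0.0000])
e_same, e_mixed = 0.03 * y_same, 0.03 * y_mixed            # toy stat. errors

y_corr = y_same - y_mixed
e_corr = np.hypot(e_same, e_mixed)
for lo, y, e in zip(pt_lo, y_corr, e_corr):
    print(f"pT in [{lo:4.1f}, {lo + 5:4.1f}) GeV/c: {y:8.4f} +/- {e:.4f}")
```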
We compute the in-medium jet broadening to leading order in energy in the opacity expansion. At leading order in $\alpha_s$ the elastic energy loss gives a jet broadening that grows with $\ln E$. The next-to-leading order in $\alpha_s$ result is a jet narrowing, due to destructive LPM interference effects, that grows with $\ln^2 E$. We find that in the opacity expansion the jet broadening asymptotics are---unlike for the mean energy loss---extremely sensitive to the correct treatment of the finite kinematics of the problem; integrating over all emitted gluon transverse momenta leads to a prediction of jet broadening rather than narrowing. We compare the asymptotics from the opacity expansion to a recent twist-4 derivation and find a qualitative disagreement: the twist-4 derivation predicts a jet broadening rather than a narrowing. Comparison with current jet measurements cannot distinguish between the broadening or narrowing predictions. We comment on the origin of the difference between the opacity expansion and twist-4 results.
Azimuthal angle ($\Delta\phi$) and transverse momentum ($p_\mathrm{T}$) correlations of isolated photons and associated jets, which are sensitive to medium induced parton momentum broadening, are reported for the first time with the latest high statistics pp and PbPb data recorded with the CMS detector at $\sqrt{s_{_{\mathrm{NN}}}} =$ 5.02 TeV. The fully corrected photon+jet azimuthal correlation and $p_\mathrm{T}$ imbalance in PbPb collisions are studied as a function of collision centrality and photon $p_\mathrm{T}$. In addition, a novel measurement of the decorrelation of jet axes calculated with the energy weight and the winner-take-all schemes ($\delta_{jj}$) is reported for the first time. This new observable is insensitive to the initial state radiation which significantly smears the photon+jet azimuthal correlation. A significant modification of $\delta_{jj}$ is reported, which signals a direct detection of in-medium momentum broadening of the leading parton inside the jet. Furthermore, the transverse energy spectra and nuclear modification factors ($R_\mathrm{AA}$) of isolated photons will be discussed.
The FCC-ee offers powerful opportunities to determine the Higgs boson parameters, exploiting over $10^{6}$ e+e- → ZH events and almost $10^{5}$ WW → H events at centre-of-mass energies around 240 and 365 GeV. This contribution spotlights the important measurements of the ZH production cross section and of the Higgs boson mass. The measurement of the total ZH cross section is an essential input to the absolute determination of the HZZ coupling -- a "standard candle" that can be used by all other measurements, including those made at hadron colliders -- at the permil level. A combination of the measured cross sections at the two different centre-of-mass energies further provides the first evidence for the trilinear Higgs self-coupling, and possibly its first observation if the cross-section measurement can be made accurate enough. The determination of the Higgs boson mass with a precision significantly better than the Higgs boson width (4.1 MeV in the Standard Model) is a prerequisite to either constrain or measure the electron Yukawa coupling via direct e+e- → H production at √s = 125 GeV. Approaching the statistical limit of 0.1% and O(1) MeV on the ZH cross section and the Higgs boson mass, respectively, sets highly demanding requirements on accelerator operation (ZH threshold scan, centre-of-mass energy measurement), detector design (lepton momentum resolution, hadronic final state reconstruction performance), theoretical calculations, and analysis techniques (efficiency and purity optimization with modern tools, constrained kinematic fits, control of systematic uncertainties).
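The model-independence of the ZH cross-section measurement rests on the classic recoil-mass technique: with Z → l+l-, the Higgs candidate mass is inferred from the leptons and the known √s alone. A minimal sketch with hand-made toy lepton four-vectors (not from simulation):

```python
import numpy as np

# m_rec^2 = s + m_ll^2 - 2*sqrt(s)*E_ll: nothing about the Higgs decay
# enters, which is what makes the ZH cross-section determination (and
# hence the HZZ coupling extraction) model-independent.

def recoil_mass(sqrt_s, lep_plus, lep_minus):
    """lepton four-vectors as (E, px, py, pz), in GeV."""
    E, px, py, pz = np.array(lep_plus) + np.array(lep_minus)
    m_ll2 = E**2 - (px**2 + py**2 + pz**2)
    m_rec2 = sqrt_s**2 + m_ll2 - 2.0 * sqrt_s * E
    return np.sqrt(max(m_rec2, 0.0))

# toy Z -> mu+ mu- from e+e- -> ZH at sqrt(s) = 240 GeV
mu_plus  = (78.19, 0.0,  45.47,  63.62)
mu_minus = (26.59, 0.0, -15.46, -21.63)
print(f"m_recoil = {recoil_mass(240.0, mu_plus, mu_minus):.1f} GeV")  # ~125
```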
The Future Circular Collider (FCC) is at the heart of the vision of the EU Strategy for Particle Physics, and the highest priority for Europe and its international partners. A technical and financial feasibility study of the 100-km infrastructure and of the colliders that would be installed in it is underway. The physics programme is based on the sequence of a 90-400 GeV high-luminosity and high-precision e+e- collider, FCC-ee, followed by a 100 TeV hadron collider FCC-hh including heavy ion and optionally e-p collisions. A main objective of the FCC is a full program of exploration of the properties of the Higgs boson, making full use of the complementarity of the various machines.
The FCC-ee fully exploits the well-known centre-of-mass energy by using Z tagging to perform a model-independent determination of the ZH cross-section at 240 GeV. This will serve as a standard candle for Higgs coupling measurements: the total width and the ZZ, WW, bb, cc, tautau, gg decays and couplings at the FCC-ee, and similarly the rarer gamma-gamma, mu+mu-, Zgamma final states at the FCC-hh. The measurements of top quark properties at FCC-ee will be instrumental for the measurement of the ttH coupling at the FCC-hh. The Higgs self-coupling, which lies at the root of electroweak symmetry breaking, will be determined from loop effects in the ZH cross-section at different energies, and in a different and complementary way from HH production at FCC-hh. Finally, the FCC-ee offers a unique opportunity to measure the electron Yukawa coupling by searching for s-channel Higgs production. Thanks to the huge rate of Higgs production at FCC-hh, the invisible decay width will be determined at the sub-permil level. Interestingly, the existence of a Yukawa coupling to the neutrinos would imply the existence of right-handed neutrinos, which can be searched for extremely efficiently in Z decays at FCC-ee.
Higgs production cross sections at LHeC (FCC-he) energies are as large as (larger than) those at future $ZH$ $e^+e^-$ colliders. This provides alternative and complementary ways to obtain very precise measurements of the Higgs couplings, primarily from luminous, charged-current DIS. Recent results for LHeC and FCC-he are shown, and their combination with $pp$ (HL-LHC) cross sections is presented, leading to a precision comparable to the most promising $e^+e^-$ colliders. We will show the results for the determination of several signal strengths and couplings to quarks, leptons and EW bosons, and discuss the possibilities for measuring the coupling to top quarks and its CP phase, and the search for invisible decays.
Reference: P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, arXiv:2007.14491 [hep-ex].
Muon collisions at multi-TeV center of mass energies are ideal for studying Higgs boson properties. At these energies the production rates will allow precise measurements of its couplings to fermions and bosons. In addition the double Higgs boson production rate could be sufficiently high to directly measure the parameters of trilinear self-couplings, giving access to the determination of the Higgs potential.
This contribution aims to give an overview of the results that have been obtained so far on Higgs couplings by studying the $\mu^+\mu^- \rightarrow H(b\bar{b}) \nu \bar{\nu}$, $\mu^+\mu^- \rightarrow H(WW^*) \nu \bar{\nu}$ and $\mu^+\mu^- \rightarrow H(b\bar{b})H(b\bar{b}) \nu \bar{\nu}$ processes. All the studies have been performed with a detailed simulation of the signal and physics background samples and by evaluating the effects of the beam-induced background on the detector performance.
Evaluations of the sensitivities to the Higgs boson couplings, together with results on the uncertainty on the double Higgs production cross section and on the trilinear self-coupling, will be discussed at a center-of-mass energy of 3 TeV.
We explore the sensitivity of directly testing the muon-Higgs coupling at a high-energy muon collider. This is strongly motivated if there exists new physics that is not aligned with the Standard Model Yukawa interactions which are responsible for the fermion mass generation. We illustrate a few such examples for physics beyond the Standard Model. With the accidentally small value of the muon Yukawa coupling and its subtle role in the high-energy production of multiple (vector and Higgs) bosons, we show that it is possible to measure the muon-Higgs coupling to an accuracy of ten percent for a 10 TeV muon collider and a few percent for a 30 TeV machine by utilizing three-boson production, potentially sensitive to a new physics scale of $\Lambda \sim 30-100$ TeV.
At the latest European Strategy update in 2020, it was highlighted that the next highest-priority collider should be an e+e- Higgs factory with a strong focus on precision physics. To utilise the clean event environments, a new generation of collider detector technologies is being developed along with novel algorithms to push event reconstruction to its full potential.
This talk reviews key Higgs physics measurements and discusses to what extent their prospects would benefit from advances in high-level reconstruction and improved detector capabilities. For instance, the selection of Higgs and double-Higgs production modes, like ZH vs. ZZ/WW and ZHH vs. ZZH, in fully hadronic decay channels profits not only from the excellent jet energy resolution of particle flow but also from a full kinematic-fit reconstruction, which exploits the known centre-of-mass energy at lepton colliders and also allows invisible jet constituents to be taken into account. Second-generation decays of the Higgs boson are rare, suffer from a huge background at the LHC, and are difficult to tag. Novel approaches to particle identification, and in particular the reconstruction of charged kaons and other strange hadrons, substantially enhance charm- and strange-tagging, enabling among other things a drastic improvement of the limit on the strange-quark Yukawa coupling.
With a technically mature design and a well-understood physics program, the ILC is a realistic option for the realization of a Higgs factory. With the unique physics reach of a linear collider, the ILC meaningfully complements the HL-LHC projections. Energy-staged data collection, the employment of beam polarization, and the capability to reach a TeV centre-of-mass energy enable unique precision to probe BSM models above the discovery limit, as well as to measure the Higgs self-coupling. These and other highlights from the Higgs physics program will be discussed.
In this talk we discuss the results of a full-simulation study exploring CP violation in Higgs production through ZZ fusion. The study is performed for CLIC running at a centre-of-mass energy of 1.4 TeV, assuming that the Higgs boson is realized as a mixture of scalar and pseudoscalar states. By measuring the electron and positron in the final state, the CP-violating mixing angle $\Psi_{\mathrm{CP}}$ can be probed at the HZZ production vertex, and the statistical precision on its measurement determined. This method is complementary to other studies.
The Daya Bay experiment collected, from December 2011 to December 2020, a record sample of electron antineutrinos consisting of more than 6 million events. The reactor antineutrinos are detected via inverse beta decay and tagged through neutron capture on gadolinium or hydrogen, using eight functionally identical detectors located in three experimental halls at different baselines from six nuclear reactors. The relative measurement of the observed antineutrino rate and spectral shape at the different detectors enables significant suppression of key systematic uncertainties. The most recent measurement of the $\sin^2 2\theta_{13}$ mixing amplitude and the $\Delta m^2_{32}$ mass splitting, using the latest available dataset of antineutrino events, will be presented in this talk.
This talk presents the latest results of the reactor antineutrino flux and spectrum measurement at Daya Bay. The antineutrinos were generated by six nuclear reactors with 2.9 GW thermal power each and were detected by eight antineutrino detectors deployed in two near and one far underground experimental halls. Deviations in the measured flux and positron prompt energy spectrum were found compared to the theoretical predictions. The $^{235}$U and the $^{239}$Pu fluxes and spectra were obtained by fitting the flux and spectrum evolution as a function of fission fractions. After that, the reactor antineutrino spectra of IBD reactions were unfolded to provide a data-based prediction for other reactor antineutrino experiments.
In 2021, a joint determination of the reactor antineutrino spectra resulting from the fission of $^{235}$U and $^{239}$Pu was carried out by the Daya Bay and PROSPECT Collaborations. The precision of the derived $^{235}$U spectrum was improved beyond that individually observed by either experiment, and the degeneracy between the derived $^{235}$U and $^{239}$Pu spectra was reduced below that from Daya Bay alone. This is the first measurement of the $^{235}$U and $^{239}$Pu spectra based on a combination of experiments at low- and highly enriched uranium reactors.
Recently, high-energy reactor antineutrinos above 10 MeV were observed for the first time at Daya Bay. A multivariate analysis was applied to statistically distinguish 2500 signal events from background events in nearly 9000 inverse beta-decay candidates in the prompt energy region of 8-12 MeV, rejecting the hypothesis of no reactor antineutrinos with energy above 10 MeV at a significance of $6.2\sigma$. This first direct measurement of high-energy reactor antineutrinos provides a unique data-based reference for other experiments and theoretical models.
We study the status of the reactor antineutrino anomaly in light of recent reactor flux models obtained with the conversion and summation methods. We present a new improved calculation of the IBD yields of the standard Huber-Mueller (HM) model and those of the new models. We show that the reactor rates and the fuel evolution data are consistent with the predictions of the Kurchatov Institute (KI) conversion model and with those of the Estienne-Fallot (EF) summation model, leading to a plausible robust demise of the reactor antineutrino anomaly. We show that the results of several goodness of fit tests favor the KI and EF models over other models that we considered. We also discuss the implications of the new reactor flux models for short-baseline neutrino oscillations due to active-sterile mixing. We show that reactor data give upper bounds on active-sterile neutrino mixing that are not very different for the reactor flux models under consideration and are in tension with the large mixing required by the Gallium anomaly that has been refreshed by the recent results of the BEST experiment. The data-driven isotopic IBD yields can also be obtained from global fits of the experimental rate and evolution data, which provide an anomaly-free model for the prediction of future experiments.
This presentation is based on the following two publications:
[1] C. Giunti, Y. F. Li, C. A. Ternes, and Z. Xin, Reactor antineutrino anomaly in light of recent flux model refinements, (2021), arXiv:2110.06820.
[2] Y. F. Li and Z. Xin, Model-Independent Determination of Isotopic Cross Sections per Fission for Reactor Antineutrinos, (2021), arXiv:2112.11386.
The 20 kton liquid scintillator detector of the Jiangmen Underground Neutrino Observatory (JUNO) is under construction in an underground laboratory in South China and is expected to start data-taking in 2023. With its excellent energy resolution, large detector volume and excellent background control, JUNO is expected to determine the neutrino mass ordering and provide precise measurements of the neutrino oscillation parameters $\sin^{2}\theta_{12}$, $\Delta m_{21}^{2}$, and $|\Delta m_{32}^{2}|$. As a multi-purpose neutrino observatory, JUNO also has world-competitive potential in searches for the diffuse supernova neutrino background (DSNB), core-collapse supernova (CCSN) neutrinos, solar neutrinos, atmospheric neutrinos, geo-neutrinos, nucleon rare decays and other new physics beyond the Standard Model. In this talk, I will present the latest evaluations of the prospects for JUNO's physics goals.
JUNO (Jiangmen Underground Neutrino Observatory) is a large liquid scintillator detector currently under construction in the underground laboratory of Kaiping (Guangdong, China) and expected to be completed in 2023.
The JUNO central detector will contain a 35.4 m diameter acrylic vessel filled with 20 kt of LAB-based scintillator, submerged in a water pool equipped with PMTs acting as a Cherenkov detector. The scintillation light will be read out by 17612 20'' PMTs and 25600 3'' PMTs, reaching a geometric coverage higher than 75%. On top of the main detector, a plastic scintillator tracker will complete the JUNO veto system for cosmic muons.
JUNO's ambitious design primarily aims at the determination of the neutrino mass ordering at high statistical significance ($3-4\,\sigma$ in about 6 years of data taking), by measuring the oscillation pattern of electron antineutrinos generated by two nuclear power plants on a $\sim53$ km baseline from the experimental site. JUNO will target an unprecedented 3% energy resolution at the 1 MeV scale, making it a unique facility for particle and astroparticle physics. Besides its main goal, JUNO indeed aspires to a sub-percent determination of the neutrino oscillation parameters ($\sin^2 \theta_{12}$, $\Delta \mathrm{m}^2_{21}$, and $\Delta \mathrm{m}^2_{31}$), as well as to the measurement of atmospheric neutrinos, solar neutrino precision spectroscopy, and the detection of low-energy neutrinos from supernovae and geo-neutrinos.
In this talk, the JUNO detector design and the status of the experiment construction will be presented.
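The oscillation pattern at the heart of the mass-ordering measurement is the standard three-flavour survival probability for reactor electron antineutrinos; the sketch below evaluates it at the JUNO baseline with illustrative oscillation parameters (close to, but not, the global-fit values).

```python
import numpy as np

# Vacuum survival probability for reactor anti-nu_e:
#   P_ee = 1 - cos^4(t13) sin^2(2 t12) sin^2(D21)
#            - sin^2(2 t13) [cos^2(t12) sin^2(D31) + sin^2(t12) sin^2(D32)]
# with D_ij = 1.267 * dm2_ij[eV^2] * L[m] / E[MeV]. The sign of dm2_31
# (normal vs inverted ordering) shifts the fast wiggles JUNO will resolve.

def P_ee(E_MeV, L_m=53e3, s2_12=0.307, s2_13=0.022,
         dm2_21=7.5e-5, dm2_31=2.5e-3):
    d21 = 1.267 * dm2_21 * L_m / E_MeV
    d31 = 1.267 * dm2_31 * L_m / E_MeV
    d32 = d31 - d21
    c2_13 = 1.0 - s2_13
    return (1.0
            - c2_13**2 * 4*s2_12*(1 - s2_12) * np.sin(d21)**2
            - 4*s2_13*c2_13 * ((1 - s2_12) * np.sin(d31)**2
                               + s2_12 * np.sin(d32)**2))

print(P_ee(np.linspace(2.0, 8.0, 4)))   # survival vs neutrino energy, MeV
```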
In this talk, the program of the Reactor Neutrino Experiments of Turkey (RNET) will be presented.
This program includes a small portable Water-based Liquid Scintillator (WbLS) detector to detect neutrinos from the Akkuyu nuclear power plant, which is planned to begin operating in 2023. The small near-field detector will weigh about 2-3 tons and will be placed less than 100 meters from the reactor cores. The RNET program also includes a medium-size, 30-ton WbLS detector, which will be placed 1-2 km away from the reactor cores and used as a far detector. Both detectors and their response to neutrino interactions were simulated using the GEANT4-based RAT-PAC simulation package. Here, we will share the technical and physical details of both detectors, and discuss the ongoing R\&D effort for neutrino studies in Turkey.
Effective field theories of QCD, such as soft collinear effective theory with Glauber gluons, have led to important advances in the understanding of many-body nuclear effects. We provide first applications to QED processes. We study the exchange of photons between charged particles and the nuclear medium for (anti)neutrino-, electron-, and muon-induced reactions inside a large nucleus. We provide analytical expressions for the distortion of (anti)neutrino-nucleus and charged lepton-nucleus cross sections and estimate the QED-medium effects using the example of elastic lepton-nucleon reactions in the kinematics of modern and future experiments. We find new percent-level effects, which were never accounted for in either (anti)neutrino-nucleus or electron-nucleus scattering. We discuss implications for the extraction of the proton charge radius.
With the KATRIN experiment, the determination of the absolute neutrino mass scale down to cosmologically favored values has come into reach. We show that this measurement provides the missing link between the Standard Model and the dark sector in scotogenic models, where the suppression of the neutrino masses is economically explained by their only indirect coupling to the Higgs field. We determine the linear relation between the electron neutrino mass and the scalar coupling $\lambda_5$ associated with the dark neutral scalar mass splitting to be $\lambda_5=3.1\times10^{-9}$ m$_{\nu_e}$/eV. This relation then induces correlations among the DM and new scalar masses and their Yukawa couplings. Together, KATRIN and future lepton flavor violation experiments can then probe the fermion DM parameter space, irrespective of the neutrino mass hierarchy and CP phase.
Precision luminosity measurements are an essential ingredient to cross section measurements at the LHC, needed to determine fundamental parameters of the standard model and to constrain or discover beyond-the-standard-model phenomena. The luminosity measurement of the CMS detector at the LHC, using proton-proton collisions at 13 TeV during the 2015-2018 data-taking period ("Run 2"), is reported. The absolute luminosity scale is obtained with beam-separation ("van der Meer") scans, and several systematic uncertainty sources are studied. Additional contributions to the total uncertainty in the integrated luminosity originate from the linearity and stability of the detectors used in the luminosity measurement throughout the data-taking period. A novel method to improve the luminosity integration with the physics process Z → μ+μ- is explored.
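To make the vdM idea concrete, here is a toy calibration pass (illustrative only, not the CMS analysis): Gaussian fits to the rate versus beam separation in each plane give the convolved beam widths and peak rate, from which the visible cross section follows.

```python
import numpy as np
from scipy.optimize import curve_fit

# sigma_vis = 2*pi * Sigma_x * Sigma_y * mu_peak / (n1 * n2),
# with Sigma_x,y the convolved beam widths from the x and y scans,
# mu_peak the head-on rate per crossing, and n1, n2 bunch populations.

def gauss(d, mu0, sigma):
    return mu0 * np.exp(-0.5 * (d / sigma) ** 2)

rng = np.random.default_rng(1)
sep = np.linspace(-0.4, 0.4, 17)                      # separation, mm
mu_x = gauss(sep, 0.5, 0.12) + rng.normal(0, 0.005, sep.size)
mu_y = gauss(sep, 0.5, 0.12) + rng.normal(0, 0.005, sep.size)

(mu0x, Sx), _ = curve_fit(gauss, sep, mu_x, p0=[0.5, 0.1])
(mu0y, Sy), _ = curve_fit(gauss, sep, mu_y, p0=[0.5, 0.1])

n1 = n2 = 8.5e10                                      # protons per bunch
sigma_vis = 2 * np.pi * abs(Sx * Sy) * 0.5 * (mu0x + mu0y) / (n1 * n2)
print(f"sigma_vis = {sigma_vis * 1e25:.1f} mb")       # 1 mm^2 = 1e25 mb
```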
The high-luminosity upgrade of the LHC (HL-LHC) is foreseen to reach an instantaneous luminosity a factor of five to seven times the nominal LHC design value. The resulting, unprecedented requirements for background monitoring and luminosity measurement create the need for new high-precision instrumentation at CMS, using radiation-hard detector technologies. This contribution introduces the instrumentation for bunch-by-bunch online luminosity and beam-induced background measurements based on various detector technologies. The CMS Tracker Endcap Pixel Detector (TEPX) will be adapted for precision luminometry by implementing dedicated triggering and readout systems, including a real-time clustering algorithm on an FPGA. The innermost ring of the TEPX last layer (D4R1) will be operated independently from the rest of TEPX, enabling beam monitoring during the LHC ramp and during unqualified beam conditions with a dedicated timing and trigger infrastructure. A key component of the proposed system is a stand-alone luminometer, the Fast Beam Condition Monitor (FBCM), which is fully independent of the central trigger and data acquisition services and able to operate at all times with an asynchronous readout. FBCM is foreseen to utilize silicon-pad sensors with a few ns timing resolution, enabling also the measurement of beam-induced background. The potential for the exploitation of other CMS subsystems, the outer tracker, the hadron forward calorimeter, the barrel muon detectors and the 40 MHz L1 trigger scouting system using a common histogramming firmware will also be discussed.
A precise measurement of the luminosity is a crucial input for many ATLAS physics analyses, and represents the leading uncertainty for W, Z and top cross-section measurements. The final ATLAS luminosity determination for the Run-2 13 TeV dataset is described, based on van der Meer scans during dedicated running periods each year to set the absolute scale, which is then extrapolated to physics running conditions using complementary measurements from the ATLAS tracker and calorimeter subsystems. Nearly all aspects of the analysis have been revisited since the preliminary Run-2 calibration, leading to one of the most precise luminosity calibrations at a hadron collider to date.
The ATLAS physics program at the High Luminosity LHC (HL-LHC) calls for a precision in the luminosity measurement of 1%. A larger uncertainty would represent the dominant systematic error in precision measurements, including the Higgs sector. To fulfill such a requirement in an environment characterized by up to 140 simultaneous interactions per crossing (200 in the ultimate scenario), ATLAS will feature several luminosity detectors. At least some of them must be both calibratable in the van der Meer scans at low luminosity and able to measure up to its highest values. LUCID-3, the upgrade of the present official ATLAS luminometer (LUCID-2), will fulfill such a condition. Two options under study are presented. The first is based on photomultipliers (PMTs) located at a larger distance from the beam-pipe than in LUCID-2 and with a smaller active area. These choices reduce the acceptance of the detector and avoid the saturation of the luminosity algorithms. The second option is based on optical fibers acting both as Cherenkov radiators and as light guides routing the produced light to the readout PMTs. Both detectors will be monitored continuously with a 207Bi radioactive source deposited on the PMT window and, in the case of the fibers, by additional LED light injected simultaneously into the PMT and at the end of the fiber, to monitor possible ageing of the fiber due to radiation. The prototypes of both options, installed in the detector for the upcoming data taking, are also discussed, together with the first results obtained in Run 3.
The LHCb detector optimised its performance in Runs 1 and 2 by stabilising the instantaneous luminosity during a fill. This is achieved by tuning the distance between the two colliding beams according to the measurement of the instantaneous luminosity from hardware-based trigger counters. The upgraded LHCb detector operates at a fivefold instantaneous luminosity compared to the previous runs, and it has a fully software-based trigger. Consequently, a new approach to the luminosity measurement is adopted. New counters, designed with particular attention to maximum stability in time, and a new dedicated detector have been introduced for Run 3. Additionally, in order to verify linearity from calibration to data-taking conditions, per-fill emittance scans are performed. In this talk an overview of the newly implemented methods for luminosity measurement is presented, as well as results from the first weeks of data taking.
Cross section measurements in hadronic collisions are crucial to the physics program of ALICE. These measurements require a precise knowledge of the luminosity delivered by the LHC. Luminosity determination in ALICE is based on visible cross sections measured in dedicated calibration sessions, the van der Meer (vdM) scans.
This contribution presents a review of the ALICE luminosity determination methodology and results during the LHC Run 2 for pp and Pb-Pb collisions where ALICE used three luminometers: the T0 and V0 detectors, and the Zero Degree Calorimeter. By combining information from the ALICE detectors and the LHC instrumentation, an uncertainty of 1.6% (2.2%) on the luminosity measurement for the full sample is achieved in pp (Pb-Pb) collisions.
The Cabibbo-Kobayashi-Maskawa (CKM) element $V_{ub}$ is an important input parameter for the theoretical predictions of many observables in the flavor sector, as it is responsible for the CP-violating phase within the Standard Model. There is a long-standing tension between the tree-level determinations using inclusive $B \to X_u l \nu$ decays (where $X_u$ refers to the sum over all final-state hadrons containing an up quark) and exclusive decays like $B \rightarrow \pi l \nu$, known as the inclusive-exclusive puzzle. We revisit the precision extraction of the CKM matrix element $|V_{ub}|$ from tree-level semileptonic $b \to u l \nu_l$ ($\ell = e, \mu$) decays, incorporating all the available inputs (data and theory) on the $B \rightarrow \pi l \nu$ ($\ell = e, \mu$) decays, including the newly available inputs on the form factors from the light-cone sum rule (LCSR) and lattice QCD (LQCD) approaches. We have reproduced and compared the results with the procedure adopted by the Heavy Flavor Averaging Group (HFLAV), while commenting on the effect of outliers on the fits. After removing the outliers and creating a comparable group of data sets, we discuss a few scenarios for the extraction of $|V_{ub}|$. Our best results for $|V_{ub}|^{exc.}$ are $(3.94 \pm 0.14)\times 10^{-3}$ and $(3.93_{-0.15}^{+0.14})\times 10^{-3}$ in the frequentist and Bayesian approaches, respectively, which are consistent with the most recent estimate for $|V_{ub}|^{inc.}$ from Belle-II within a 1 $\sigma$ confidence interval.
Semileptonic $B$ decays allow the determination of the magnitudes of the CKM matrix elements $|V_{cb}|$ and $|V_{ub}|$, two fundamental parameters of the standard model flavor sector. At Belle II these measurements use both exclusive decays, such as $B\to D^*\ell\nu$ and $B\to\pi\ell\nu$, and inclusive $X_c\ell\nu$ or $X_u\ell\nu$ final states restricted in phase space. The low-background collision environment, along with the possibility of partially or fully reconstructing one of the two $B$ mesons in the event, offers high precision. Recent results on $|V_{cb}|$ and $|V_{ub}|$ are presented, along with a novel measurement of lepton-$q^2$ moments and future perspectives.
Semileptonic heavy-to-light $B$ decays are very intriguing transitions, mainly because a long-standing tension affects the inclusive and exclusive determinations of the CKM matrix element $\vert V_{ub} \vert$. Our goal is to re-examine the $b \to u$ quark transitions through the Dispersive Matrix (DM) approach. The DM method is based on the non-perturbative determination of the dispersive bounds due to unitarity and analyticity, and determines in a model-independent way the hadronic Form Factors (FFs) in the full kinematical range, starting from existing Lattice QCD data at large momentum transfer. By comparing the DM bands of the FFs with the experiments, we obtain $\vert V_{ub}\vert = (3.62 \pm 0.47) \cdot 10^{-3}$ from $B \to \pi$ decays and $\vert V_{ub}\vert = (3.77 \pm 0.48) \cdot 10^{-3}$ from $B_s \to K$ processes, which after averaging yield $\vert V_{ub}\vert = (3.69 \pm 0.34) \cdot 10^{-3}$. We also present a new strategy to extract a more precise value of $\vert V_{ub}\vert$ from $B \to \pi$ decays, based on the application of the unitarity bounds directly to experimental data. In this case, we get $\vert V_{ub}\vert = (3.88 \pm 0.32) \cdot 10^{-3}$. After averaging with the result from $B_s \to K$ decays, our final exclusive value for $\vert V_{ub} \vert$ is $\vert V_{ub} \vert_{excl} = (3.85 \pm 0.27) \cdot 10^{-3}$. All estimates are compatible with the most recent inclusive value $\vert V_{ub} \vert_{incl} = (4.13 \pm 0.26) \cdot 10^{-3}$ within the $1\sigma$ level. Finally, we address the computation of new fully-theoretical estimates of the LFU observables, the forward-backward asymmetry and the lepton polarization asymmetry, for semileptonic $B \to \pi$ and $B_s \to K$ decays.
Though the Belle experiment stopped data taking more than a decade ago, new results on semileptonic B meson decays are still being obtained. This is in part due to new experimental tools developed for Belle II being applied to the Belle data set, such as the FEI (Full Event Interpretation) hadronic and semileptonic tags, which enable new, more precise measurements of $B \to D^{*}\ell\nu$ and $B \to D^{(*)}\pi(\pi)\ell\nu$. Improved analysis methods, such as data-driven background modelling and the determination of the CKM magnitude ratio |Vub|/|Vcb|, allow experimental and theoretical systematics to cancel. The talk also covers other results on semileptonic B decays. All results in this talk are based on the full data set collected by the Belle experiment at the KEKB asymmetric-energy $e^+e^-$ collider.
Semileptonic b-hadron decays to a final state with a heavy lepton are sensitive to new couplings, such as those generated by charged Higgses or leptoquarks. The B-factories and LHCb have performed measurements of these decays using different approaches and techniques. A global average of measurements of ratios of branching fractions to final states with taus or light leptons shows a discrepancy with the Standard Model expectations above 3 standard deviations. A measurement of the combined ratios BF(B->D0 tau nu)/BF(B->D0 mu nu) and BF(B->D tau nu)/BF(B->D mu nu) using 3/fb collected by LHCb in Run 1 is presented, as well as a first measurement of the ratio R(Lc) = BF(Lb -> Lc tau nu) / BF(Lb -> Lc mu nu) using the same dataset.
We present an application of the unitarity-based dispersion matrix (DM) approach of Ref. [1] to the extraction of the CKM matrix element $|V_{cb}|$ from the experimental data on the exclusive semileptonic $B_{(s)} \to D_{(s)}^{(*)} \ell \nu_\ell$ decays [2-4]. The DM method achieves a non-perturbative, model-independent determination of the momentum dependence of the semileptonic form factors. Starting from lattice results available at large values of the 4-momentum transfer and implementing non-perturbative unitarity bounds [5], the behaviour of the form factors in their whole kinematical range is obtained without introducing any explicit parameterization of their momentum dependence. We consider the four exclusive semileptonic $B_{(s)} \to D_{(s)}^{(*)} \ell \nu_\ell$ decays and extract $|V_{cb}|$ from the experimental data for each transition [2-4]. The average over the four channels is $|V_{cb}| = (41.4 \pm 0.8) \cdot 10^{-3}$, which is compatible with the latest inclusive determination at the $1\sigma$ level. We also address the issue of Lepton Flavour Universality by computing purely theoretical estimates of the $\tau/\ell$ ratios of the branching fractions for each channel. In the case of a light spectator quark we obtain $R(D^*) = 0.275(8)$ and $R(D) = 0.296(8)$, which are compatible with the corresponding experimental values within $1.3\sigma$. In the case of a strange spectator quark we obtain $R(D_s^*) = 0.2497(60)$ and $R(D_s) = 0.298(5)$. The different values for $R(D_s^*)$ and $R(D^*)$ may reflect $SU(3)_F$ symmetry-breaking effects, which seem to be present in some of the lattice form factors, especially at large values of the recoil.
[1] M. Di Carlo, G. Martinelli, M. Naviglio, F. Sanfilippo, S. Simula, and L. Vittorio, Unitarity Bounds for Semileptonic Decays in Lattice QCD, Phys. Rev. D 104, 054502 (2021), arXiv:2105.02497 [hep-lat].
[2] G. Martinelli, S. Simula, and L. Vittorio, $\vert V_{cb} \vert$ and $R(D^{(*)})$ using lattice QCD and unitarity, arXiv:2105.08674 [hep-ph].
[3] G. Martinelli, S. Simula, and L. Vittorio, Exclusive determinations of $\vert V_{cb} \vert$ and $R(D^{*})$ through unitarity, arXiv:2109.15248 [hep-ph].
[4] G. Martinelli, M. Naviglio, S. Simula, and L. Vittorio, $|V_{cb}|$, Lepton Flavour Universality and $SU(3)_F$ symmetry breaking in semileptonic $B_s \to D_s^{(*)} \ell \nu_\ell$ decays through unitarity and lattice QCD, arXiv:2204.05925 [hep-lat].
[5] G. Martinelli, S. Simula, and L. Vittorio, Constraints for the semileptonic $B\rightarrow D^{(*)}$ form factors from lattice QCD simulations of two-point correlation functions, Phys. Rev. D 104 (2021) no.9, 094512, arXiv:2105.07851 [hep-lat].
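Common to these unitarity-based analyses is the conformal mapping of the semileptonic q² range onto a small region of the variable z, where analyticity bounds apply. A minimal sketch for B → π kinematics (masses rounded; the |z|-minimising t0 below is one of several conventions, assumed here):

```python
import numpy as np

# Map q^2 in the semileptonic region onto |z| < 1:
#   z = (sqrt(t+ - q^2) - sqrt(t+ - t0)) / (sqrt(t+ - q^2) + sqrt(t+ - t0))
# with t+ = (m_B + m_pi)^2. Unitarity constrains expansions in z, which is
# what lets lattice input at large q^2 determine the full kinematic range.

def z_of_q2(q2, m_B=5.279, m_P=0.140):
    t_plus, t_minus = (m_B + m_P) ** 2, (m_B - m_P) ** 2
    t0 = t_plus * (1.0 - np.sqrt(1.0 - t_minus / t_plus))  # minimises |z|
    a, b = np.sqrt(t_plus - q2), np.sqrt(t_plus - t0)
    return (a - b) / (a + b)

q2 = np.array([0.0, 10.0, 20.0, 26.4])   # GeV^2, up to q2_max for B -> pi
print(np.round(z_of_q2(q2), 3))           # |z| stays below ~0.3
```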
Semileptonic decays of $B$ mesons involving the high-mass $\tau$ lepton are sensitive probes of physics beyond the Standard Model. The ratios of branching fractions $R(D) = \mathcal{B}(B \to D \tau \nu) / \mathcal{B}(B \to D \ell\nu)$ and $R(D^*) = \mathcal{B}(B \to D^* \tau \nu) / \mathcal{B}(B \to D^* \ell \nu)$ $(\ell=e,\mu)$ are independent of the CKM element $|V_{cb}|$ and of other theoretical uncertainties. Based on the 433 $\text{fb}^{-1}$ of data collected at the $\Upsilon(4S)$ resonance by the $BABAR$ detector at the PEP-II collider located at the SLAC National Accelerator Laboratory, we report a measurement of $R(D)$ and $R(D^{*})$ using semileptonic $B$-tagging and leptonic $\tau$ decays.
We present a new approach to jet definition, alternative to clustering methods such as the anti-kT scheme that exploit kinematic data directly. Instead, the new method uses kinematic information to represent the particles in a multidimensional space, as in spectral clustering. After confirming its infrared (IR) safety, we compare its performance in analysing $gg \rightarrow H_{125\,GeV} \rightarrow H_{40\,GeV} H_{40\,GeV} \rightarrow \bar{b}b\bar{b}b$, $gg \rightarrow H_{500\,GeV} \rightarrow H_{125\,GeV}H_{125\,GeV} \rightarrow \bar{b}b\bar{b}b$ and $gg, q\bar{q} \rightarrow t\bar{t} \rightarrow \bar{b}b W^+ W^- \rightarrow \bar{b}b jj \ell\nu_\ell$ events from Monte Carlo (MC) samples, specifically in reconstructing the relevant final states, to that of the anti-kT algorithm. Finally, we show that the results for spectral clustering are obtained without any change in the parameter settings of the algorithm, unlike the anti-kT case, which requires the cone size to be adjusted to the physics process under study.
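To make the idea concrete, here is a toy spectral-clustering pass over particle positions in the (η, φ) plane; it is a generic textbook version (Gaussian affinity, graph Laplacian, sign of the Fiedler vector), not the paper's actual algorithm or parameter choices, and it ignores φ periodicity.

```python
import numpy as np
from scipy.linalg import eigh

# Two toy "jets": blobs of particles in the (eta, phi) plane.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([0.0, 0.0], 0.1, size=(20, 2)),
                 rng.normal([1.2, 2.0], 0.1, size=(20, 2))])

# Gaussian affinity from pairwise angular distances, then the
# unnormalised graph Laplacian L = D - W.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.3 ** 2))
L = np.diag(W.sum(axis=1)) - W

# The eigenvectors of L encode the cluster structure: for a two-way
# split, the sign of the second-smallest ("Fiedler") eigenvector
# labels the two groups of particles.
_, vecs = eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)   # first 20 particles in one cluster, last 20 in the other
```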
Multiplicity is one of the simplest experimental observables in collider events, whose importance stretches from calibration to advanced tagging techniques. We introduce a new (sub)jet multiplicity, the Lund multiplicity, for lepton and hadron collisions. It probes the full multiple branching structure of QCD and is calculable in perturbation theory. We introduce a formalism allowing us to calculate the average Lund and Cambridge multiplicities to all orders, reaching next-to-next-to double logarithmic (NNDL) accuracy in $e^+e^-$ collisions, an order higher than the existing state-of-the-art, and next-to-double logarithmic accuracy (NDL) in hadronic collisions. Matching our resummed calculation to the NLO result, we find a reduction of theoretical uncertainties by up to 50% compared to the previous state-of-the-art. Adding hadronisation corrections obtained through Monte Carlo simulations, we also show a good agreement with existing Cambridge multiplicity data.
Discriminating quark and gluon jets is a long-standing topic in collider phenomenology. In this paper, we address this question using the Lund jet plane substructure technique introduced in recent years. We present two complementary approaches: one where the quark/gluon likelihood ratio is computed analytically, to single-logarithmic accuracy, in perturbative QCD, and one where the Lund declusterings are used to train a neural network. For both approaches, we either consider only the primary Lund plane or the full clustering tree. The analytic and machine-learning discriminants are shown to be equivalent on a toy event sample resumming exactly leading collinear single logarithms, where the analytic calculation corresponds to the exact likelihood ratio. On a full Monte Carlo event sample, both approaches show good discriminating power, with the machine-learning models usually being superior. We carry out a study in the asymptotic limit of large logarithms, allowing us to gain confidence that this superior performance comes from effects that are subleading in our analytic approach. We then compare our approach to other quark-gluon discriminants in the literature. Finally, we study the resilience of our quark-gluon discriminants against the details of the event sample, and observe that the analytic and machine-learning approaches show similar behaviour.
Reference:
[1] F. Dreyer, G. Soyez, A. Takacs, arXiv:2112.09140
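Both the analytic likelihood and the network inputs above are built from the same primary Lund-plane coordinates; a minimal sketch of the coordinate computation, with a hand-made declustering history standing in for the real Cambridge/Aachen one:

```python
import numpy as np

# Each primary declustering of the jet into a harder and a softer branch
# contributes one Lund-plane point (ln(1/Delta), ln(kt)), with
# kt = pT_soft * Delta; the resulting set of points is the input to the
# likelihood-ratio or machine-learning discriminants.

def lund_points(declusterings):
    """declusterings: (pT_hard, pT_soft, Delta) per primary C/A step."""
    return [(np.log(1.0 / d), np.log(pt_s * d))
            for _, pt_s, d in declusterings]

steps = [(80.0, 20.0, 0.30), (70.0, 10.0, 0.10), (65.0, 5.0, 0.02)]
for x, y in lund_points(steps):
    print(f"ln(1/Delta) = {x:5.2f}, ln(kt) = {y:6.2f}")
```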
The identification of the origin of hadronic jets is a key aspect in particle physics at hadron colliders. In this talk I will discuss the separation of hadronic jets that contain bottom quarks (b-jets) from jets featuring only light partons using a newly developed approach reported in arXiv:2202.05082 [hep-ph].
This approach exploits QCD-inspired jet substructure observables, such as one-dimensional jet angularities and the two-dimensional primary Lund plane, as inputs to modern machine-learning algorithms to efficiently separate b-jets from light ones. In order to test our tagging procedure, we consider simulated events where a Z boson is produced in association with jets and show that, using jet angularities as an input for a deep neural network, as well as using images obtained from the primary Lund jet plane as input to a convolutional neural network, one can achieve tagging accuracy comparable with the accuracy of track-based taggers used by the LHC experiments. We argue that the complementary usage of track-based taggers together with the ones based on QCD-inspired observables could improve b-tagging accuracy.
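For reference, the angularity inputs themselves are one-line sums over jet constituents; the sketch below uses an assumed IRC-safe convention (momentum-fraction weight, angular exponent β) and toy constituents.

```python
import numpy as np

# Jet angularities lambda_beta = sum_i z_i * (DeltaR_i / R)**beta, with
# z_i the constituent pT fraction: small beta emphasises wide-angle
# radiation, large beta the collinear core, which is what gives the
# family its discriminating power between jet flavours.

def angularity(pt, dR, beta, R=0.4):
    z = pt / pt.sum()
    return float(np.sum(z * (dR / R) ** beta))

pt = np.array([60.0, 25.0, 10.0, 5.0])     # toy constituent pT, GeV
dR = np.array([0.02, 0.10, 0.25, 0.35])    # distances to the jet axis

for beta in (0.5, 1.0, 2.0):
    print(f"lambda_{beta} = {angularity(pt, dR, beta):.3f}")
```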
Monte Carlo event generators, including their core parton-shower component, are crucial for a wide range of physics applications at colliders. However, the current "leading logarithmic" (LL) accuracy of parton showers is increasingly becoming a limiting factor in precision applications. This talk presents new "PanScales" dipole showers for hadron collisions, focusing on the physical characteristics of the showers that are required to achieve NLL accuracy. I will then demonstrate that the implementations of the new showers reproduce NLL accuracy as expected, with explicit comparisons to all-order resummation results across a wide range of observables in Drell-Yan and gluon-fusion Higgs production, including the Drell-Yan and Higgs boson transverse momentum distributions, and jet veto acceptances.
A parton shower model is presented that is explainable and physics-aware, and trainable solely on the energy-momentum vectors of final-state particles [1]. We show that it is possible to use such a white-box AI approach to train a generative adversarial network (GAN) from a DGLAP-based parton shower Monte Carlo, where the inferred mechanisms can be fully understood by a human physicist. For the first time, we demonstrate how the resulting network not only reproduces the final distribution of particles, but is also able to deduce the underlying branching mechanism, including the Altarelli-Parisi splitting function, the ordering of the shower, and the scaling behavior. While our proof of concept is focused on the perturbative physics of the parton shower, we see broad applicability of this approach to investigate areas of QCD that are difficult to address from first principles. This includes nonperturbative and collective effects, factorization breaking and modification of the parton shower in heavy-ion settings, and electron-nucleus collisions.
[1] Y. Lai, D. Neill, M. Płoskoń, F. Ringer, arXiv:2012.06582 [hep-ph], accepted by Phys. Lett. B
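To indicate the kind of branching mechanism such a network has to rediscover, here is a toy sampler for the q → qg Altarelli-Parisi kernel using the standard overestimate-and-reject trick; this is a pedagogical sketch, not the model of Ref. [1].

```python
import numpy as np

# Sample z from P(z) ~ (1 + z^2)/(1 - z) on [z_min, z_max] using the
# overestimate g(z) = 2/(1 - z): invert the integrated envelope, then
# accept with probability P/g = (1 + z^2)/2 <= 1.

def sample_z(rng, z_min=0.01, z_max=0.99):
    while True:
        r = rng.random()
        z = 1.0 - (1.0 - z_min) * ((1.0 - z_max) / (1.0 - z_min)) ** r
        if rng.random() < 0.5 * (1.0 + z * z):
            return z

rng = np.random.default_rng(42)
print(np.round([sample_z(rng) for _ in range(5)], 3))
```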
Higher-order splitting kernels comprise an essential ingredient for enhancing the logarithmic accuracy of parton showers. Beyond NLL, collinear dynamics of quark and gluon splitting at NLO is encoded in the triple-collinear splitting functions. This talk provides latest insights into various ingredients that enter the construction of higher-order parton showers. First, I will show that suitable integrals of the splitting functions, plus virtual corrections, furnishes a solid understanding of the scale of the coupling beyond the soft limit (CMW scheme). Second, I will establish a relationship between the splitting functions and the familiar NLO DGLAP kernels. Third, I will discuss the construction of a differential version of the coefficient $B_2$, which controls the next-to-next-to-leading logarithm in the hard-collinear limit. I will show the general structure of the coefficient $B_2$ and how it arises in QCD-based approaches of resummation.
The top-quark mass is one of the key fundamental parameters of the Standard Model that must be determined experimentally. Its value has an important effect on many precision measurements and tests of the Standard Model. The Tevatron and LHC experiments have developed an extensive program to determine the top quark mass using a variety of methods. In this contribution, the top quark mass measurements by the ATLAS experiment are reviewed. These include measurements in two broad categories, the direct measurements, where the mass is determined from a comparison with Monte Carlo templates, and determinations that compare differential cross-section measurements to first-principle calculations. Individual measurements in both categories have yielded mass measurements with sub-GeV precision, and combined results including several measurements approach the 500 MeV mark. The most recent measurements are presented, as well as a new study on the interpretation of the MC mass parameter and the deployment of a new MC model.
In this talk, we will present NNLO QCD predictions for several differential distributions of B-hadrons in top-pair events at the LHC. In an extension of previous work, the decay of the produced B-hadron to a muon or a J/$\psi$ meson has been incorporated, allowing us to make predictions for distributions involving those decay products as well. Additionally, a new set of B-hadron fragmentation functions has been obtained, which features reduced uncertainties and can be used consistently within the perturbative fragmentation function formalism employed in our calculation. Among other things, our predictions allow for a precise determination of the top-quark mass. The results also offer positive prospects for extracting heavy-flavour fragmentation functions from LHC data.
Studies of top quark properties using data collected by the CMS experiment are presented, including direct measurements of properties and extractions from differential cross section measurements. The latest results on the top quark mass, using multiple kinematic distributions in a likelihood technique, as well as the top quark pole mass derived from the tt+jet cross section, will be discussed.
Due to its high mass, the top quark decays before top-flavoured hadrons can be formed. This feature gives experimental access to the top quark polarization and production asymmetries. The large top quark sample moreover enables measurements of other properties, such as the W-boson branching ratios and helicity, and the fragmentation functions of bottom quarks. In this contribution, recent measurements of top quark properties are presented, including in particular a first analysis of the energy asymmetry in ttbar production and a measurement of top-quark polarization in single-top-quark production.
The LHC has unlocked a previously unexplored energy regime, and dedicated techniques have been developed to reconstruct and identify boosted top quarks. Measurements of boosted top quark production test the Standard Model in a region with strongly enhanced sensitivity to high-scale new phenomena. In this contribution, several new ATLAS measurements of the differential cross section and of asymmetries in this extreme kinematic regime are presented. The measurements are based on the complete 140/fb data set of proton-proton collisions at 13 TeV collected during Run 2 of the LHC. They are interpreted within the Standard Model Effective Field Theory, yielding stringent bounds on the Wilson coefficients of two-light-quark-two-quark operators.
Owing to the genuinely relativistic behavior, the exotic character of the interactions and symmetries involved, and the fundamental nature of the systems studied, high-energy colliders are attractive settings for the experimental study of fundamental aspects of quantum mechanics. We propose the detection of entanglement between the spins of top-antitop-quark pairs at the LHC, representing both the first proposal of entanglement detection in a pair of quarks and the observation of entanglement at the highest energy scale so far. We show that entanglement can be observed by a direct measurement of the angular separation between the leptons arising from the decay of the top-antitop pair. The detection can be achieved with high statistical significance using the data recorded during Run 2 at the LHC. In addition, we develop a simple protocol for the quantum tomography of the top-antitop pair. This experimental reconstruction of the quantum state provides a new tool to test theoretical predictions of New Physics beyond the Standard Model. Our work explicitly implements canonical experimental techniques of quantum information in a two-qubit high-energy system, paving the way for high-energy colliders to also be used to study quantum information theory.
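For orientation, the entanglement witness used in this kind of proposal can be stated compactly (a standard result for the $t\bar{t}$ two-qubit system, consistent with the measurement strategy described above):

$$\frac{1}{\sigma}\frac{d\sigma}{d\cos\varphi} = \frac{1}{2}\left(1 - D\cos\varphi\right), \qquad D = \frac{{\rm tr}\,C}{3},$$

where $C$ is the $t\bar{t}$ spin-correlation matrix and $\varphi$ is the opening angle between the two decay leptons, each taken in its parent top rest frame; observing $D < -1/3$ certifies an entangled spin state.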
Quantum information observables, such as entanglement measures, provide a powerful way to characterize the properties of quantum states. In this talk, I propose to use them to probe the structure of fundamental interactions and to search for new physics at high energy.
Inspired by recent proposals to measure the entanglement of top quark pairs produced at the LHC, I examine how higher-dimensional operators in the SMEFT framework modify the Standard Model expectations. The focus is on two regions of phase space where the Standard Model produces maximally entangled states: at threshold and in the high-energy limit. A non-trivial pattern of effects is unveiled; in general, higher-dimensional operators are found to lower the entanglement predicted in the Standard Model.
Several physics scenarios beyond the Standard Model predict the existence of new particles that can subsequently decay into a pair of Higgs bosons. This talk summarises ATLAS searches for resonant HH production with LHC Run 2 data. Several final states are considered, arising from various combinations of Higgs boson decays.
Many new physics models predict the existence of new particles decaying into scalar or vector bosons making these important signatures in the search for new physics. Searches for such resonances have been performed in final states with different numbers of leptons. This talk summarises ATLAS searches for diboson resonances with LHC Run 2 data in fully- and semi-leptonic final states.
A summary of searches for heavy resonances with masses exceeding 1 TeV decaying into pairs or triplets of bosons is presented, performed on data from LHC pp collisions at $\sqrt{s}$ = 13 TeV collected with the CMS detector during 2016, 2017, and 2018. The common feature of these analyses is the boosted topology: the decay products of the considered bosons (the electroweak W and Z bosons as well as the Higgs boson) are expected to be highly energetic and close in angle, making the identification of the quarks and leptons in the final state non-trivial. The exploitation of jet substructure techniques increases the sensitivity of the searches in which at least one boson decays hadronically. Various background estimation techniques are adopted, based on data-MC hybrid approaches or relying only on control regions in data. Results are interpreted in the context of the Warped Extra Dimension and Heavy Vector Triplet theoretical models, two possible scenarios beyond the standard model.
We have analyzed the ATLAS sample of 4-lepton events in the invariant-mass region 620$\div$740 GeV. We argue that, from these data, one can obtain a clear signal for the existence of a new scalar resonance. Looking for its possible interpretation, we have compared the data with the hypothetical second resonance of the Higgs field that has been recently proposed, which would couple to longitudinal W's with the same typical strength as the low-mass state at $125$ GeV. On the one hand, the observed mass $(M_H)^{\rm exp}=660\div 680$ GeV fits well with the theoretical range $(M_H)^{\rm theor} = 690 \pm 10 ~({\rm stat}) \pm 20 ~({\rm sys})~ {\rm GeV}$. On the other hand, the ATLAS data reproduce to high accuracy the expected correlation between the resonant peak cross section $\sigma_R(pp\to H \to 4l)$ and the ratio $\gamma_H=\Gamma_H/M_H$, which should mainly be determined by the lower mass $m_h=$ 125 GeV. This supports the idea that $m_h$ and the new $(M_H)^{\rm exp}$ could really represent two different excitations of the same Higgs field. The analogous available CMS results will also be discussed.
Many extensions to the Standard Model predict new particles decaying into two bosons (W, Z, photon, or Higgs bosons) making these important signatures in the search for new physics. Searches for such diboson resonances have been performed in different final states and novel analysis techniques, including unsupervised learning, are also used to extract new features from the data. This talk summarises such recent ATLAS searches with Run 2 data collected at the LHC and explains the experimental methods used, including vector- and Higgs-boson-tagging techniques.
In various models beyond the standard model, the Higgs sector is extended, and some new scalar bosons are introduced. One of the interesting candidates is the doubly charged scalar boson from the isospin doublet with Y=3/2. It is often introduced in models for the radiative generation of the neutrino mass. However, its phenomenology had not been fully investigated. We have investigated how to probe them at the future HL-LHC. We have found that it would be possible to observe the signal of the doubly charged scalars by using appropriate kinematical cuts unless their masses are too large. In this talk, I will introduce the results of our analyses. This talk is based on K. Enomoto, S. Kanemura, K. Katayama, Phys. Rev. D104 (2021) 3, 035040.
In this work, we derive lower mass bounds on the $Z^\prime$ gauge boson based on dilepton data from the LHC at 13 TeV center-of-mass energy, and forecast the sensitivity of the High-Luminosity LHC with $L=3000~{\rm fb}^{-1}$, the High-Energy LHC with $\sqrt{s}=27$~TeV, and the Future Circular Collider with $\sqrt{s}=100$~TeV. We take into account the presence of exotic and invisible decays of the $Z^\prime$ gauge boson to obtain a more conservative and robust limit, in contrast to previous studies. We investigate the impact of these new decay channels for several benchmark scenarios in the scope of two different 3-3-1 models. We find that, in the most constraining cases, the LHC with $139~{\rm fb}^{-1}$ can impose $m_{Z^{\prime}}>4$~TeV. Moreover, we forecast the HL-LHC, HE-LHC, and FCC-hh reach, deriving the projected bounds $m_{Z^{\prime}}>5.8$~TeV, $m_{Z^{\prime}}>9.9$~TeV, and $m_{Z^{\prime}}>27$~TeV, respectively. Lastly, we put our findings into perspective with dark matter searches to show the region of parameter space where a dark matter candidate with the right relic density is possible.
Several recent advancements in ROOT's analysis interfaces enable the development of high-performance, highly parallel analyses in C++ and Python, without requiring expert knowledge of multi-thread parallelization or ROOT I/O. ROOT's RDataFrame is a modern interface for data processing that provides a natural entry point to many of these advancements. Power users can extend existing functionality while remaining decoupled from most of the underlying complexity, thanks to carefully designed customization points. This contribution presents the latest improvements in the performance and ergonomics of modern ROOT analysis interfaces, shows how real-world analyses and frameworks make use of these features, and provides a glimpse of what is to come. Topics include the interoperability of C++ and Python code, scaling up execution from a laptop to large computing clusters with minimal code changes, machine learning inference, and user-friendly handling of systematic variations.
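As a flavour of the interface, a minimal RDataFrame analysis in Python might look like the following sketch (the tree, file, and branch names are hypothetical):

    import ROOT

    ROOT.EnableImplicitMT()  # process the dataset on all available cores

    # Declarative, lazily-evaluated analysis chain
    df = ROOT.RDataFrame("Events", "sample.root")
    h = (df.Filter("nMuon == 2", "two muons")
           .Define("pt_sum", "Muon_pt[0] + Muon_pt[1]")
           .Histo1D(("pt_sum", ";p_{T} sum [GeV];Events", 100, 0.0, 200.0),
                    "pt_sum"))

    h.Draw()  # the single event loop runs here, multi-threaded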
Deep neural networks are rapidly gaining popularity in physics research. While Python-based deep learning frameworks for training models in GPU environments develop and mature, a good solution for easily integrating the inference of trained models into conventional C++- and CPU-based scientific computing workflows has been lacking. We report the latest developments in ROOT/TMVA that aim to address this problem. This new framework takes externally trained deep learning models in ONNX format, or in Keras and PyTorch native formats, and emits C++ code that can be easily included and invoked for fast inference of the model, with only a minimal dependency on linear algebra libraries. We provide an overview of this solution for conducting inference in C++ production environments and discuss the technical details. More importantly, we present the latest and significant updates of this framework in supporting commonly used deep learning architectures, such as convolutional and recurrent networks. We also show how the inference code can be integrated into users' analysis code together with tools such as ROOT RDataFrame. We demonstrate its current capabilities with benchmarks that evaluate popular deep learning models used by LHC experiments against widely used inference tools such as ONNXRuntime.
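The framework described here is available in recent ROOT releases as TMVA SOFIE; a minimal sketch of the code-generation workflow in Python follows (the model file name is hypothetical, and the exact API may vary between ROOT versions):

    import ROOT

    # Parse an externally trained ONNX model into SOFIE's internal representation
    parser = ROOT.TMVA.Experimental.SOFIE.RModelParser_ONNX()
    model = parser.Parse("model.onnx")  # hypothetical file name

    # Emit standalone C++ inference code with minimal dependencies
    model.Generate()
    model.OutputGenerated("model.hxx")  # header to #include in C++ analysis code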
RooFit is a toolkit for statistical modeling and fitting; together with RooStats it is used for measurements and statistical tests by most experiments in particle physics, particularly the LHC experiments. As the LHC program progresses, physics analysis becomes more computationally demanding, so the focus of RooFit development in recent years has been performance optimization.
Recently, much of RooFit's core functionality has been re-implemented with several performance optimizations to speed up physics analyses. Highlights include model evaluation on GPUs or with CPU vector instructions, parallelization of the gradient computation over multiple processes, and various optimizations targeting the workflows of major RooFit users. Additionally, the recent RooFit release includes new PyROOT-specific features that improve interoperability with the scientific Python ecosystem, and it supports JSON export and import of the RooWorkspace.
This talk will give an overview of these speed and usability improvements and explain how to obtain possible speedups in statistical analyses. Furthermore, it will report on ongoing RooFit developments, which primarily aim at the automatic differentiation of RooFit models and support for analytical gradients in the minimization. All gradient-related efforts are planned to be generalized to second derivatives, which are used for uncertainty analysis and in some minimization algorithms.
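For illustration, a minimal RooFit fit in Python, with the vectorized batch evaluation mentioned above enabled through a command argument (the option name has changed between recent ROOT versions, so treat this as a sketch):

    import ROOT

    # A simple Gaussian model in one observable
    x = ROOT.RooRealVar("x", "x", -10, 10)
    mean = ROOT.RooRealVar("mean", "mean", 0, -5, 5)
    sigma = ROOT.RooRealVar("sigma", "sigma", 1, 0.1, 5)
    gauss = ROOT.RooGaussian("gauss", "gauss", x, mean, sigma)

    # Generate a toy dataset and fit it back with batched evaluation
    data = gauss.generate(ROOT.RooArgSet(x), 10000)
    result = gauss.fitTo(data, ROOT.RooFit.BatchMode(True), ROOT.RooFit.Save())
    result.Print()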
The HistFactory p.d.f. template is per se independent of its implementation in ROOT, and it is useful to be able to run statistical analyses outside of the ROOT, RooFit, and RooStats framework. pyhf is a pure-Python implementation of that statistical model for multi-bin histogram-based analysis; its interval estimation is based on the asymptotic formulas of "Asymptotic formulae for likelihood-based tests of new physics" [arXiv:1007.1727]. pyhf supports modern computational-graph libraries such as TensorFlow, PyTorch, and JAX in order to make use of features such as automatic differentiation and GPU acceleration. In addition, pyhf's JSON serialization specification for HistFactory models has been used to publish 18 full probability models from published ATLAS collaboration analyses to HEPData.
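As a flavour of the library, a one-bin signal-plus-background hypothesis test in pyhf (the event counts are made up for illustration):

    import pyhf

    # One-bin model: 5 expected signal, 50 expected background with a 7-event uncertainty
    model = pyhf.simplemodels.uncorrelated_background(
        signal=[5.0], bkg=[50.0], bkg_uncertainty=[7.0]
    )
    observations = [53.0] + model.config.auxdata

    # Observed CLs for the mu = 1 signal-strength hypothesis, asymptotic formulas
    cls_obs = pyhf.infer.hypotest(1.0, observations, model, test_stat="qtilde")
    print(f"CLs_obs = {cls_obs:.3f}")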
Event generators simulate particle interactions using Monte Carlo processes, providing the primary connection between experiment and theory in experimental high energy physics. They make up the first step in the simulation workflow of collider experiments, representing 10-20% of the annual WLCG usage for the ATLAS and CMS experiments. With computing architectures becoming more heterogeneous, it is important to ensure that these key software frameworks can run on future systems, large and small. Progress on advancing the Madgraph_aMC@NLO event generator to utilize hybrid architectures, i.e. CPUs with accelerators, will be discussed. The leading-order code-generation toolkit has been expanded to generate matrix-element calculations using C++ vector instructions and in CUDA, Kokkos, Alpaka, and SYCL. Performance will be reported in terms of matrix-element calculations per unit time on NVidia, Intel, and AMD devices.
Machine learning is a promising field to augment and potentially replace parts of the event reconstruction of high energy physics experiments, partly because many machine learning algorithms offer relatively easy portability to heterogeneous hardware and could thus play an important role in controlling the computing budget of future experiments. In addition, the capability of machine-learning-based approaches to tackle non-linear problems can bring performance improvements. The track reconstruction problem in particular has been addressed in the past with several machine-learning-based attempts, largely facilitated by the two high-profile TrackML machine learning challenges. The Exa.TrkX project has developed a track-finding pipeline based on graph neural networks that has shown good performance when applied to the TrackML detector. We will present the technical integration of the Exa.TrkX pipeline into the framework of the ACTS (A Common Tracking Software) project; as far as we know, this is the first integration of a GNN pipeline into a production tracking framework. We will further show our efforts to apply the pipeline to the OpenDataDetector, a model of a more realistic detector that supersedes the TrackML detector. The tracking performance in this setup will be compared with that of the ACTS standard track finder, the Combinatorial Kalman Filter. Alongside this, we will present other developments in the context of building and optimizing a full-chain example using Exa.TrkX in ACTS.
The determination of charged particle trajectories in collisions at the CERN Large Hadron Collider (LHC) is an important but challenging problem, especially in the high-interaction-density conditions expected during the future high-luminosity phase of the LHC (HL-LHC). Graph neural networks (GNNs) have been successfully applied to this problem by representing tracker hits as nodes in a graph, creating graph edges for possible track segments, classifying the edges as true or false, and clustering the classified edges to produce track candidates. Building on our recent work developing GNNs for tracking, we present additional studies aimed at identifying the most accurate and efficient GNN tracking pipeline. In particular, we compare two different edge-classifying GNNs, explore hyperparameter optimization of the models, evaluate the impact of different graph construction methods on overall track-finding performance, and implement different track-fitting mechanisms for GNN-identified track candidates. We also compare the performance of these GNN models to current Kalman-filter-based tracking methods.
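To make the edge-classification step concrete, here is a deliberately minimal PyTorch sketch of the idea (not the actual pipeline of either contribution above); the hit features, candidate edge list, and dimensions are toy assumptions:

    import torch
    import torch.nn as nn

    class EdgeClassifier(nn.Module):
        """Score candidate track segments (edges) between tracker hits (nodes)."""
        def __init__(self, node_dim=3, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(node_dim, hidden), nn.ReLU())
            self.edge_mlp = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )

        def forward(self, x, edge_index):
            h = self.encoder(x)                     # embed each hit
            src, dst = edge_index                   # (2, n_edges) hit indices
            e = torch.cat([h[src], h[dst]], dim=1)  # concatenated endpoint embeddings
            return torch.sigmoid(self.edge_mlp(e)).squeeze(-1)

    hits = torch.randn(100, 3)                # toy hit coordinates
    edges = torch.randint(0, 100, (2, 500))   # toy candidate segments
    scores = EdgeClassifier()(hits, edges)    # threshold, then cluster into tracks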
The General Antiparticle Spectrometer (GAPS) is the first experiment optimized to identify low-energy (<0.25 GeV/n) cosmic antinuclei, in particular antideuterons from dark matter annihilation or decay. Using a novel detection approach which relies on exotic atom formation and decay, the GAPS program will deliver an unprecedented sensitivity to cosmic antideuterons, an essentially background-free signature of various dark matter models, as well as a high-statistics antiproton spectrum in an unexplored energy range and leading sensitivity to cosmic antihelium. GAPS is currently under integration and preparing for the first Antarctic balloon flight while two follow-up flights are planned.
In this contribution, we will present GAPS custom-developed instrument technology, including large-area silicon detectors and a large-acceptance time-of-flight system, as well as detailed simulation studies, while focusing on the anticipated scientific impact of the GAPS program on cosmic-ray searches for dark matter, and the path forward to the initial flight.
Space: the final frontier for antinuclei physics. There, antinucleosynthesis models already tested on the bench of hadronic colliders and particle physics experiments are put at work to crack one of the biggest problems of modern physics: the existence and nature of dark matter.
In fact, the observation of an antinucleus in cosmic rays would most probably mean a breakthrough in searches for Dark Matter. However, to correctly interpret future results, precise knowledge of both the antinuclei production mechanism and their nuclear inelastic cross sections is needed.
The ALICE collaboration has already investigated in detail the antinucleosynthesis models in small and large collision systems at the LHC, and has recently performed several measurements of antideuteron, $^3\overline{\text{H}}$ and $^3\overline{\text{He}}$ inelastic cross sections, providing the first experimental information of this kind.
In this talk, the final results on antideuteron and $^3\overline{\text{He}}$ inelastic cross sections and the new results on $^3\overline{\text{H}}$ inelastic cross sections are discussed, as well as how, thanks to them, it is possible to determine for the first time the transparency of the Galaxy to antinuclei stemming from dark matter and from Standard Model collisions.
I describe a reanalysis of data sets previously found to harbor evidence for an unidentified X-ray line at 3.5 keV, in order to quantify the robustness of that evidence. The 3.5 keV line is intriguing in part because of possible connections to dark matter. We analyze observations from the XMM-Newton and Chandra telescopes and investigate the robustness of the evidence for the 3.5 keV line to variations in the analysis framework, as well as to numerical error in the chi-square minimization process. For example, we consider narrowing the energy band of the analysis in order to minimize mismodeling effects. The results of our analyses indicate that many of the original 3.5 keV studies (i) did not have fully converged statistical analyses, and (ii) were subject to large systematic uncertainties from background mismodeling. Accounting for these issues, we find no statistically significant evidence for a 3.5 keV line in any X-ray data set.
The Milky Way galactic center has been broadly explored in searches for indirect dark matter (DM) signals. However, younger galaxies, such as Centaurus A, are expected to host a much higher DM component due to the formation of a density spike, which would have survived to date, contrary to the case of our Galaxy.
In this talk, I will present indirect photon signatures of leptophilic DM from Centaurus A. I will consider a model in which DM is a Majorana fermion that interacts with right-handed electrons via a scalar mediator. Particular stress is placed on the possibility of detecting circularly polarised signals from the interaction of DM with high-energy electrons of the active galactic nucleus jet, finding that the degree of polarization can reach 90-100%. I will estimate the sensitivity required for Fermi-LAT to detect this signal, and I will derive constraints based on the self-annihilation of the DM candidate. Notably, the bounds found on the average annihilation cross section are 7 orders of magnitude stronger than those from measurements of the Galactic Center.
Finally, since the origin of the photons in the GeV-TeV range from Centaurus A is not completely clear and an exotic origin is compatible with the observations, I will show that this excess could be explained by signals from DM annihilation.
Neutron stars harbour matter under extreme conditions, providing a unique testing ground for fundamental interactions. Dark matter can be captured by neutron stars via scattering, in which kinetic energy is transferred to the star. This can have a number of observational consequences, such as the heating of old neutron stars to infra-red temperatures. Previous treatments of the capture process have employed various approximations or simplifications. We present here an improved treatment of dark matter capture, valid for a wide dark matter mass range, that correctly incorporates all relevant physical effects: gravitational focusing, a fully relativistic scattering treatment, Pauli blocking, neutron star opacity, and multi-scattering effects. We provide general expressions that enable the exact capture rate to be calculated numerically, and derive simplified expressions, valid for particular interaction types or mass regimes, that greatly increase the computational efficiency. Our formalism is applicable to the scattering of dark matter from any neutron star constituents, and to the capture of dark matter in other compact objects.
We apply these results to the scattering of dark matter from neutrons, protons, and leptonic targets, as well as exotic baryons. For leptonic targets, a relativistic description is essential. Regarding baryons, we outline two important effects that are missing from most evaluations of the dark matter capture rate in neutron stars. First, as dark matter scattering with nucleons in the star involves large momentum transfer, nucleon structure must be taken into account via the momentum dependence of the hadronic form factors. Second, due to the high density of neutron star matter, nucleon interactions should be accounted for, rather than modeling the nucleons as an ideal Fermi gas. Properly incorporating these effects is found to suppress the dark matter capture rate by up to three orders of magnitude. We find that the potential neutron star sensitivity to DM-lepton scattering cross sections greatly exceeds that of electron-recoil experiments, particularly in the sub-GeV regime, with a sensitivity to sub-MeV DM well beyond the reach of future terrestrial experiments. We also present preliminary results for DM-baryon scattering in neutron stars, where the sensitivity is expected to greatly exceed current direct detection experiments over the whole mass range for the spin-dependent case, and in the low- and high-mass ranges for the spin-independent case.
Regarding white dwarfs, for dark matter-nucleon scattering we find that white dwarfs can probe the sub-GeV mass range inaccessible to direct detection searches, with the low-mass reach limited only by evaporation, and can be competitive with direct detection in the 1 GeV to 10 TeV range. White dwarf limits on dark matter-electron scattering are found to outperform current electron-recoil experiments over the full mass range considered, and extend well beyond the $\sim$10 GeV mass regime where the sensitivity of electron-recoil experiments is reduced.
We discuss indirect searches for sub-GeV dark matter (DM) that annihilates directly to a neutrino pair or a pair of new bosons subsequently decaying to neutrinos. The neutrino spectrum from the DM annihilation is monochromatic in the former process and a polynomial shape in the latter case. As a benchmark scenario, we consider a gauged U(1)$_{L_\mu-L_\tau}$ model under which a DM field is charged, and evaluate the sensitivity at the Super-Kamiokande and future Hyper-Kamiokande experiments. We also discuss the interplay between the muon g-2 anomaly and DM physics.
The next generation of collider detectors will make full use of Particle Flow Algorithms, requiring high-precision tracking and full imaging calorimeters. The latter, thanks to granularity improvements by two to three orders of magnitude compared to existing devices, have been developed during the past 15 years by the CALICE collaboration and are now reaching maturity. The state of the art and the remaining challenges will be presented for all readout types investigated by CALICE: silicon diode and scintillator for electromagnetic calorimetry, and gaseous, semi-digital readout and scintillator with SiPM readout for hadronic calorimetry. We will describe the commissioning, including beam test results, of large-scale technological prototypes and results on energy resolution, linearity, and pattern recognition. New results obtained from 2021 and 2022 beam tests with a 44,000-readout-cell technological prototype of a standalone highly granular silicon-tungsten electromagnetic calorimeter, alone and combined with the CALICE analogue hadron calorimeter (SiPM on tile), will be featured.
Prototypes of electromagnetic and hadronic imaging calorimeters developed and operated by the CALICE collaboration provide an unprecedented wealth of highly granular data of hadronic showers for a variety of active sensor elements and different absorber materials. We will discuss detailed measurements of the spatial and the time structure of hadronic showers to characterize the different stages of hadronic cascades in the calorimeters, which are then confronted with GEANT4-based simulations using a variety of hadronic physics models. These studies are performed on the two absorber materials, steel and tungsten, used in the prototypes. The high granularity of the detectors is exploited in the reconstruction of hadronic energy, both in individual detectors and combined electromagnetic and hadronic systems, making use of software compensation and semi-digital energy reconstruction. The results include new simulation studies that predict the reliable operation of granular calorimeters.
We will report on the performance of these reconstruction techniques for different electromagnetic and hadronic calorimeters, with silicon, scintillator and gaseous active elements.
The highly granular imaging calorimeters developed and operated by the CALICE collaboration provide a fertile testing ground for the application of innovative simulation and reconstruction techniques. Firstly, we show how granularity and the application of multivariate analysis algorithms enable the separation of close-by particles and particle identification. Secondly, we outline how machine learning techniques are applied to CALICE data to highlight shower structure quantitatively, to the CALICE simulation framework for the generation of events, or to both, in order to generate original (e.g. hardly measurable) samples from existing ones.
Based on the particle-flow paradigm, a novel hadronic calorimeter (HCAL) with highly granular scintillating glass tiles is proposed to address major challenges from precision measurements of jets at future lepton collider experiments, such as the Circular Electron Positron Collider (CEPC). Compared with the plastic scintillator option, the scintillating glass HCAL design aims for further significant improvements of the hadronic energy resolution as well as the particle-flow performance, especially in the low energy region (typically below 10 GeV for major jet components), with a notable increase of the energy sampling fraction due to its high density.
A Geant4 full-simulation model has been established to study the hadronic energy resolution for single hadrons and the impact of key parameters of the scintillating glass (e.g. density, doping, intrinsic light yield, energy threshold). The physics potential for benchmarks with jets in the final state is also being evaluated using a Particle-Flow Algorithm (PFA) named "ArborPFA".
In parallel, the development of new scintillating glass materials has been ongoing since 2021 within a collaboration of research institutions and companies in China. The development goals focus on high light yield, high density, good transparency to scintillation light, and cost-effectiveness. First batches of small-scale glass samples have been produced, followed by comprehensive characterisations using dedicated experimental setups to extract key properties (e.g. intrinsic light yield, emission and transmission spectra, scintillation decay times), which provide crucial inputs to the HCAL design and optimisation.
For the highly granular HCAL with scintillating glass tiles, highlights of the expected detector performance with single hadrons and jets will be presented in the contribution. In addition, latest developments of scintillating glass and the measurements will also be included.
This contribution will present a resource-efficient FPGA-based neural network regression model developed for potential applications in the future hardware muon trigger system of the ATLAS experiment at the Large Hadron Collider (LHC). Our model uses neural network regression to significantly improve the rejection of the dominant source of background events in the central detector region, muon candidates with low transverse momenta. Effective real-time selection of muon candidates is a cornerstone of the ATLAS physics programme. An entirely new FPGA-based hardware muon trigger system, to be installed in 2025-2026, will process the full muon detector data within a 10 microsecond latency. The large FPGA devices planned for this upgrade will have sufficient resources to allow the deployment of machine learning methods for improving the identification of muon candidates and for searching for new exotic particles. We developed a resource-efficient implementation of the neural network regression model in an FPGA hardware description language, carefully optimised to minimise neural network latency and FPGA resource usage. This contribution will present simulation results of the network performance using a simplified detector model, together with details of the FPGA hardware implementation, optimisation, and performance. The simulated network latency and deadtime are well within the requirements of the future ATLAS muon trigger system, opening the possibility of deploying machine learning methods for data taking by the ATLAS experiment at the High-Luminosity LHC.
Trigger strategies for future high-rate collider experiments invariably envisage implementations of neural networks on FPGAs. For the HL-LHC, as well as for FCC-ee and ILC, triggerless approaches are explored in which event selection will be largely committed to machine learning models directly interfaced with the detector's front-end readout. Even for the huge amounts of data produced at the FCC-hh (O(TB/s) expected), machine learning and artificial intelligence algorithms will play an important role in making intelligent decisions as close to the detector as possible and in providing at least O(10) data-reduction factors after front-end readout.
Fulfilling the requirement of 1$\ \mathrm{\mu s}$ latency while keeping the size of the TDAQ system affordable in terms of procurement and maintenance costs turns out to be extremely difficult; it is unanimously considered one of the most difficult challenges for the next generation of collider detectors.
In this respect, resource optimization is an indispensable part of the design of TDAQ algorithms: models must be suitably compressed, quantized, and parallelized before implementation on ASICs and FPGAs. Reshaping and compression are the first and most critical steps towards neural network optimization. The strategy is often based on sampling the hyperparameter space with an iterative procedure or on a grid search; in both cases, it is not unlikely to end up with sub-optimal implementations.
Is it possible to optimally tune neural networks under constraints of latency and size?
Is iterative pruning the only option? Does there exist a mathematically robust method to choose the best network architecture among all the (infinite) ones compatible with the FPGA resources available?
In the last two years, the authors have developed and tested a novel strategy to prune deep neural networks. The method is based on overlaying a shadow network on the one to be optimized. During training, the combined optimization of the shadow and standard networks highlights the optimal network layout for the specific task, as expressed by the loss function and the available data. This system proves to be effective for real-time inference applications for trigger purposes.
With this approach, significant input features are selected while irrelevant nodes are pruned, reducing the overall network's size to dimensions ultimately decided by the designer. The pruning process is controlled down to a mean error of $\pm1$ on the number of active nodes, and it can be applied to the entire network or only to a portion of it. The tool is straightforward to integrate into already-developed deep fully connected neural network classifiers, allowing one to reduce the amount of input data and the network size without significant performance losses.
Here we present the tool for the first time. To show its potential for trigger purposes, we constructed a simulated dataset, using Pythia and Delphes to emulate high-level trigger observables. The simulation was used to benchmark the pruner's performance on a DNN tagger for the identification of jets containing $b$ quarks arising from Higgs boson decays. As an example of unconstrained application, we will show how to easily design the best-performing fully connected neural network tagger. Concerning constrained applications, we will show how we obtained equal-performance pruned networks using down to 50$\%$ of the available nodes, and achieved performance gains of up to 25$\%$ with models up to 30$\%$ lighter.
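The shadow-network pruner described above is specific to these authors; as a point of reference, a standard magnitude-pruning baseline (a different and much simpler technique) takes only a few lines in PyTorch:

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Hypothetical fully connected tagger of the kind discussed in the text
    tagger = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))

    # Remove the 50% smallest-magnitude weights in each Linear layer
    for module in tagger.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")  # bake the sparsity into the weights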
The CMS collaboration, one of the largest collaborations in high-energy physics, formed a Diversity Office (DO) under a mandate from its collaboration board in 2017. We present here the efforts of the CMS DO in fulfilling its mandate to improve diversity and inclusion (D&I) within the CMS Collaboration and foster an environment where all CMS members can thrive. These efforts include implementing a code of conduct, raising awareness about D&I matters within the collaboration, and facilitating outreach and communication outside of the collaboration about D&I.
How can gender equity be fostered in academia and in research? Which gender equity practices could counter the many gender inequality practices? Many measures focus on women, trying to increase the number of women at all career levels. In this framework, known as "fix the women", the measures work from the equal-opportunities side and help women adjust to the male world. Among these practices, mentoring programs are quite widespread as a way of enlarging women's ambitions and making them visible for career progression. These programs meet the organization's needs without disrupting the gendered status quo: the masculine model of the ideal academic remains unquestioned. More women enter the institutions, some reaching top positions, but only when they conform to the existing image of the ideal scientist, and this is especially true for the cultural model of physics. Even if important, these measures cannot be implemented alone. Gender-transformative mentoring programs work on both mentees and mentors, with the idea of raising awareness, especially among mentors, of the persisting gendered dimension of academia and research. These programs work along two lines: empowering the individual and, at the same time, generating a transformative process inside the institutions. In this process the role of mentors is crucial.
In 2018 we decided to start an INFN gender mentoring program with the intention of initiating a transformative process within the organization, starting from the younger generations (mentees) and their mentors. The program, inserted in the national INFN training plan for young researchers, fellows, and senior researchers, has been the first gender mentoring program inside an Italian research institute. A transformative program requires tailored training, especially for mentors, not only on the significance of gender issues but also to help mentors develop a broader understanding of what mentoring is all about. For the second edition we included some men in both cohorts, because whatever structural change we speak of cannot fail to include the male component, both among the mentees and among the mentors. Each program lasted roughly one year, with a fixed number of one-to-one meetings, several focus groups, and training sessions. In order to foster institutional change and better exploit the mentoring potential, during the 2020-2021 edition we worked, in a participatory approach, with mentees and mentors to bring concrete proposals to the management table for counteracting the multitude of gender inequality practices.
The mentoring model implemented inside INFN, including some of its tools, was designed by researchers of the University of Naples "Federico II", following an evaluation study conducted during the mentoring project inside the European GENOVATE project. Together with us, the coordination group, the program has been adapted to our institute, considering INFN's specific needs and organization.
Positive aspects and difficulties of the program will be discussed.
Panel discussion with all the speakers.
We review recent CMS results on collective flow in heavy ion collisions.
This talk presents ATLAS measurements of collective flow phenomena in a variety of collision systems, including pp collisions at 13 TeV, Xe+Xe collisions at 5.44 TeV, and Pb+Pb collisions at 5.02 TeV. These include measurements of $v_n$-$[p_T]$ correlations in Xe+Xe and Pb+Pb, which carry important information about the initial-state geometry of the Quark-Gluon Plasma and can potentially shed light on any quadrupole deformation of the Xe nucleus; measurements of flow decorrelations differential in rapidity, which probe the longitudinal structure of the colliding system; and measurements of the sensitivity of collective behavior in pp collisions to the presence of jets, which seek to distinguish the role that semi-hard processes play in the origin of these phenomena in small systems. These measurements furthermore provide stringent tests of the theoretical understanding of the initial state in heavy-ion collisions.
The study of collective phenomena in ultrarelativistic heavy-ion collisions is nowadays to a great extent built on the flow amplitudes $v_n$ and symmetry planes $\Psi_n$. Both appear as distinct degrees of freedom in the parametrization of the azimuthal distribution of the produced particles, which is used in the study of the quark-gluon plasma (QGP). Investigating the complex interplay of these quantities allows one to further constrain our current knowledge of this exotic state of matter. While analysis techniques for the flow amplitudes $v_n$ have advanced over the past years, observables used for measuring the symmetry planes $\Psi_n$ are often plagued by built-in biases, the most important of which arises from neglecting the correlations between different flow amplitudes, shown by the ALICE Collaboration to exist even between three amplitudes. Recent developments for the measurement of symmetry plane correlations (SPC) take these correlations into account and provide a new and more precise analysis technique, the so-called Gaussian Estimator (GE).
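For reference, the parametrization referred to above is the standard Fourier expansion of the single-particle azimuthal distribution:

$$\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[n\left(\varphi - \Psi_n\right)\right],$$

in which the amplitudes $v_n$ and the planes $\Psi_n$ enter as the two distinct sets of degrees of freedom discussed above.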
In this talk, we highlight the new results for higher-order multiharmonic flow fluctuations obtained with ALICE in heavy-ion collisions. These results show the presence of complex correlations between multiple flow amplitudes of different order, and also emphasize their importance in the measurement of SPC. Taking this into account, the first experimental results of SPC measured with the newly developed GE using Pb-Pb collisions data are presented. All results are compared to theoretical predictions for the initial coordinate space provided by the T$_{\rm R}$ENTo model and for the momentum space obtained with the state-of-the-art model iEBE-VISHNU.
With a unique geometry covering the forward rapidity region, the LHCb detector provides unprecedented kinematic coverage at low Bjorken-$x$ down to $x \sim 10^{-5}$ or lower. The excellent momentum resolution, vertex reconstruction and particle identification allow precision measurements down to very low hadron transverse momentum. In this contribution we present the latest studies of the relatively unknown low-$x$ region using the LHCb detector, including recent measurements of charged and neutral hadron production. Furthermore, LHCb has studied charged hadron correlations in the forward pseudorapidity coverage. This talk will also include details of correlation analyses of flow harmonics in $p$Pb and PbPb collisions, and Bose-Einstein Correlations in $pp$ and $p$Pb collisions.
Femtoscopy is a tool that can be used to measure the space-time dimensions of the particle-emitting source created in heavy-ion collisions using two-particle correlations. In addition to the system size, one can extract the average pair-emission asymmetry between two particles of different masses. In this context, the measurement of femtoscopic correlations between charged pion and kaon pairs for different charge combinations, obtained in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV with ALICE at the LHC, is presented. The spherical harmonics representations of the correlation functions ($C^0_0$ and $\Re C^1_1$) have been studied in different centrality bins. The correlation functions are analysed after a precise treatment of the non-femtoscopic background. The extracted source size ($R$) and pair-emission asymmetry ($\mu$) increase from peripheral to central events. Moreover, it is observed that pions are emitted closer to the centre of the particle-emitting system than kaons, a result associated with the hydrodynamic evolution of the source. The source radii are also found to decrease with increasing average transverse momentum ($k_{\rm T}$) and transverse mass ($m_{\rm T}$) of the pair, which indicates the presence of strong radial flow in the system.
The study of femtoscopic correlations in high-energy collisions is a powerful tool to investigate the space-time structure of the particle-emitting region formed in such collisions, as well as to probe interactions that the involved particles may undergo after being emitted. This talk presents an overview of recent results on two-particle femtoscopic correlation measurements using charged particles and identified hadrons in pp, pPb, and PbPb collisions at LHC energies. In general, the femtoscopic parameters are obtained assuming a Gaussian or an exponential shape for the emitting-source distribution; in some cases, however, the generalized Gaussian, i.e. the symmetric alpha-stable L\'evy distribution, is favored to describe the source. Some of the measurements allow one to extract the parameters of the strong interaction felt by hadrons using their femtoscopic correlations. The studies are performed in a wide range of the pair average transverse momentum (or average transverse mass) and charged-particle multiplicities. In addition, prospects for future physics results using the CMS experiment are also discussed.
I will report on the developments in precision calculations for Higgs physics from its discovery to today, focusing on the latest advancements in the study of its main production channel, gluon fusion.
It has been 10 years since the discovery of the Higgs boson. A small "bump" around 125 GeV in the mass spectrum has now evolved into precision measurements of the properties of the Higgs boson. As the second largest Higgs production channel at the LHC, vector boson fusion has been studied intensively by both the experimental and theory communities. In this talk, I will give a brief overview of the established measurements of Higgs bosons produced via vector boson fusion. I will then turn to the differential signatures of the process and introduce the state-of-the-art precision achieved by theory predictions. An outlook on future progress in precision phenomenology will conclude the talk.
One of the most interesting yet-to-be-answered questions in particle physics is the nature of the Higgs Yukawa couplings and their universality. Key information for our understanding of this question comes from studying the coupling of the Higgs boson to second-generation quarks. Some puzzles in the flavor sector and potential additional sources of CP violation could also have their origins in an extended Higgs sector.
Rare Higgs decay modes to charm or strange quarks are very challenging or nearly impossible to detect with the current experiments at the Large Hadron Collider, where the large multi-jet backgrounds inhibit the study of light-quark couplings with inclusive H->qqbar decays. Future e+e- machines are thus the perfect avenue for this research.
Studies were initiated in the context of Snowmass2021 (https://arxiv.org/abs/2203.07535), with particular emphasis on the Higgs coupling to strange quarks and the related flavour-tagging challenges.
This gave rise to the development of a novel algorithm for tagging jets originating from the hadronisation of strange quarks (strange-tagging) and the first application of such a strange-tagger to a direct Higgs-to-strange (h->ssbar) analysis.
The analysis is performed with the future International Large Detector (ILD) at the International Linear Collider (ILC), but is easily applicable to other Higgs factories. The $P(e^-,e^+) = (-80\%,+30\%)$ polarisation scenario was used for this preliminary result, corresponding to 900 fb$^{-1}$ of the initially proposed 2000 fb$^{-1}$ of data to be collected by ILD during its first 10 years of data taking at $\sqrt{s} = 250$ GeV. The study also includes a preliminary investigation of a Ring Imaging Cherenkov (RICH) system capable of maximising strange-tagging performance in future Higgs factory detectors.
The detailed study of the Higgs boson is one of the main tasks of contemporary particle physics. Gluon fusion, the main production channel of Higgs bosons at the LHC, has been successfully modelled in QCD up to $\text{N}^3\text{LO}$. To fully exploit this unprecedented theoretical effort, sub-leading contributions, such as electroweak corrections, must be investigated. I will present analytic calculations of the gluon- and quark-induced Higgs-plus-jet amplitudes in mixed QCD-electroweak corrections mediated by light quarks, up to order $v \alpha^2 \alpha_S^{3/2}$.
In Gildener-Weinberg multi-Higgs models of electroweak symmetry breaking, the new, Beyond-Standard-Model Higgs bosons are surprisingly light, $< 600$--$700\,{\rm GeV}$, well within reach of the LHC. There is a striking connection between the top quark and Higgs alignment in these models. Were it not for the top quark and its large mass, the couplings of the $125\,{\rm GeV}$ Higgs boson $H$ to gauge bosons and fermions would be indistinguishable from those of the Standard Model Higgs. The top quark's coupling to a single Higgs doublet breaks this perfect alignment at higher orders of the Coleman-Weinberg loop expansion of the effective potential. But the effect is still small, $< {\cal O}(1\%)$, and probably experimentally inaccessible. The experimental consequence is that many popular LHC searches for Beyond-Standard-Model Higgs bosons are fruitless, and they will remain so.
Neutrino-nucleus elastic scattering ($\nu A_{el}$) is an electroweak interaction of the Standard Model of particle physics. We formulate a quantitative and universal parametrization of the quantum-mechanical coherency effects in $\nu A_{el}$ [1], under which the experimentally accessible misalignment phase angle between nonidentical nucleonic scattering centers can be studied. We relate it to the conventional description of nuclear many-body physics through the form factor and a data-driven cross-section reduction fraction [2]. Limits on the latest CsI and LAr data from the COHERENT collaboration, along with prospects of observing the $\nu A_{el}$ process at the Kuo-Sheng Reactor Neutrino Laboratory with germanium detectors with $O(100~{\rm eV})$ thresholds, will also be presented.
[1] "Coherency in neutrino-nucleus elastic scattering", S. Kerman et al. (TEXONO Collaboration), Phys. Rev. D 93, 113006 (2016).
[2] "Studies of quantum-mechanical coherency effects in neutrino-nucleus elastic scattering", V. Sharma et al. (TEXONO Collaboration), Phys. Rev. D 103, 092002 (2021).
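As background to this and the neighbouring CE$\nu$NS contributions, the Standard Model cross section for coherent elastic neutrino-nucleus scattering has, schematically (keeping only the leading recoil term), the well-known form

$$\frac{d\sigma}{dT} \simeq \frac{G_F^2\, M}{4\pi}\, Q_W^2 \left(1 - \frac{M T}{2 E_\nu^2}\right) F^2(q^2), \qquad Q_W = N - \left(1 - 4\sin^2\theta_W\right) Z,$$

where $T$ is the nuclear recoil energy, $M$ and $(N, Z)$ are the nuclear mass and neutron/proton numbers, and $F(q^2)$ is the nuclear form factor that encodes the coherency effects discussed above.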
The CONUS experiment (COherent elastic NeUtrino nucleus Scattering) aims to detect coherent elastic neutrino-nucleus scattering (CE$\nu$NS) of reactor antineutrinos on germanium nuclei in the fully coherent regime. The CONUS experiment, operational since April 2018, is located at a distance of 17 m from the 3.9 GW$_{\rm th}$ core of the Brokdorf nuclear power plant (Germany). The possible CE$\nu$NS signature is measured by four 1 kg point-contact high-purity germanium (HPGe) detectors, which provide a sub-keV energy threshold with background rates on the order of 10 events per kg, day, and keV.
The analysis of the first CONUS data set allows one to establish the current best limit on CE$\nu$NS from a nuclear reactor with a germanium target. Moreover, competitive limits on neutrino physics beyond the standard model can be set, such as on non-standard neutrino interactions or on neutrino electromagnetic properties. These results, together with the upgrades and the analysis status of the current run, will be presented in this talk.
Coherent elastic neutrino-nucleus scattering (CE$\nu$NS) is a well-predicted Standard Model process only recently observed for the first time. Its precise study could reveal non-standard neutrino properties and open a window to search for physics beyond the Standard Model.
NUCLEUS is a CE$\nu$NS experiment conceived for the detection of neutrinos from nuclear reactors with unprecedented precision at recoil energies below 100 eV. Thanks to the large cross-section of CE$\nu$NS, an extremely sensitive cryogenic target of 10 g of CaWO$_4$ and Al$_2$O$_3$ crystals is sufficient to provide a detectable neutrino interaction rate.
The NUCLEUS experiment will be installed between the two 4.25 GW reactor cores of the Chooz-B nuclear power plant in France, which provide an antineutrino flux of $1.7 \times 10^{12}\,\bar{\nu}/({\rm s\,cm^2})$. At present, the experiment is under construction. The commissioning of the full apparatus is scheduled for 2022 at the Underground Laboratory of the Technical University Munich, in preparation for the move to the reactor site.
The scintillating bubble chamber is a new technology under development, ideal both for GeV-mass WIMP searches and for coherent elastic neutrino-nucleus scattering (CE$\nu$NS) detection at reactor sites. A 10-kg bubble chamber using liquid argon, with the potential to reach and maintain sub-keV energy thresholds, is currently under construction. This detector will combine the event-by-event energy resolution of a liquid-noble scintillation detector with the world-leading electron-recoil discrimination capability of the bubble chamber. The CE$\nu$NS physics program of this detector will be presented in this talk, including the sensitivity to the weak mixing angle, the neutrino magnetic moment, and a light Z' gauge boson mediator, in addition to sensitivity to other New Physics scenarios such as light scalar mediators, sterile neutrino oscillations, unitarity violation, and non-standard interactions.
We examine the latest measurements from the COHERENT experiment within an EFT framework. To do so, we put forward a formalism which, for the first time, correctly models within a QFT characterization the interplay between production and detection. After discussing all the details involved, we perform a complete phenomenological analysis of CE$\nu$NS data measured on argon and cesium-iodide nuclei, considering as observables not only the total number of events but also the recoil energy distributions.
In the presence of transition magnetic moments between active and sterile neutrinos, Coherent Elastic Neutrino Nucleus Scattering (CE$\nu$NS) experiments can provide stringent constraints on the neutrino magnetic moment by searching for Primakoff upscattering. I will introduce a new smoking gun signal, a radiative upscattering process with a photon emitted in the final state, which will be able to probe neutrino transition magnetic moments beyond existing limits. Most importantly, I will highlight that such a new experimental mode has the potential to distinguish between the Majorana and Dirac nature of light active neutrinos.
We discuss a new experiment based on the proposal [1] to observe for the first time coherent elastic neutrino-atom scattering (CE$\nu$AS), using electron antineutrinos from tritium decay and a liquid He-4 target, and also to search for neutrino electromagnetic properties [2,3], including the neutrino magnetic moment. The experiment is under preparation within the research program of the National Centre for Physics and Mathematics (NCPM) and the Branch of Lomonosov Moscow State University in Sarov (Russia). In CE$\nu$AS the neutrino scatters off the whole atom, and the atomic electrons tend to screen the weak charge of the atomic nucleus as seen by the neutrino probe. With tritium neutrinos, the interference between the He-4 nucleus and the electron cloud of the He atom produces a sharp dip in the recoil spectrum at atomic recoil energies of about 9 meV, sizably reducing the number of expected events with respect to the coherent elastic neutrino-nucleus scattering case. A low-background neutrino laboratory is being created at the NCPM with a record high-intensity tritium source of 10 MCi (1 kg) [4-6]. With the estimated sensitivity of this apparatus, it is possible to detect CE$\nu$AS for the first time and to observe, or set an upper limit on, the electron neutrino magnetic moment $\mu_\nu$ at the level of a few $\times 10^{-13}\,\mu_B$ at 90% C.L., about two orders of magnitude below the current experimental limits. If necessary, at the next stage of the proposed experiment, the intensity of the tritium source can be increased up to 40 MCi (4 kg).
References
[1] M. Cadeddu, F. Dordei, C. Giunti, K. Kouzakov, E. Picciau, and A. Studenikin, Phys. Rev. D 100, 073014 (2019) [arXiv:1907.03302 [hep-ph]].
[2] C. Giunti and A. Studenikin, Rev. Mod. Phys. 87, 531 (2015) [arXiv:1403.6344 [hep-ph]].
[3] K. Kouzakov and A. Studenikin, Phys. Rev. D 96, 099904 (2017) [arXiv:1703.00401 [hep-ph]].
[4] V.N. Trofimov, B.S. Neganov, and A.A. Yukhimchuk, Phys. At. Nucl. 61, 1271 (1998).
[5] B.S. Neganov et al., Phys. At. Nucl. 64, 1948 (2001).
[6] V.P. Martemyanov et al., Fusion Sci. Technol. 67, 535 (2015).
Reliable modeling of quasielastic (QE) lepton scattering on nuclei is of great interest to neutrino oscillation experiments, especially at low values of the 3-momentum transfer $\bf \vec q$. We report on a phenomenological analysis of all available electron scattering data on carbon within the framework of the superscaling model (including Pauli blocking). In addition to the expected enhancement of the transverse QE response function ($R_T^{QE}$), we find that at low values of $\bf \vec q$ there is "Extra Suppression" of the QE longitudinal response function ($R_L^{QE}$) beyond the expected suppression from Pauli blocking. The total (combined Pauli plus Extra) suppression of $R_L^{QE}$ is larger than the minimum suppression predicted by the Coulomb Sum Rule. We extract $\bf |\vec q|$-dependent parameterizations that can be used to determine the $R_L^{QE}$ "Extra Suppression" factor for any nucleon momentum distribution. We also provide parameterizations of the form factors for the excitation of nuclear states (which are also needed for modeling electron scattering cross sections at small $\bf q$).
The physics reach of the LHCb detector can be extended by reconstructing particles with a long lifetime that decay downstream of the dipole magnet, using only hits in the furthest tracker from the interaction point. This allows for electromagnetic dipole moment measurements, and increases the reach of beyond the Standard Model long-lived particle searches. However, using tracks to reconstruct particles decaying in this region is challenging, particularly due to the increased combinatorics and reduced momentum and vertex resolutions, which is why it has not been done until now. New approaches have been developed to meet the challenges and obtain worthwhile physics from these previously unused tracks. This talk presents the feasibility demonstration studies performed using Run 2 data, as well as new developments that expand these techniques for further gains in Run 3.
Measurements of the jet energy scale (JES) and resolution (JER) are presented, based on the legacy reconstruction of 13 TeV proton-proton collision data collected by CMS in 2016-2018.
Precision measurement of the JES is of the utmost importance for the vast majority of physics measurements and searches at CMS. The high number of additional proton-proton interactions (event pileup), a harsh radiation environment, and time-dependent variations in detector components all make precision JES measurement a challenging task. We present in-situ derivations of the JES and JER based on collider data, as well as on simulated samples, using various advanced techniques.
Jets and missing transverse momentum (MET), the latter used to infer the presence of high transverse momentum neutrinos or other weakly interacting neutral particles, are two of the most important quantities to reconstruct at a hadron collider. They are both used by many searches and measurements in ATLAS. New techniques combining calorimeter and tracker measurements, called Particle Flow and Unified Flow, have significantly improved the reconstruction of both transverse momentum and jet substructure observables. The procedure of reconstructing and calibrating ATLAS anti-kt R=0.4 and R=1.0 jets using in situ techniques is presented. The reconstruction and performance in data and simulation of the MET obtained with different classes of jets and different pile-up suppression schemes, including novel machine learning techniques, are also presented.
Lepton reconstruction performance plays a crucial role in the precision and sensitivity of data analyses at the ATLAS experiment at the Large Hadron Collider (LHC). The 139/fb of proton-proton collision data collected during LHC Run-2 poses both a challenge and an opportunity for detector performance. Using di-electron and di-muon resonances, we are able to calibrate the detector response for electrons and muons to sub-per-mil accuracy. This talk will present recently released results that significantly improve the measurement of lepton reconstruction, identification and calibration performance with innovative techniques. New analysis techniques are exploited, involving multivariate analyses for rejecting background hadrons while retaining prompt leptons from the hard interaction, as well as innovative in-situ corrections to data that reduce biases in muon momenta induced by residual detector displacements. These techniques are fundamental for improving the reach of measurements and searches involving leptons, such as Higgs decays to di-leptons and ZZ, or high-precision measurements of fundamental constants of the SM such as the Higgs and W masses or the weak mixing angle.
The Liquid Argon Calorimeters are employed by ATLAS for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, and for hadronic and forward calorimetry in the region from |η| = 1.5 to |η| = 4.9. They also provide inputs to the first level of the ATLAS trigger. After a successful period of data taking during the LHC Run-2 between 2015 and 2018, the ATLAS detector entered a long period of shutdown. In 2022 the LHC will restart, and the Run-3 period should see an increase of luminosity and pile-up of up to 80 interactions per bunch crossing.
To cope with these harsher conditions, a new trigger readout path has been installed during the long shutdown. This new path should significantly improve the trigger performance for electromagnetic objects. This will be achieved by increasing the granularity of the objects available at trigger level by up to a factor of ten.
The installation of this new trigger readout chain also required an update of the legacy system. More than 1500 boards of the precision readout have been extracted from the ATLAS pit, refurbished and re-installed. The legacy analog trigger readout, which will remain during the LHC Run-3 as a backup of the new digital trigger system, has also been updated.
For the new system, 124 new on-detector boards have been added. These boards, which operate in a radiation environment, digitize the calorimeter trigger signals at 40 MHz. The digital signals are sent to the off-detector system and processed online to provide the measured energy value for each readout unit. In total, up to 31 Tb/s are analyzed by the processing system and more than 62 Tb/s are generated for downstream reconstruction. To minimize the trigger latency, the processing system had to be installed underground. The limited available space imposed a very compact hardware structure. To achieve a compact system, large FPGAs with high throughput have been mounted on ATCA mezzanine cards. In total, no more than 3 ATCA shelves are used to process the signals from approximately 34000 channels. Since modern technologies have been adopted compared to the previous system, all the monitoring and control infrastructure is being adapted and commissioned as well.
This contribution will present the challenges of the installation, the commissioning and the milestones still to be completed towards the full operation of both the legacy and the new readout paths for the LHC Run-3.
The innermost tracking system of the CMS experiment, called the tracker, consists of two tracking devices: the Silicon Pixel and Silicon Strip detectors. The tracker was specifically designed to determine the trajectories of charged particles (tracks) very accurately. This is achieved by ensuring an intrinsic resolution of 10 to 30 µm on the position measurement of the electrical signals registered in the detector modules as the particles pass through the tracker layers. The high-quality track reconstruction, in turn, paves the way for precise primary and secondary vertex reconstruction.
The detector closest to the interaction point, the Silicon Pixel detector, deals with the highest intensity of particle collisions and therefore suffers most extensively from radiation damage. To tackle these effects, the pixel detector was extracted from the CMS experimental cavern, underwent extensive repairs, was fitted with a new innermost layer, and was reinstalled during the LHC Long Shutdown 2. After the reinstallation, the accuracy of the knowledge of the geometrical positions of the pixel modules needed to be improved to reach the precision of the intrinsic resolution of the sensors stated above. This, together with the movements of the structures of the Silicon Strip detector caused by the maintenance work during the shutdown, made it necessary to correct the position, orientation, and curvature of the tracker modules in a process known as tracker alignment.
The strategies for and the performance of the CMS tracker alignment during the 2021-2022 LHC commissioning preceding the Run 3 data-taking period are described. The results of the very first tracker alignment after the pixel reinstallation, performed with cosmic ray muons recorded at 0 T magnetic field, are presented. The performance of the first alignment of the commissioning period with collision data, collected during the pilot beam run at a center-of-mass energy of 900 GeV, is also presented. Finally, the tracker alignment effort during the final countdown to LHC Run 3 is discussed.
We develop a modified power-counting within the heavy quark effective theory (HQET) that results in a highly constrained set of second-order power corrections in the heavy quark expansion, compared to the standard approach. We implement this modified expansion to determine all $\bar{B} \to D^{(*)}$ form factors, both within and beyond the Standard Model, to $\mathcal{O}(\alpha_s, \alpha_s/m_{c,b}, 1/m_{c,b}^2)$. Using measured $\bar{B} \to D^{(*)} \ell \bar\nu$ differential branching fractions for light leptons ($\ell = e$, $\mu$), we constrain not only leading and subleading Isgur-Wise functions, but also the $1/m_{c,b}^2$ corrections from subsubleading terms. We provide updated precision predictions for $\bar{B} \to D^{(*)} \tau \bar\nu$ decay rates, lepton universality ratios, and the CKM matrix element $|V_{cb}|$.
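Schematically (in a notation that may differ from the talk's conventions), each $\bar{B} \to D^{(*)}$ form factor is expanded about the heavy-quark limit as
$$ h(w) = \xi(w)\Big[\,1 + \hat\alpha_s\,\delta_{\alpha_s}(w) + \varepsilon_c\,\delta_c(w) + \varepsilon_b\,\delta_b(w) + \varepsilon_c^2\,\delta_{c^2}(w) + \ldots\Big], \qquad \varepsilon_{c,b} \equiv \frac{\bar\Lambda}{2m_{c,b}}, $$
where $\xi(w)$ is the leading Isgur-Wise function and the $\delta$ terms contain the subleading and subsubleading Isgur-Wise functions; the modified power-counting constrains the $\varepsilon_c^2$ terms.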
Employing the full $BABAR$ dataset, we extract form factors for $\overline{B}\to D^{(\ast)}\ell^- \overline{\nu}_\ell$ using the hadronic tagging method. For $\overline{B}\to D\ell^- \overline{\nu}_\ell$, a two-dimensional angular analysis is performed in both $q^2$ and the lepton helicity angle. The two $B\to D$ form factors are determined using a joint fit with available lattice data. This enables checking flavor SU(3) relations through comparisons with HPQCD $B_s\to D_s$ form factors. An updated value of $|V_{cb}|$ from $B\to D$ is also extracted. The $B\to D^\ast$ form-factor fits in the BABAR-19 publication [PRL 123 (2019) 9, 091801] are updated using newly available $w>1$ lattice data (MILC/FNAL, HPQCD, JLQCD). The $BABAR$+lattice results are compared with the $BABAR$-only fits in BABAR-19. Finally, a combined $B\to D^{(\ast)}$ fit using the full $BABAR$ data and a HQET parametrization with higher-order corrections in $1/m_{b,c}$ is described.
The rate of semitauonic $B$ decays has been consistently above theory expectations since these decays were first measured. Recently, significant differences between the forward-backward asymmetries in $B\rightarrow D^{*}e\nu$ and $B\rightarrow D^{*}\mu\nu$ were also reported. Belle II data is well suited to probe such anomalies. The low-background collision environment, along with the possibility of partially or fully reconstructing one of the two $B$ mesons in the event, offers high-precision measurements of semileptonic $B$ decays. This talk presents recent Belle II results on lepton flavor universality tests based on inclusive decays.
We demonstrate a new event generator tool based on EvtGen that allows us to simulate new physics (NP) signatures in ${\bar B}\to D^{*+}\ell^- {\bar\nu}_\ell$ decays. Recent experimental results from Belle, BaBar and LHCb have all pointed towards new physics in the weak semileptonic $b \to c$ transitions, urging the development of advanced analysis techniques, which this simulator enables. We have further used our Monte Carlo (MC) tool to study in detail the semileptonic decay with a muon or an electron in the final state. We have examined signatures of new physics in the muon mode which are consistent with current data. Angular asymmetries such as $A_{FB}$, $S_3$, $S_5$ and $S_7$, which can be extracted from the fully reconstructed angular distribution, are found to be highly sensitive to the presence of NP. In order to reduce the dependence on form factor uncertainties, we introduce $\Delta$-observables for the angular asymmetries, taken as the difference between the observables for the muon and electron final states. Throughout our analysis we assume that the electron mode is well described by the SM. Apart from analyzing the $\Delta$-observables for three distinct NP scenarios, we also demonstrate the prospects of probing such NP couplings with the future 50 ab$^{-1}$ of Belle II data.
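Explicitly, the $\Delta$-observables referred to above are differences of the same angular asymmetry evaluated for the two lepton species, e.g.
$$ \Delta A_{FB} = A_{FB}^{\mu} - A_{FB}^{e}, \qquad \Delta S_i = S_i^{\mu} - S_i^{e} \quad (i = 3, 5, 7), $$
so that common form-factor uncertainties largely cancel in the difference.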
We discuss the effects of lepton flavor universality violation in $b\to c \tau \nu$ decays. Besides $R(D^{(\ast )})$ and $R(J/\psi)$, we also discuss $R(\Lambda_c)$, as well as the observables that can be extracted from the angular distributions of the exclusive semileptonic decays of mesons and baryons, the measurement of which can help discriminate among various scenarios of New Physics. We pay special attention to hadronic uncertainties.
I discuss exclusive $B_c$ decays. First I consider the case of semileptonic modes $B_c^+\to B_a\bar{l}\nu_l$ and $B_c^+\to B^*_a(\to B_a\gamma)\bar{l}\nu_l$, with $a=s,d$ and $l = e,\mu$, in the Standard Model (SM) and in its extension based on the low-energy Hamiltonian comprising the full set of dimension-6 semileptonic $c\to s,d$ operators with left-handed neutrinos. Heavy quark spin symmetry has been used to relate the relevant hadronic matrix elements and to exploit lattice QCD results on $B_c$ form factors. Optimised observables are selected, and the pattern of their correlations is studied to identify the effects of the various operators in the extended low-energy Hamiltonian.
Furthermore, I present the analysis for $B_c^+\to B^{(*)+}\nu\bar{\nu}$ decays, for which branching fractions of at most $\mathcal{O}(10^{-16})$ are predicted in the SM. New physics effects are discussed also in this case.
Perspectives on inclusive semileptonic $\Lambda_b^0$ decays will be briefly discussed.
We present the first determination of $|V_{cb}|$ from $q^2$-moments of inclusive $B \to X_c \ell \bar \nu_\ell$ decays, with $q^2$ denoting the dilepton invariant mass. The $q^2$ moments and the total rate are reparametrization-invariant quantities and depend on a reduced set of non-perturbative parameters. This reduced set opens a way to extract these parameters up to $1/m_b^4$ purely from data, thereby reducing the uncertainty on $|V_{cb}|$. The first measurements of $q^2$ moments have recently been reported by the Belle experiment, and Belle II is also capable of carrying out similar measurements. These provide the necessary experimental input, and in this contribution we present a first determination of $|V_{cb}|$ using this method. We also explore novel approaches to incorporate theory correlations using theory nuisance parameters, which allow the fit to probe many different assumptions about the correlation structure between moments of different orders and with different selection cuts.
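For reference, the moments used in this method are normalized spectral moments with a lower cut on $q^2$,
$$ \langle (q^2)^n \rangle_{q^2 \ge q^2_{\rm cut}} = \frac{\displaystyle\int_{q^2_{\rm cut}} \mathrm{d}q^2\, (q^2)^n\, \frac{\mathrm{d}\Gamma}{\mathrm{d}q^2}}{\displaystyle\int_{q^2_{\rm cut}} \mathrm{d}q^2\, \frac{\mathrm{d}\Gamma}{\mathrm{d}q^2}}, $$
measured for several orders $n$ and several choices of $q^2_{\rm cut}$.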
Being motivated mainly by LHC physics, currently used Monte Carlo Event Generators (MCEGs) lack the quark spin degree of freedom in their hadronization models. In recent years, however, the importance of quark-spin-related effects in hadronization, such as the Collins effect, has been brought to light by vigorous theoretical and experimental activity. Remarkably, global analyses of Collins asymmetries in SIDIS measured by the HERMES, COMPASS and JLab experiments, and of the corresponding asymmetries measured in $e^+e^-$ annihilation to hadrons by the BELLE, BABAR and BESIII experiments, have allowed for the extraction of both the transversity PDF, describing the transverse polarization of quarks in a transversely polarized nucleon, and the Collins fragmentation function, which describes the fragmentation of a transversely polarized quark into an unpolarized hadron.
To guide the interpretation of SIDIS and $e^+e^-$ data as well as to make predictions for experiments at future facilities such as the EIC, a MCEG capable of reproducing quark spin effects in hadronization is necessary. To achieve this goal, we have started a systematic implementation of spin effects in the hadronization part of the Pythia 8 event generator for the polarized SIDIS process via the external package StringSpinner, which is publicly available. Spin effects are enabled for pseudoscalar meson production by using the string+${}^3P_0$ model of polarized quark fragmentation and parametrizations of the transversity PDFs.
This talk is dedicated to a recent major development of StringSpinner which allows for the introduction of vector meson production and decay in the polarized Pythia 8 string fragmentation. After being validated, the package is used to simulate the Collins and dihadron asymmetries in SIDIS and a comparison with currently available data is shown.
General purpose Monte Carlo event generators are a vital component of the feedback loop between experimental measurement, where they are used to model detector effects and correct for them, and theory, where comparisons to data can inform further improvements in the models. However, most tuning exercises are performed on LHC or Tevatron data, with the most recent RHIC tune being the single-parameter modification of the PYTHIA-6 Perugia 2012 tune typically used in STAR. In this talk, we show a new underlying-event tune of PYTHIA-8, the "Detroit" tune, suitable for pp collisions at RHIC and LHC energies, and compare it to a variety of measurements at mid-rapidity at RHIC, as well as at the LHC and the Tevatron. We find, in general, that the Detroit tune offers an improvement over the default PYTHIA-8 Monash tune at RHIC energies, and outperforms Monash at large transverse momenta at LHC energies. At forward rapidities, neither tune adequately describes pion cross sections from BRAHMS and STAR. This opens future opportunities to develop a refined parameter set that can describe both regions simultaneously, applicable to STAR data with the forward upgrade installed as of 2022, and eventually to the Electron-Ion Collider.
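For orientation, below is a minimal sketch of how such a tune would be applied through the Pythia 8 Python interface; Tune:pp = 14 selects the Monash baseline, while the MPI parameter values shown for the tune variation are placeholders for illustration, not the published Detroit numbers.

import pythia8  # requires a Pythia 8 build with the Python interface enabled

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 200.")              # RHIC-like pp energy
pythia.readString("SoftQCD:nonDiffractive = on")   # minimum-bias-style events
pythia.readString("Tune:pp = 14")                  # Monash 2013 baseline tune
# An underlying-event tune such as Detroit re-fits the MPI parameters;
# the values below are placeholders for illustration only.
pythia.readString("MultipartonInteractions:pT0Ref = 1.40")
pythia.readString("MultipartonInteractions:ecmRef = 7000.")
pythia.init()
for _ in range(100):
    pythia.next()
pythia.stat()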
This talk presents recent ATLAS measurements of distributions sensitive to the underlying event, the hadronic activity observed in association with the hard scattering in the event. The rates and total transverse momenta of kaons and Lambda baryons, and their ratios, were measured as a function of the leading track-jet momentum and are compared to MC predictions, which in general fail to describe the data. In addition, a new measurement of charged-particle distributions as a function of the Upsilon momentum and for different Upsilon states is presented using the full Run-2 ATLAS dataset at a center-of-mass energy of 13 TeV. The measurement benefits from a heavy-ion-style approach to remove combinatorial and pileup backgrounds, leading to increased sensitivity. Technical challenges of the measurement will be shown, as well as the results and their physics implications.
Measurements of the production of hadrons containing beauty quarks in pp and p-Pb collisions provide an important test of quantum chromodynamics calculations. They also set the reference for the respective measurements in heavy-ion collisions, where the properties of the quark-gluon plasma are investigated. The excellent particle identification, track and decay-vertex reconstruction capabilities of the ALICE experiment, together with machine-learning techniques for multi-class classification, are exploited to separate non-prompt D mesons and non-prompt $\Lambda_{\rm c}$ baryons (i.e. those produced in beauty-hadron decays) from prompt D mesons and $\Lambda_{\rm c}$ baryons (produced in charm-quark fragmentation). These measurements allow investigating the production and hadronization of beauty quarks in pp and p-Pb collisions. Machine-learning techniques also permit, for the first time, the measurement of the non-prompt $\rm{D^{*}}$ polarization, which provides a baseline for future studies in Pb-Pb collisions, and the first analysis of the non-prompt D-meson fractions as a function of multiplicity in pp collisions at $\sqrt{s}$=13 TeV.
Beauty production is also investigated via measurements of b-tagged jets in pp and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. The final results on the b-jet production cross section, the nuclear modification factor $R_{\rm pPb}$, and the fraction of b jets among inclusive jets down to $p_{\rm T}$ = 10 GeV/$c$, lower than in previous measurements of b jets at the LHC, are discussed. The final measurements of the $\rm{b \bar{b}}$ production cross section per unit of rapidity at midrapidity, compared to FONLL predictions and NNLO calculations, are also presented.
We report on calculations of differential cross sections for $c \bar c$- and $b \bar b$-dijet production in $pp$-scattering at $\sqrt{s} = 13$ TeV in the $k_T$-factorization and hybrid-factorization approaches with different unintegrated parton distribution functions (uPDFs). We present distributions in transverse momentum and pseudorapidity of the leading jet, the rapidity difference between the jets, and the dijet invariant mass. Our results are compared to recent LHCb data on forward production of heavy-flavour dijets, measured for the first time individually for both charm and bottom flavours. We find that the agreement between the predictions and the data within the full $k_T$-factorization is strongly related to the modelling of the large-$x$ behaviour of the gluon uPDFs, which is usually not well constrained. The problem may be avoided following the hybrid-factorization approach. Then a good description of the measured distributions is obtained with the Parton-Branching, the Kimber-Martin-Ryskin, the Kutak-Sapeta and the Jung set A0 CCFM gluon uPDFs. We also calculate differential distributions for the ratio of the $c \bar c$ and $b \bar b$ cross sections. In all cases we obtain a ratio close to 1, which is caused by the minimal condition on jet transverse momenta ($p_{T}^{\mathrm{jet}} > 20$ GeV) imposed in the experiment, which makes the heavy-quark mass almost negligible. The LHCb experimental ratio seems a bit larger. We discuss the effect of gluon radiative corrections off $c$- or $b$-quarks, which is potentially important for the ratio; the effect is found to be rather small. More details can be found in our original paper [1].
[1] R. Maciuła, R. Pasechnik and A. Szczurek, "Production of forward heavy-flavour dijets at the LHCb within $k_{T}$-factorization approach", arXiv:2202.07585 [hep-ph].
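The near-unity $c\bar c / b\bar b$ ratio quoted above can be made plausible with a one-line estimate: quark-mass effects enter the cross section at the level of $m_Q^2/(p_T^{\rm jet})^2$, so for the $p_T^{\rm jet} > 20$ GeV selection
$$ \frac{m_c^2}{(p_T^{\rm jet})^2} \approx \left(\frac{1.5}{20}\right)^2 \approx 0.006, \qquad \frac{m_b^2}{(p_T^{\rm jet})^2} \approx \left(\frac{4.75}{20}\right)^2 \approx 0.056, $$
both small corrections, so the two cross sections are driven by the (identical) massless dynamics.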
Recent results on charmonium and B meson production and decays from the proton-proton collision data taken by the ATLAS experiment will be presented. The measurement of $J/\psi$ and $\psi(2S)$ differential cross sections will be reported as measured on the whole Run 2 dataset. The measurement of the differential cross sections of $B^+$ production at 13 TeV and their ratios to those measured at 7 TeV will be discussed. The measurement of the differential ratios of $B_c^+$ and $B^+$ production cross sections at 8 TeV will be shown. New results on the $B_c$ decays to $J/\psi\ D_s^{(*)}$ final states obtained with the Run 2 data at 13 TeV will also be reported.
Our understanding of hadronic collisions has been challenged by the intriguing observation of collective phenomena in events with high charged-particle multiplicity density in small systems. Such high multiplicities are expected in events with multiple parton-parton interactions (MPI). At the LHC, MPIs affect the production of heavy quarks (charm and beauty), and the large data samples available allow for the study of quarkonium production in association with other particles, as well as of its relation to the underlying event. In proton-proton (pp) collisions, the study of pair production of quarkonia in the same event, besides helping to disentangle different production mechanisms, is sensitive to double-parton scattering. Multiplicity-dependent studies of quarkonia are fundamental for investigating the correlations between the soft and hard components of high-multiplicity events in small collision systems. In particular, excited quarkonium states, characterized by lower binding energies than the corresponding ground states, are more sensitive to any possible dissociation mechanism at play at high multiplicities.
In this contribution, new multiplicity-dependent results on excited quarkonium states, such as $\psi$(2S), $\Upsilon$(2S) and $\Upsilon$(3S), reconstructed in pp and p-Pb collisions at forward rapidity, along with the corresponding excited-to-ground-state ratios, will be presented. New measurements for J/$\psi$ will also be discussed. These include the first measurement of J/$\psi$ pair production in pp collisions at $\sqrt{s}$ = 13 TeV, as well as the latest results on J/$\psi$ production as a function of multiplicity at forward rapidity in pp collisions at $\sqrt{s}$ = 5.02 and 13 TeV. The status of similar multiplicity-dependent measurements at midrapidity in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV will be shown. The comparison with available models will also be discussed.
We study the different mechanisms contributing to the photoproduction of quarkonium pairs in NRQCD at the EIC, namely via unresolved and resolved photons. In the latter case, we study the relevance of double parton scattering along the lines of our recent study of 4-jet photoproduction [1]. As in the hadroproduction case [2-7], quarkonium-pair photoproduction probes, in different kinematical domains, very different and interesting phenomena which will become accessible at the future US EIC.
[1] F. A. Ceccopieri and M. Rinaldi, Phys. Rev. D 105 (2022) no.1, L011501 .
[2] J. P. Lansberg, H. S. Shao, N. Yamanaka, Y. J. Zhang and C. Noûs, Phys. Lett. B 807 (2020), 135559.
[3] J. P. Lansberg, Phys. Rept. 889 (2020), 1-106.
[4] J. P. Lansberg, H. S. Shao, N. Yamanaka and Y. J. Zhang, Eur. Phys. J. C 79 (2019) no.12, 1006.
[5] H. S. Shao and Y. J. Zhang, Phys. Rev. Lett. 117 (2016) no.6, 062001.
[6] J. P. Lansberg and H. S. Shao, Nucl. Phys. B 900 (2015), 273-294.
[7] J. P. Lansberg and H. S. Shao, Phys. Lett. B 751 (2015), 479-486.
The Drell-Yan process offers an interesting opportunity to test the Standard Model (SM) and possibly reveal New Physics beyond it.
Indeed, dilepton production at high invariant masses is sensitive to beyond SM effects, while also being extremely well controlled both theoretically and experimentally and producing sufficient events for in-depth analyses.
In this talk I will present the recent calculation of mixed QCD-electroweak corrections to the neutral-current Drell-Yan production of a pair of massless leptons in the high invariant mass region.
At relatively high values of the dilepton invariant mass, $m_{\ell \ell} \sim 1$ TeV, we observe that these corrections are well approximated by the product of the QCD and electroweak corrections. Hence, thanks to the well-known Sudakov enhancement of the latter, they increase at large invariant mass and reach e.g. $-3\%$ at $m_{\ell \ell} = 3$ TeV. I will discuss some technical aspects of the calculation, as well as results for fiducial cross sections and a selection of kinematic distributions.
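The factorized approximation mentioned above can be written compactly: expressing the cross section as
$$ \mathrm{d}\sigma \simeq \mathrm{d}\sigma_{\rm LO}\,\big(1 + \delta_{\rm QCD}\big)\big(1 + \delta_{\rm EW}\big), $$
the mixed $\mathcal{O}(\alpha_s\alpha)$ correction is approximated by the product $\delta_{\rm QCD}\,\delta_{\rm EW}$, which inherits the Sudakov growth of $\delta_{\rm EW}$ at large $m_{\ell\ell}$.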
The LUXE experiment (LASER Und XFEL Experiment) is an experiment in planning at DESY Hamburg using the electron beam of the European XFEL. LUXE is intended to study collisions between a high-intensity optical laser pulse and 16.5 GeV electrons from the XFEL electron beam, as well as collisions between the laser pulse and high-energy secondary photons. This will elucidate quantum electrodynamics (QED) at the strong-field frontier, where the electromagnetic field of the laser is above the Schwinger limit. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum, similar to Hawking radiation from black holes. LUXE intends to measure the positron production rate in an unprecedented laser intensity regime. The experiment received stage-0 critical approval (CD0) from the DESY management and is in the process of preparing its Technical Design Report (TDR). It is expected to start running in 2024/25. An overview of the LUXE experimental setup and its challenges and progress will be given, along with a discussion of the expected physics reach in the context of testing QED in the non-perturbative regime.
The process $e^+e^- \to q\bar{q}$ with $q = s, c, b, t$ plays a central role in the physics programs of high-energy electron-positron colliders operating at center-of-mass energies from O(100 GeV) to O(1 TeV). Furthermore, polarised beams, as available at the International Linear Collider (ILC), are an essential input for the complete measurement of the helicity amplitudes that govern the production cross section. Quarks, especially the heavier ones, are likely messengers to new physics, and at the same time these are ideal benchmark processes for detector optimisation. All four processes call for superb primary and secondary vertex measurements, high tracking efficiency to correctly measure the vertex charge, and excellent hadron identification capabilities. Strange, charm and bottom production are already available below the ttbar threshold. We will show with detailed detector simulations of the International Large Detector (ILD) that the production rates and forward-backward asymmetries of the different processes can be measured at the 0.1%-0.5% level, and how systematic errors can be controlled to reach this level of accuracy. The importance of operating at different center-of-mass energies and the discovery potential in terms of Randall-Sundrum models with warped extra dimensions will be outlined.
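As a reminder of the observable in question, the forward-backward asymmetry is built from the polar angle $\theta$ of the outgoing quark with respect to the incoming electron,
$$ A_{FB} = \frac{N(\cos\theta > 0) - N(\cos\theta < 0)}{N(\cos\theta > 0) + N(\cos\theta < 0)}, $$
which is why the vertex-charge measurement, distinguishing quark from antiquark jets, is essential.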
In this talk, we will present state-of-the-art results for the QED Parton Distribution Functions (PDFs), which have been recently pushed up to next-to-leading logarithmic (NLL) accuracy. NLL PDFs properly take into account the mixing of the electron/positron with the photon and the other fermions, running-$\alpha$ effects, and the dependence on the renormalisation and factorisation scheme. We will also discuss the inclusion of NLL PDFs in the automated MADGRAPH5_AMC@NLO framework, which has been equipped with next-to-leading order (NLO) electroweak corrections, and we will present first NLL+NLO predictions for physical observables at lepton colliders.
Belle II is considering upgrading SuperKEKB with a polarized electron beam. The introduction of beam polarization to the experiment would significantly expand the physics program of Belle II in the electroweak, dark, and lepton flavor universality sectors. For all of these future measurements a robust method of determining the average beam polarization is required to maximize the level of precision. The $BABAR$ experiment has developed a new beam polarimetry technique, Tau Polarimetry, capable of measuring the average beam polarization to better than half a percent. Tau Polarimetry strongly motivates the addition of beam polarization to SuperKEKB and could also be used at future $e^+e^-$ colliders such as the ILC.
The LHeC and the FCC-he offer singular possibilities for measurements of top properties and EW parameters in DIS, due to both their large centre-of-mass energies and high luminosities. In this talk we will review the most recent studies. We will revisit the determination of the top mass through inclusive measurements. In addition, we will address the possibilities for precise measurements of the $Wtb$ and $\gamma t\bar{t}$ couplings, and competitive searches for FCNC top couplings. We will show the possibilities for precise measurement of EW parameters in a simultaneous PDF+EW fit, including the $W$, $Z$ and top masses, weak neutral-current couplings and the effective EW mixing angle, and the unique possibilities for anomalous couplings in charged-current DIS.
Reference: P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
This poster presents a search for dark matter produced in association with a Higgs boson decaying to b-quarks, using data corresponding to an integrated luminosity of 139/fb collected with the ATLAS detector in pp collisions at $\sqrt{s}$ = 13 TeV at the Large Hadron Collider. The targeted events typically contain large missing transverse momentum and either two b-tagged small-radius jets or a single large-radius jet associated with two b-tagged subjets. No significant deviation from Standard Model expectations is observed. The results are interpreted in two benchmark models with two Higgs doublets, extended by either a heavy vector boson $Z'$ or a pseudoscalar singlet $a$, which provide a dark matter candidate $\chi$. Significant improvements in sensitivity have been achieved with respect to previous results, owing to optimized event selections as well as advances in object identification, such as the use of the likelihood-based significance of the missing transverse energy and variable-radius track jets. In the case of the Two-Higgs-Doublet model with an additional vector boson $Z'$, the observed limits extend up to a $Z'$ mass of 3.1 TeV at 95% confidence level for a mass of 100 GeV for the dark matter candidate. For the Two-Higgs-Doublet model with an additional pseudoscalar $a$, masses of $a$ are excluded up to 520 GeV and 240 GeV for $\tan\beta$ = 1 and $\tan\beta$ = 10, respectively, for a dark matter mass of 10 GeV. In addition, limits on the visible cross sections are set, ranging from 0.05 fb to 3.26 fb depending on the missing transverse momentum and b-quark jet multiplicity requirements.
In this poster, we present the development of an ongoing search for dark matter particles with sub-GeV masses using the MicroBooNE detector. The MicroBooNE experiment is a liquid argon time projection chamber located at Fermilab, with excellent calorimetry and particle identification capabilities. We consider dark matter particles produced in decays of neutral mesons from the NuMI beam. These particles can travel unimpeded to the MicroBooNE detector, where they may scatter off an argon nucleus, producing a lepton-antilepton pair. This distinctive final state produced by the dark matter scattering has been named the dark trident interaction. We explore two event selection strategies, one using a traditional boosted decision tree and a second applying state-of-the-art deep learning techniques. We compare the performance of both selection methods, working independently and combined. Finally, we present an estimated sensitivity of the MicroBooNE experiment to this dark matter interaction channel.
The PandaX-4T experiment, which aims to detect dark matter using a liquid xenon detector, is located in the China Jinping Underground Laboratory (CJPL). Various ultralow-background technologies are used to control the intrinsic/surface backgrounds, including HPGe gamma spectroscopy, ICP-MS, NAA, a radon emanation measurement system, a krypton assay station and an alpha detection system. Combining measured results and Monte Carlo simulation, the electron recoil and nuclear recoil backgrounds from material intrinsic radioactivity, radon emanation and krypton are calculated.
In this poster, an overview of the PandaX-4T material screening program, surface background control and background analysis will be presented.
A search for dark matter (DM) produced in association with top quarks, with a focus on the dileptonic channel, is presented. This kind of search provides sensitivity to models where the DM couples to the Standard Model (SM) via a spin-0 mediator with a Yukawa coupling, which can arise in a number of BSM physics scenarios, for example the 2HDM+a model. This analysis is part of the CMS search covering the dileptonic, semileptonic and fully hadronic final states with the full Run-2 dataset, which combines for the first time the top quark pair + DM and single top + DM processes, greatly improving sensitivity to the highest mediator masses in the search.
The dileptonic channel poses an interesting challenge due to a large amount of missing transverse momentum in the SM $t\bar{t}$ background, and an irreducible $t\bar{t}Z$ ($Z\to\nu\nu$) background. This analysis therefore uses novel variables and machine learning techniques in the signal extraction, and new control regions to constrain the irreducible backgrounds.
We discuss the correlation between the $B \to K^{(\ast)} \nu \nu$ and $B \to K^{(\ast)} \ell \ell$ decay modes in the Standard Model (SM) and several of its popular extensions. This helps obtain a more accurate SM determination of $\mathcal{B}(B \to K^{(\ast)} \nu \nu)$, which is useful in view of the upcoming experimental measurement at Belle II. In addition, we also show the impact of the measurements of $R(K^\pm)$ and $R(K^0)$ on scenarios with extra sterile neutrinos.
The LHeC is a proposed upgrade of the HL-LHC to provide electron-hadron collisions with centre-of-mass energies of $\mathcal{O}(1)$ TeV and instantaneous luminosities of $\mathcal{O}(10^{34})$ cm$^{-2}$s$^{-1}$. The existing design identifies IP2 as the interaction point. In this talk we present initial accelerator considerations for a common interaction region which could alternately serve $eh$ and $hh$ collisions at the HL-LHC, while the other experiments would stay on $hh$ in either configuration [1]. A forward-backward symmetrised option of the LHeC detector is sketched, which would permit extending the LHeC physics programme to also include aspects of hadron-hadron and heavy-ion physics.
[1] K. D. J. Andre et al., An experiment for electron-hadron scattering at the LHC, Eur. Phys. J. C 82 (2022) 1, 40, e-Print: 2201.02436 [hep-ex].
We propose and describe a dark matter particle which is consistent with current experiments and observations, and which should be detectable within the next 1-5 years [1,2]. This particle is unique in that it has (i) precisely defined couplings and (ii) a well-defined mass of about 72 GeV. It has not yet been detected because it has no interactions other than second-order gauge couplings to W and Z bosons. However, these weak couplings are still sufficient to enable observation by direct detection experiments which should be fully functional within the next few years, including XENONnT, LZ, and PandaX. The cross-section for collider detection at LHC energies is small (roughly 1 fb), but observation may ultimately be achievable at the high-luminosity LHC, and should certainly be within reach of the even more powerful colliders now being planned. It is possible that the present dark matter candidate has already been observed via indirect detection: several analyses of gamma rays from the Galactic center, observed by Fermi-LAT, and of antiprotons, observed by AMS-02, have shown consistency with the interpretation that these result from annihilation of dark matter particles having roughly the same mass and annihilation cross-section as the present candidate. Finally, there is consistency with the observations of Planck, which have ruled out many possible candidates with larger masses. The most promising signature for collider detection appears to be missing transverse energy of > 145 GeV accompanied by two jets, following creation through vector boson fusion. The most promising mechanism for direct detection appears to be a one-loop process involving exchange of two vector bosons. The present dark matter particle and the lightest susy neutralino (as well as an axion-like particle) can stably coexist in a multicomponent dark matter scenario, which results from a fundamental picture which predicts both an extended Higgs sector and supersymmetry [3].
[1] Reagan Thornberry, Maxwell Throm, Gabriel Frohaug, John Killough, Dylan Blend, Michael Erickson, Brian Sun, Brett Bays, and Roland E. Allen, "Experimental signatures of a new dark matter WIMP", EPL (Europhysics Letters) 134, 49001 (2021).
[2] Caden LaFontaine, Bailey Tallman, Spencer Ellis, Trevor Croteau, Brandon Torres, Sabrina Hernandez, Diego Cristancho Guerrero, Jessica Jaksik, Drue Lubanski, and Roland E. Allen, "A Dark Matter WIMP That Can Be Detected and Definitively Identified with Currently Planned Experiments", Universe 7, 270 (2021).
[3] Roland E. Allen, "Predictions of a fundamental statistical picture", arXiv:1101.0586 [hep-th].
MUonE is a proposed experiment which aims at an independent and precise determination of the muon g-2, based on the measurement of the hadronic contribution to the running of the electromagnetic coupling constant in the space-like region. This can be achieved by measuring with extremely high accuracy the shape of the differential cross section of μe elastic scattering, using the 160 GeV muon beam available at CERN on the atomic electrons of a light graphite target. Geant4 simulations are required in order to predict the level of noise expected in the proposed experiment. For this reason, a preliminary version of the final MUonE setup has been simulated with Geant4 10.7, the latest version containing the relevant updated settings. This is the recommended version for the study of MUonE due to its correct estimation of the angular distribution of e+e- production from muon interactions in the graphite material. In this talk, two related studies utilizing the Geant4 emstandard_opt4 physics list will be presented: one involving direct standalone tests and another employing simulation and reconstruction using the new FairRoot release with a Geant4 implementation. In both cases, the Geant4 10.7 version has been compared to older versions in terms of energy distributions and angular correlations. Finally, prospects for the comparison of the simulation with the results of future beam tests and test runs will be discussed.
We analyze a flavor-symmetric model of neutrino masses and mixing based on the $A_4$ discrete symmetry. Here both minimal type-I seesaw and scotogenic mechanisms contribute to explaining the tiny light neutrino masses. The minimal type-I seesaw generates tribimaximal neutrino mixing at leading order. The scotogenic contribution acts as a deviation from this first-order approximation of the lepton mixing matrix, yielding the observed non-zero $\theta_{13}$ and accommodating a potential dark matter candidate. Apart from predicting interesting correlations between different neutrino parameters, as well as between neutrino and model parameters, the model also predicts specific values for the absolute neutrino masses, the leptonic Dirac and Majorana CP phases, and the effective mass parameter appearing in neutrinoless double beta decay.
To investigate the capabilities of the Euclid Near Infrared Spectrometer and Photometer (NISP), Spectral Energy Distribution (SED) models of galaxies located at 0.3 ≤ z ≤ 2.5 have been constructed, simulated using the TIPS simulator of the NISP red grism, and analyzed focusing on emission-line measurements.
These simulations will enable evaluating the spectroscopic survey performance of the Euclid mission and confirming that the slitless NISP spectrometer will match the requirements in terms of detection limits for the continuum and nebular emission lines.
The construction of the SEDs to be provided to the simulator consists of computing a continuum using the Bruzual & Charlot (2003) models, drawing on best-fit SED parameters available in the publicly released catalogs from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS), namely BARRO2019 (Barro et al. 2019), covering the Great Observatories Origins Deep Survey North field (GOODS-N), and COSMOS2015 (Laigle et al. 2016). The nebular Balmer, [NII]λλ6584,6549, [OII]λλ3727,3729, [OIII]λλ5007,4959, [SII]λλ6731,6717, [SIII]λλ9531,9069 and Paschen emission lines are added making use of calibrations available in the literature. We refer to common tools and indicators for emission-line analysis such as the Star Formation Rate (SFR), the Baldwin-Phillips-Terlevich (BPT) diagram, the Mass-Metallicity Relation (MZR) and photoionization models. The emission lines are then added to the continuum, accounting for the calculated velocity dispersion of each galaxy. A photometric and spectroscopic comparison and calibration of the constructed SEDs with observational data is then applied to ensure a realistic distribution of the calculated fluxes. The 1D simulated spectra are obtained using the official Euclid reduction pipeline. These simulations are part of the Euclid Consortium pre-launch efforts to characterize systematics through an end-to-end analysis.
We provide a confirmation of the detection-limit specifications for the continuum (i.e. H(AB) ≥ 19.5 mag) and emission lines (i.e. flux ≥ 2 × 10^-16 erg cm^-2 s^-1 for the Euclid Wide Survey configuration and flux ≥ 6 × 10^-17 erg cm^-2 s^-1 for the Deep Field survey configuration). We also provide an estimate of the NISP spectral resolution and its dependence on morphological parameters (e.g. disk size), and present an analysis stacking spectra located at 1.8 ≤ z ≤ 1.95, attesting to the great potential of the method in confirming redshift determinations, a crucial aspect for the Euclid mission.
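A minimal sketch of the line-integration step described above (adding a velocity-broadened Gaussian emission line onto a continuum); the grids, units and values are illustrative assumptions, not the actual pipeline.

import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def add_emission_line(wave, continuum, line_wave, line_flux, sigma_v):
    """Add a Gaussian emission line of integrated flux line_flux,
    Doppler-broadened by the galaxy velocity dispersion sigma_v (km/s),
    on top of a continuum sampled on the wavelength grid `wave`."""
    sigma_lam = line_wave * sigma_v / C_KMS                # km/s -> wavelength width
    profile = np.exp(-0.5 * ((wave - line_wave) / sigma_lam) ** 2)
    profile /= np.trapz(profile, wave)                     # normalize to unit integral
    return continuum + line_flux * profile

# Toy usage: H-alpha (rest frame) on a flat continuum
wave = np.linspace(6400.0, 6700.0, 3000)                   # Angstrom
sed = add_emission_line(wave, np.ones_like(wave), 6562.8, 5.0, 150.0)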
Based upon simulations, calculations, and a detection scheme reported earlier, with some recent modifications, we present a proposal for a narrow-mass-range search for invisible axions (or Axion-Like Particles, ALPs) with high potential for success. Our model is based upon the central assumption that the axionic (or ALP) field is the dominant field with observable density permeating our local neighborhood, so that the local axion density is the density of the local light cold dark matter. The search targets a narrow axion Compton-frequency window of 18.5 to 19.5 GHz (within the Ku microwave band), corresponding to an axion mass range of 76.8 to 81.0 µeV, centered at the value suggested in this report as the most likely value for the axion mass. We are confident that if an axionic/axion-like field exists, it would most likely be found at this mass value/Compton-frequency prediction.
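The quoted mass window follows from the Compton relation between the axion mass and frequency,
$$ m_a c^2 = h\,\nu_a \;\;\Longrightarrow\;\; m_a \approx 4.136\ \mu\mathrm{eV} \times \frac{\nu_a}{1\ \mathrm{GHz}}, $$
so 18.5-19.5 GHz maps to roughly 76.5-80.7 µeV, close (up to rounding conventions) to the quoted 76.8-81.0 µeV window.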
The ATLAS Visitor Centre at CERN is a guided exhibition space that has been welcoming visitors from around the world since 2009. In a recent effort, ATLAS has reinvented the exhibition, replacing the original installation with a completely new one. This contribution will highlight the basic concept behind the new exhibition, introduce its main components along with details of their implementation, and hint at the process of getting from an idea to the final implementation. This contribution will also present some of the efforts to make the exhibition more inclusive and accessible to a wider and more diverse audience.
We have developed a novel approach to reconstruct events detected by a water-based Cherenkov detector, such as Super- and Hyper-Kamiokande, using an innovative deep learning algorithm. The algorithm is based on a Generative Neural Network whose parameters are obtained by minimizing a loss function. In the training process with simulated single-particle events, the Generative Neural Network is given the particle identification (ID), 3D momentum (p), and 3D vertex position (V) as inputs for each training event. The network then generates a Cherenkov event that is compared with the corresponding true simulated event. Once training is done, for a given Cherenkov event the algorithm provides the best estimates of ID, p, and V by minimizing the loss function between the given event and the generated event over ranges of input values of ID, p and V. The algorithm serves as a type of fast simulation for a water Cherenkov detector with fewer assumptions than traditional reconstruction methods. We will show some of the algorithm's excellent performance, in addition to the architecture and principle of the network.
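A minimal sketch (in PyTorch, with assumed shapes and an MSE stand-in for the experiment's loss function) of the inference step described above: the trained generator is frozen and the continuous event parameters are fitted by gradient descent; the discrete particle ID would be scanned over as separate hypotheses.

import torch

def fit_event(generator, observed, n_steps=500, lr=1e-2):
    """Fit (3-momentum p, vertex v) of one particle-ID hypothesis by
    minimizing the loss between generated and observed hit patterns.
    `generator` is a trained, frozen network mapping (p, v) to the
    predicted PMT charge/time pattern with the same shape as `observed`."""
    p = torch.zeros(3, requires_grad=True)   # momentum seed (units assumed)
    v = torch.zeros(3, requires_grad=True)   # vertex seed (units assumed)
    opt = torch.optim.Adam([p, v], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(p, v), observed)
        loss.backward()
        opt.step()
    return p.detach(), v.detach(), loss.item()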
We propose a novel probabilistic model for the reconstruction of point-source events, with the dependence of the scintillation-light time response curve on the PE number deduced from first principles. It follows naturally from the time response curve and is unbiased.
The Jiangmen Underground Neutrino Observatory (JUNO) detector is a 20 kton underground liquid scintillator detector, with the primary physics goal of determining the neutrino mass hierarchy. At JUNO, the model is applicable to the small PMTs with first-photoelectron time and charge integration readouts, but it can also be used for fast reconstruction of the large PMTs with waveform readouts. We evaluate the performance based on JUNO Monte Carlo simulations.
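One generic way such a PE-number dependence arises from first principles is via order statistics: if a PMT registers $N$ photoelectrons whose emission times follow a normalized response $\phi(t)$ with cumulative distribution $\Phi(t)$, the first-photoelectron time is distributed as
$$ f_1(t\,|\,N) = N\,\phi(t)\,\big[1 - \Phi(t)\big]^{N-1}, $$
which shifts toward earlier times as $N$ grows. This is a sketch of the mechanism, not necessarily the exact model of the contribution.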
The proposed magnetised Iron Calorimeter (ICAL) detector at the India-based Neutrino Observatory (INO) aims to study neutrino oscillation parameters through interactions of atmospheric neutrinos at ICAL. We present here the first results of a tau neutrino event analysis at ICAL. We have written a C-based Monte Carlo neutrino event generator for the proposed detector. It generates Charged Current (CC) and Neutral Current (NC) events in the Deep Inelastic Scattering (DIS) regime using the Honda neutrino flux weighted by the cross section. Using our generator, we have shown the significance of the tau neutrino flux arising from neutrino oscillations over the NC background in the DIS regime in atmospheric neutrinos. We also carried out our study using event samples generated by the existing NUANCE neutrino event generator, which includes events from the Elastic, Quasi-Elastic and DIS regimes. From the analysis with NUANCE events, we have established that tau neutrino events can be detected at over 3 sigma confidence level for an exposure of 10 years. We have also studied the increased reach of ICAL in the neutrino oscillation parameters $\sin^2\theta_{23}$ and $\Delta m_{32}^2$ when including the tau events, and we show how this overcomes the limitation due to systematic uncertainties. Finally, we show the significant improvement in the oscillation parameter measurements from a combined study of mu neutrino and tau neutrino events.
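The tau appearance signal is driven by the standard two-flavor atmospheric oscillation probability,
$$ P(\nu_\mu \to \nu_\tau) \simeq \sin^2 2\theta_{23}\; \sin^2\!\left( \frac{1.27\,\Delta m^2_{32}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right), $$
which is near-maximal for the atmospheric baselines and GeV-scale energies relevant to ICAL.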
Silicon photomultipliers (SiPMs) have been selected as the candidate photodetector technology for the dual-radiator Ring-Imaging Cherenkov (dRICH) detector at the future Electron-Ion Collider (EIC). SiPM optical readout offers a large set of advantages: the devices are cheap, highly efficient, and insensitive to the high magnetic field (~1.5 T) at the expected location of the sensors in the experiment. On the other hand, SiPMs are not radiation tolerant, and although the integrated radiation level is expected to be moderate (< 10^11 1-MeV neq/cm^2), it must be tested whether the single-photon-counting capabilities and the increase in Dark Count Rate (DCR) can be kept under control to maintain the optimal dRICH detector performance across the years.
Several options are available to keep the DCR at an acceptable rate (below ~100 kHz/mm^2), namely reducing the SiPM operating temperature, using the timing information with high-precision TDC electronics, applying selection cuts based on bunch-crossing information, and recovering the radiation damage with high-temperature annealing cycles.
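A back-of-the-envelope illustration of why the timing cut is so effective; all numbers below are assumptions for illustration, not dRICH design values.

# Dark counts accepted by a tight timing window (illustrative numbers only).
dcr_per_mm2 = 100e3          # Hz/mm^2, the target rate quoted above
area_mm2 = 9.0               # assume a 3x3 mm^2 SiPM
window_s = 1e-9              # assume a ~1 ns TDC coincidence window

dark_rate = dcr_per_mm2 * area_mm2            # Hz per sensor
dark_hits_per_window = dark_rate * window_s   # expected dark hits per window
print(f"{dark_rate:.2e} Hz -> {dark_hits_per_window:.1e} dark hits/window")
# ~1e-3 expected dark hits per window: random dark counts are suppressed
# by orders of magnitude relative to an untimed readout.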
In this presentation we report the current status of this research and the first results of studies performed on a large sample of commercial (Hamamatsu) and prototype (FBK) SiPM sensors. The devices have undergone an irradiation campaign in which an increasing NIEL dose up to 10^11 1-MeV neq/cm^2 was delivered to different sensor subsets. The sensors have then undergone high-temperature annealing cycles to recover the radiation damage. The results obtained with a complete readout system based on the first 32-channel prototypes of the ALCOR ASIC chip are also reported.
We studied the four models implemented in PYTHIA8 for the production of dark matter or associated particles at the LHC, based on the simplest extensions of the Standard Model. The first model includes dark matter production via s-channel mediators, including production in association with a jet for a vector or scalar mediator. Aside from the standard simplified models where the dark matter is accompanied by a new s-channel mediator, two other models were studied where the dark matter particle is accompanied by charged partners that may be produced via Drell-Yan production. The fourth model is a generalized model of mixed dark matter, where the dark matter is a mixture of an SU(2) singlet and N-plet. We find that the last two models are also ideally suited to study the production of a range of long-lived particle signatures.
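A minimal sketch of enabling one of these simplified-model processes through the Pythia 8 Python interface; the process flag and particle ids follow our reading of the Pythia 8 dark-matter framework documentation and should be checked against the version in use, and the masses are arbitrary examples.

import pythia8  # requires a Pythia 8 build with the Python interface

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")       # LHC pp collisions
# s-channel spin-1 mediator (Z') decaying to a dark-matter pair.
# Flag and ids (52 = chi, 55 = Z') as documented in the Pythia 8
# dark-matter framework; verify against your Pythia version.
pythia.readString("DM:ffbar2Zp2XX = on")
pythia.readString("55:m0 = 2000.")            # example mediator mass, GeV
pythia.readString("52:m0 = 100.")             # example DM mass, GeV
pythia.init()
for _ in range(10):
    pythia.next()
pythia.stat()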
Some mesons, produced by interactions between primary cosmic rays and air molecules in the upper atmosphere, decay into muons without further interaction. The density of the atmosphere decreases as its temperature increases, reducing the chance that secondary cosmic-ray particles interact with atmospheric molecules and hence increasing their chance of decaying into muons. Experimental results show a positive correlation between the muon flux and the effective atmospheric temperature. This phenomenon has been observed in the Daya Bay experiment, and the correlation coefficient under different overburdens was measured using approximately two years of data. This poster will report the status of a more precise measurement of the correlation coefficient with more data from the Daya Bay Reactor Neutrino Experiment.
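The measured quantity is the standard effective-temperature coefficient $\alpha_T$, defined through
$$ \frac{\Delta R_\mu}{\langle R_\mu \rangle} = \alpha_T\, \frac{\Delta T_{\rm eff}}{\langle T_{\rm eff} \rangle}, $$
where $R_\mu$ is the underground muon rate and $T_{\rm eff}$ is an altitude-weighted average of the atmospheric temperature.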
Junior high school (scuola media) represents the weakest sector of the education system in Italy and perhaps in other countries.
This is largely due to the absence of specific training for teachers during their academic studies, in contrast to what occurs for primary school. This also holds for the teaching of science in general and of physics in particular. In most cases science teachers have a master's degree in biology or mathematics and have to cover a huge program, touching in principle all disciplines of the hard and life sciences, for which they received no specific training during their studies, either in content or in teaching methodology. As a result, teachers, perceiving the weakness of their preparation, tend to avoid addressing important aspects of physics in their lessons and do not adopt approaches more stimulating than a traditional frontal lecture.
Furthermore, students of this age (11-13) are living through a period in which they have to make choices that will influence the rest of their lives. Hence, offering them the opportunity to discover the beauty and importance of science, which permeates every aspect of everyday life, may have an even stronger social impact than outreach activities devoted to high-school students, usually focused on more advanced topics of modern physics.
Finally, most schools have very limited laboratory equipment in their science classrooms, rarely exploited for experimental learning of STEM disciplines. Hence, as INFN researchers occasionally called to collaborate with single schools, a few years ago we decided to pool our experiences and to transform this limitation into an opportunity, planning experimental activities which can be carried out in any context - even in the absence of a laboratory - exploiting cheap materials which any student can find at home, but which have the potential to let the students discover by themselves how science enters into every aspect of everyday life.
This approach to the teaching of science (and physics in particular) requires appropriate training. Hence, in fall 2017 the first edition of the INFN "AggiornaMenti" program started in Torino: a course for junior high school teachers on a cooperative "learning-by-doing" approach to the teaching of science. After that first edition, AggiornaMenti has become a national INFN project supported by the INFN outreach commission, currently involving 10 local sections of the institute across Italy. Each year, even during the pandemic, when the activity was carried out mostly online, about 100 science teachers have attended our education program, which includes hands-on activities covering all aspects of classical physics: mechanics, fluid dynamics, thermodynamics, acoustics, optics, electromagnetism... Whenever possible, experiences displaying the connections of physics with the life and earth sciences are proposed. Since 2021 the project has also offered an online coding school on Scratch and Arduino, run by the INFN-Ferrara section (about 30 participants/year), which can be attended by participants of the other local editions or as a standalone module.
Our hands-on approach to the learning of science, based on practical activities carried out in small groups, is suited to bringing out a broader range of skills than a traditional frontal lecture, and this has led some of the participating teachers to partially change the evaluation criteria for their students: those most brilliant at studying a book are not necessarily the most brilliant at solving a practical problem that occurs during an experiment. Besides letting students discover new skills, this mixing of expertise is also what is needed in any scientific collaboration.
During these years, fruitful collaborations have been established with other social and educational agencies such as Fondazione Golinelli (www.fondazionegolinelli.it), Next-Level (www.next-level.it) and Laboratorio Scienza (www.laboratorioscienza.it), which have provided precious help in planning the activities and in establishing connections with networks of schools, in particular in territories characterized by strong social fragility.
A general overview of our project can be found on our webpage, aggiornamenti.to.infn.it, currently being updated.
Our YouTube channel (https://www.youtube.com/channel/UCuN0rpzvEuC57HDFObRumGA) and our Facebook page (https://www.facebook.com/AggiornaMenti-1873114119437696) contain the public multimedia materials produced over the last few years, in particular during the pandemic, when the demand for online content accessible to schools suddenly increased.
In this talk the INFN "AggiornaMenti" program devoted to the education of junior high school teachers will be presented, showing also the results of a feedback survey among the participants of the past editions of the project. Ideas and suggestions from analogous experiences carried out in other countries are welcome.
Making the large datasets collected at the LHC accessible to the public is a considerable challenge given the complexity and volume of the data. Yet, to harness the full scientific potential of the facility, it is essential to enable meaningful access to the data by the broadest physics community possible. Here we present a tool, the LHCb NTuple Wizard, which leverages the existing computing infrastructure available to the LHCb collaboration in order to enable third-party users to request derived data samples in the same format used in LHCb physics analysis. An intuitive web interface allows for the discovery of accessible datasets and guides the user through the process of specifying a request for producing NTuples: an ordered set of particle or decay candidates cataloging measured quantities chosen by the user. Issues of computer security and access control arising from offering this service are addressed within its design, while still delivering datasets suitable for scientific research through the CERN Open Data Portal.
Although many suggestions for BSM searches at future colliders exist, most of them concentrate on additional scalars with masses above that of the observed SM scalar. I will give a short overview of the current status of models and searches for scalars with masses below this. Based on https://arxiv.org/abs/2203.08210.
An analysis of the anisotropy of the arrival directions of galactic positrons and electrons has been performed with the Alpha Magnetic Spectrometer on the International Space Station. These results differentiate between point-like and diffuse sources of cosmic rays as explanations of the observed excess of high-energy positrons. The AMS results on the dipole anisotropy are presented along with a discussion of the implications of these measurements.
DAMA/LIBRA has consistently reported an observation of annual modulation in the residual event rate over 20 years, but no definitive evidence has come from other experiments. Apart from the dark matter hypothesis, recent studies have reported that the annual modulation of DAMA/LIBRA could be due to a slowly varying time-dependent background remaining after subtraction of the average background each year. Here, we present the COSINE-100 annual modulation analysis using a method similar to that of DAMA/LIBRA. We also generated simulated pseudo-data for DAMA/LIBRA at 2-6 keV without a dark matter signal, assuming the same background composition. We observe annual modulation with a similar amplitude but opposite phase.
The current and the next decade will be characterized by an exponential increase in the exploration of space Beyond Low Earth Orbit (BLEO). Moreover, the first attempts to create structures enabling a permanent human presence in BLEO are foreseen. In this context, a detailed characterization of the space radiation field will be crucial to optimize radioprotection strategies (e.g., shielding of spaceships and lunar space stations, Moon/Mars village design), to assess the health risks related to human space exploration, and to reduce the damage potentially induced in astronauts by galactic cosmic radiation. On the other hand, since the beginning of the century, many astroparticle experiments aimed at investigating the unknown components of the universe (i.e., dark matter, antimatter, dark energy) have collected enormous amounts of data on the cosmic-ray (CR) components of the radiation in space. Such experiments are, in essence, cosmic ray observatories. The collected data (cosmic ray events) cover a significant period and provide integrated information on CR fluxes and their time variations on a daily basis. Furthermore, the energy range is of particular interest, since the detectors measure CRs over a very wide range, usually from the MeV scale up to the TeV scale, not usually covered by other space radiometric instruments. Finally, there is the possibility of acquiring knowledge of the full range of CR components and their radiation quality. The collected data contain valuable information that can enhance the characterization of the space radiation field and, consequently, improve one of the most relevant topics of space radiobiology: dose-effect models. In this talk, the state of the art in this topic will be presented, as well as a related Research Topic initiative titled "Astroparticle Experiments to Improve the Biological Risk Assessment of Exposure to Ionizing Radiation in the Exploratory Space Missions", which we launched in December 2021 in three different Frontiers journals (Astronomy and Space Science/Astrobiology, Public Health/Radiation and Health, Physics/Detectors and Imaging).
The algorithm used in the alignment of the Inner Detector (ID) of the ATLAS experiment is based on track-to-hit residual minimization in a sequence of hierarchical levels (ranging from mechanical assembly structures to individual sensors). It aims to describe the detector geometry and its changes in time as accurately as possible, such that the resolution is not degraded by an imperfect positioning of the signal hit in the track reconstruction. The ID alignment during Run 2 has proven to describe the detector geometry with a precision at the level of $\mu m$ [1]. The track-to-hit residual minimization procedure is, however, not sensitive to deformations of the detector that affect the track parameters while leaving the residuals unchanged. Such geometry deformations are called weak modes. The minimization of the remaining track-parameter biases and weak-mode deformations has been the main target of the alignment campaign for the reprocessing of the Run 2 data. New analysis methods for the weak-mode measurement have therefore been implemented, providing a robust geometry description, validated by a wide spectrum of quality-assessment techniques. These novelties are foreseen to be the new baseline methods for the Run 3 data-taking, in which the higher luminosity will allow an almost real-time assessment of the alignment performance. [1] Eur. Phys. J. C 80, 1194 (2020)
For Run 3 data taking, the track reconstruction algorithm used for the ATLAS Inner Detector has been optimized with a particular focus on minimizing the number of erroneous and low-quality tracks processed, by rejecting them as early as possible. This ensures that a collection of high-quality tracks is delivered to downstream reconstruction and physics, a key aspect in ATLAS. This poster is dedicated to the modeling of track reconstruction in the core of high-transverse-momentum jets, presenting new measurements of the charged-particle reconstruction inefficiency and fake rate inside jets.
The Virtual Visit service run by the ATLAS Collaboration has been active since 2010. The ATLAS Collaboration has used this popular and effective method to bring the excitement of scientific exploration and discovery into classrooms and other public places around the world. The programme, which uses a combination of video conferencing, webcasts, and video recording to communicate with remote audiences, has already reached tens of thousands of viewers, in a large number of languages, in tens of countries across all continents. We present a summary of the ATLAS Virtual Visit service currently in use, including a new booking system and a hand-held video conference setup from the ATLAS cavern, and present a new system that is being installed in the ATLAS Visitors Centre and in the ATLAS cavern. In addition, we show the reach of the programme over the last few years.
The new ATLAS pixel detector that will operate at the HL-LHC will consist of 5 barrel layers and several end-cap disks, equipped with pixel modules. New strategies are under development to safely and accurately load these pixel modules onto carbon-based local-support structures. The local supports provide both support and cooling to the modules. An efficient thermal path between the module and the local support must be guaranteed to ensure the optimal performance of the modules. Therefore, the interface (adhesive) between the module and the local support must be optimized to mechanically fix the modules and to function as an efficient thermal path. In this contribution the strategies used to load modules in prototypes and their evaluation will be discussed and results presented.
We summarize the status of automated NLO SM corrections for hadron and lepton collider processes in the multi-purpose event generator WHIZARD. The focus will be on NLO EW and QCD-EW mixed corrections at the LHC. Also, recent progress on the inclusion of EW corrections in future lepton collider processes and on the POWHEG-matched event generation in the NLO automated setup will be discussed.
The High Energy Accelerator Research Organization (KEK), together with the National Institute of Technology (KOSEN), launched "AxeLatoon", an education project for the fabrication of an accelerator, in 2020. This project aims to improve students' engineering skills and foster the next generation of accelerator researchers by providing hands-on training in the field of accelerator science.
In the first year, we collaborated with NIT (KOSEN), Ibaraki College, to build an accelerator. Students took the initiative in this extracurricular activity and took on the challenge of building an accelerator. From 2021, we expanded this project to other prefectures, and four schools are now participating. The design and fabrication of a small cyclotron accelerator is currently underway.
Despite the restrictions on activities and the limited mobility of people due to the novel coronavirus pandemic, the project continues to educate students about basic technologies and accelerators. We hold seminars a few times a month using online communication tools.
In this poster, we share the status of AxeLatoon's activities, based on the students' actual production work at KOSEN, and aim to deepen the discussion on accelerator outreach programs.
Indian Scintillator Matrix for Reactor Anti-Neutrinos (ISMRAN) is an above-ground antineutrino experiment at very short baselines, located at the Dhruva reactor facility in Bhabha Atomic Research Centre, Mumbai. The ISMRAN detector setup consists of an array of 9×10 optically separated Gd-wrapped plastic scintillator bars (PSBs), each 100 cm long with a cross-section of 10 × 10 $cm^2$, enclosed by shielding made of 10 cm thick lead and 10 cm thick borated polyethylene, mounted on a movable base structure situated ~13 m from the reactor core. Antineutrinos are detected via the inverse beta decay (IBD) process, which provides a time-correlated signal pair consisting of a positron energy deposition and a delayed neutron capture in the plastic scintillator, both of which are recorded by the PMTs at the two ends of each PSB. The physics goals of this experiment include searching for potential short-baseline oscillations due to the existence of sterile neutrinos and precisely measuring the antineutrino energy spectrum from a natural-uranium-fueled thermal reactor. The excess of antineutrino events in data compared to predictions, particularly at 5-7 MeV in the measured positron energy spectrum, will also be addressed using the ISMRAN detector setup.
In this article, we will present the optical model, energy resolution model and energy non-linearity model of the PSBs. We will also discuss the natural radioactive and cosmogenic backgrounds, based on their energy deposition, the number of bars hit, and topological event selection criteria in position and time. The reconstructed sum-energy spectrum and number of bar hits for different radioactive gamma sources, such as ${}^{22}Na$ and ${}^{60}Co$, have been compared with Geant4-based Monte Carlo (MC) simulations. The sum energy, number of bar hits and energy-ratio variables for the cascade gamma-rays from the n-Gd capture process are reconstructed using an Am/Be neutron source and compared with the MC simulation in Geant4. These experimentally measured results will be useful for understanding the detector energy response and for discriminating correlated and uncorrelated background events from true IBD events in reactor ON and OFF conditions inside the reactor hall.
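To make the delayed-coincidence logic concrete, the following is a minimal sketch of how prompt positron-like and delayed capture-like events can be paired in a stream of timestamped energy deposits. It is an illustration only: the energy and time windows below are hypothetical placeholders, not the ISMRAN selection values, and the toy stream contains no true IBD pairs, so everything it finds is an accidental coincidence of the kind the reactor ON/OFF comparison must subtract.

    import numpy as np

    # Toy event stream of uncorrelated deposits: times in microseconds, energies in MeV.
    rng = np.random.default_rng(1)
    times = np.sort(rng.uniform(0.0, 1e6, 5000))
    energies = rng.uniform(0.5, 10.0, 5000)

    # Hypothetical windows (placeholders, NOT the ISMRAN values):
    PROMPT_E = (1.0, 8.0)     # positron-like prompt energy, MeV
    DELAYED_E = (3.0, 9.0)    # n-Gd capture cascade sum energy, MeV
    DT = (5.0, 100.0)         # prompt-to-delayed time difference, microseconds

    def find_coincidences(times, energies):
        """Pair each prompt-like event with the first delayed-like event
        inside the coincidence time window."""
        pairs = []
        for i, (t, e) in enumerate(zip(times, energies)):
            if not PROMPT_E[0] < e < PROMPT_E[1]:
                continue
            for j in range(i + 1, len(times)):
                dt = times[j] - t
                if dt > DT[1]:
                    break
                if dt > DT[0] and DELAYED_E[0] < energies[j] < DELAYED_E[1]:
                    pairs.append((i, j, dt))
                    break
        return pairs

    print(len(find_coincidences(times, energies)), "IBD-like candidate pairs in the toy stream")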
Elementary magnetic monopoles have been an open question of electromagnetism for the last 150 years. However, most searches during this period have targeted monopoles in the large-mass and large-magnetic-charge region, and none have been discovered yet. Therefore, assuming that monopoles may exist in the low-mass and low-charge region, we designed an experiment to search for elementary magnetic charges with mass below the electron mass ($m_e$) and charge below the electron charge ($e$). In this talk, we will describe the design of the experiment and present the predicted event rates, the energy resolutions of the detectors, and the potential backgrounds estimated with Geant4 simulations.
As we enter the precision era at the LHC, excluding specific charge-parity (CP) scenarios is no longer enough: we want to detect and precisely measure the angle that determines the possible admixture of CP-even and CP-odd components in the Higgs-top Yukawa coupling. Higgs boson production in association with top quarks ($t\bar{t}H$ and $tH$), in the $H\rightarrow b\bar{b}$ decay channel, offers a unique possibility to study this interaction, since it depends only on Yukawa couplings and relies on the tree-level couplings between the Higgs boson and the fermions. Targeting events where one or both top quarks decay leptonically provides a handle to reconstruct the top quarks, whose four-momenta can be used to construct CP-sensitive observables. In this communication, the first measurement of the CP-mixing angle in the $t\bar{t}H(H\rightarrow b\bar{b})$ channel using the full Run-2 dataset collected by the ATLAS experiment will be presented.
We perform the shadow calculation in a quantum-corrected black hole background and, at the same time, give a generalised prescription for shadow calculation for black holes in an expanding universe. We apply the method to a loop quantum gravity (LQG) motivated regular black hole. In the process, we also construct the rotating LQG-inspired solution from the originally proposed static, spherically symmetric LQG-inspired black hole by applying the modified Newman-Janis algorithm. We study the quantum effects on the shadows of both the non-rotating and rotating loop quantum black hole solutions. We observe that the shadow of the non-rotating AOS black hole is circular, as expected for its classical counterpart, but the LQG-inspired modification contracts the shadow radius, and the effect diminishes as the mass of the black hole increases. Similarly, in the rotating case, we find a contraction of the shadow radius due to quantum effects, together with the tapered shape expected from the classical Kerr case. However, instead of the symmetrical contraction found in the non-rotating case, we find more contraction on one side relative to the other when comparing our result with the shadow of the Kerr black hole. We finally study super-radiance in the rotating background and observe that the super-radiance condition for a massless scalar field is identical to that of the Kerr case, with the black hole rotating faster than Kerr in the low-mass regime.
Quasinormal modes (QNMs), the damped oscillations in spacetime that emanate from a perturbed body as it returns to an equilibrium state, have served for several decades as a theoretical means of studying n-dimensional black hole spacetimes. These black hole QNMs can in turn be exploited to explore beyond the Standard Model (BSM) scenarios and quantum gravity conjectures. With the establishment of the LIGO-Virgo-KAGRA network of gravitational-wave (GW) detectors, there now exists the possibility of comparing computed QNMs against GW data from compact binary coalescences. Encouraged by this development, we investigate whether QNMs can be used in the search for signatures of extra dimensions. To address a gap in the BSM literature, we focus here on higher dimensions characterised by negative Ricci curvature. As a first step, we consider a product space comprised of a 4D Schwarzschild black hole spacetime and a 3D nilmanifold (twisted torus); we model the black hole perturbations as a scalar test field. We find that the extra-dimensional geometry can be stylised in the QNM effective potential as a squared mass-like term. We then compute the corresponding QNM spectrum using three different numerical methods and determine constraints for the extra dimensions for a toy BSM model.
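As a point of reference for the "squared mass-like term", the radial equation for a scalar field of effective mass $\mu$ on a 4D Schwarzschild background has the standard effective potential (units $G=c=1$); the assumption specific to the setup above is only that the extra-dimensional eigenvalue plays the role of $\mu^2$:

$$V(r) = \left(1-\frac{2M}{r}\right)\left[\frac{\ell(\ell+1)}{r^2}+\frac{2M}{r^3}+\mu^2\right],$$

so a larger extra-dimensional eigenvalue raises the potential barrier and shifts the QNM spectrum, which is what makes the modes sensitive to the hidden geometry.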
For more than half a century, expensive and bulky modules (e.g. the standard Nuclear Instrumentation Modules, NIM) and electronic boards have been used in nuclear physics laboratory courses to filter, shape and digitize the analog signals coming from particle detectors. Recently it has become technically possible to miniaturize these circuits within ASICs, but their high cost and specificity make them unsuitable in a didactic and general-purpose context.
In this contribution we present an innovative system for reading and processing the signals produced by radiation detectors, based on simple, cheap and versatile components. The system is built around the "Red Pitaya STEMlab 125-14" [1], a compact board which integrates a CPU, an FPGA, a network port (useful for remote access and control) and two 125 MS/s 14-bit digitizer channels. The software framework necessary for the acquisition, processing, and storage of the signals is based on the "abcd" [2] acquisition system.
This system was experimentally tested in the Nuclear Physics Laboratory course of the Bachelor's Degree in Physics at Insubria University, in Como (Italy). In particular, it was used to read the signals produced by a silicon photodiode and a LaBr$_3$ scintillator in alpha and gamma spectroscopy experiments.
The system performance proved to be equivalent to that obtained with a traditional VME spectroscopy system. The main advantages of this new approach are its compactness, versatility, and low cost, which make it ideal also for high-school laboratories.
References:
[1] Red Pitaya STEMlab 125-14, URL: https://redpitaya.com/stemlab-125-14/
[2] C. L. Fontana, abcd github repository, URL: https://github.com/ec-jrc/abcd/
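To illustrate the kind of digital pulse processing such a board enables, here is a minimal, self-contained sketch of a pulse-height measurement on a digitized waveform: baseline subtraction followed by a peak search, applied to a synthetic exponential pulse. This is a toy under stated assumptions, not the abcd implementation.

    import numpy as np

    def pulse_height(waveform, n_baseline=100):
        """Estimate the pulse height: subtract the mean of the first
        n_baseline samples (the baseline), then take the maximum."""
        baseline = waveform[:n_baseline].mean()
        return (waveform[n_baseline:] - baseline).max()

    # Synthetic 14-bit waveform at 125 MS/s (8 ns/sample): noisy baseline
    # around mid-scale plus an exponential pulse starting at sample 500.
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    wave = 8192.0 + 5.0 * rng.standard_normal(t.size)
    wave += 1500.0 * np.exp(-(t - 500).clip(0) / 300.0) * (t >= 500)

    print(f"estimated pulse height: {pulse_height(wave):.1f} ADC counts")

Filling a histogram of such pulse heights over many triggers yields the energy spectrum; a real spectroscopy chain would replace the simple maximum with a trapezoidal or similar shaping filter running in the FPGA.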
LEGEND is the successor of the GERDA and MAJORANA DEMONSTRATOR experiments searching for neutrinoless double beta decay with high-purity germanium detectors enriched in the isotope $^{76}$Ge. Its first phase, currently being commissioned at the Laboratori Nazionali del Gran Sasso, will reach a half-life sensitivity to this lepton-number-violating process of ~10$^{27}$ yr by employing 200 kg of Ge crystals. A later phase, with 1000 kg of enriched detectors, will extend the sensitivity beyond 10$^{28}$ yr. In this poster presentation, the details of the calibration of the LEGEND-200 experiment are presented. Radioactive $^{228}$Th sources will be deployed regularly into the liquid-argon cryostat with a calibration system, positioning the sources next to the detector array with a vertical precision of a few mm. Known $\gamma$-ray energies are used to calibrate the energy scale of the detectors, to measure their energy resolutions, and to monitor the stability of these parameters.
Many analyses in ATLAS rely on the identification of jets containing b-hadrons (b-jets). The corresponding algorithms are referred to as b-taggers. A deep-neural-network-based b-tagger, DL1r, has been widely used in ATLAS Run 2 physics analyses. Its performance needs to be measured in data to correct the simulation. In particular, the measurement of the mis-tag rate for light jets is extremely challenging given the very powerful light-jet rejection of DL1r. Therefore, the so-called "negative tag method" was developed, which relies on a modified tagger designed to decrease the b-jet efficiency while retaining the same light-jet response. This work presents the recently published light-jet mis-tag rate measurement in Z + jets events using 139 fb$^{-1}$ of data from pp collisions at $\sqrt{s}$ = 13 TeV, collected with the ATLAS detector. The precision is greatly improved compared to the previous iteration, thanks to improved inner-detector modeling and more sophisticated systematic-uncertainty evaluations. This work has been widely applied in ATLAS Run 2 physics analyses.
The Jiangmen Underground Neutrino Observatory (JUNO) central detector (CD) will be the world's largest liquid scintillator (LS) detector, built to pursue multiple physics goals, including determining the neutrino mass ordering, measuring solar neutrinos, and detecting supernova neutrinos. With an unprecedented 3% effective energy resolution and a better-than-1% energy nonlinearity required to determine the neutrino mass ordering, a multi-dimensional calibration system, including an Auto Calibration Unit (ACU), a Cable Loop System (CLS), a Guide Tube Calibration System (GTCS), and a Remotely Operated Vehicle (ROV), is designed to deploy multiple radioactive sources at various locations inside and outside the CD. The strategy of the JUNO calibration system has been optimized based on Monte Carlo simulations and data from the calibration sub-systems. This poster will present the hardware design of the JUNO calibration system, the calibration strategy and simulation results.
The Jiangmen Underground Neutrino Observatory (JUNO) is a medium-baseline neutrino experiment under construction in southern China, expected to begin data taking in 2023. The experiment has been proposed with the main goals of determining the neutrino mass ordering and measuring three oscillation parameters with sub-percent precision. To reach these goals, JUNO is located about 53$\,$km from two nuclear power plants and will detect electron antineutrinos from the reactors through inverse beta decay. Furthermore, an unprecedented energy resolution of 3$\,$% at 1$\,$MeV is required. The JUNO detector consists of 20$\,$kt of liquid scintillator contained in a 35.4$\,$m diameter acrylic vessel, which is instrumented with a system of about 18000 20-inch Large-PMTs and 25600 3-inch small-PMTs, with a total photocoverage greater than 75$\,$%.
The signal from the Large-PMTs is processed by the JUNO electronics system, which can be divided into two main parts: the front-end electronics, placed underwater, consisting of the Global Central Units (GCUs); and the back-end electronics, outside the water, consisting of the DAQ and trigger. Each GCU reads out three Large-PMTs and has the main tasks of performing the analog-to-digital conversion of the signals, generating a local trigger to be sent to the global trigger, reconstructing the charge, tagging events with a timestamp, and temporarily storing data in the local FPGA memory before transferring it to the DAQ upon a global trigger request.
This contribution will focus on the description of the underwater electronics for the Large-PMTs. Results from tests on a small setup with 13 GCUs at Laboratori Nazionali di Legnaro, Italy, as well as from the integration test with 300 GCUs in China, will be presented.
Charged-particle pseudorapidity measurements help in understanding particle production mechanisms in high-energy hadronic collisions, from proton-proton to heavy-ion systems. Performing such measurements at forward rapidity, in particular, allows one to access the details of the phenomena associated with particle production in the fragmentation region. In ALICE, this measurement will be performed in LHC Run 3 exploiting the Muon Forward Tracker (MFT), a newly installed detector extending the inner tracking pseudorapidity coverage of ALICE in the range $-3.6<\eta<-2.5$.
The performance of the pseudorapidity density measurement in the forward region with the ALICE MFT will be presented for the pilot beam data taking of October 2021 for proton-proton collisions at $\sqrt{s}$ = 900 GeV. The MFT detector behaviour and response will also be compared to Monte Carlo simulations and raw data.
It is well established that high-multiplicity pp and p-Pb collisions exhibit collective-like behaviour and signatures, such as strangeness enhancement and the ridge, that are commonly attributed to the formation of the Quark-Gluon Plasma. In this contribution, we investigate the possible similarities between pp, p-A and A-A collisions by studying charged-particle production as a function of the underlying-event classifier ($R_{\rm T}$). We perform a comprehensive study of the $R_{\rm T}$ dependence of charged-particle production in the momentum range of $0.5
ALICE is the LHC experiment specifically designed to study the properties of the quark-gluon plasma, a deconfined state of matter created in ultrarelativistic heavy-ion collisions. During LHC Run 1 and Run 2, ALICE recorded data in several collision systems and at different centre-of-mass energies. In this context, the study of charged-particle production as a function of multiplicity plays a key role in understanding the properties of the matter created in small (pp, p-Pb) and large (A-A) systems, giving a unique opportunity to test the evolution of the spectral shapes as a function of system size and energy. In this contribution, final studies of charged-particle production in pp, p-Pb and A-A collisions will be presented, using a new approach that allows measuring the spectral properties in continuous, high-granularity multiplicity bins while minimizing detector resolution effects thanks to a two-dimensional unfolding. The results will then be tested against the main theoretical models implemented in commonly used Monte Carlo event generators.
A densely connected feed-forward neural network is capable of classifying poles of a scattering amplitude if fed with experimentally measured values of an energy-dependent production intensity. As shown in [1], such a neural network, trained with synthetic intensities based on effective-range-approximated amplitudes, classifies the $P_c(4312)$ signal as a virtual state located on the 4th Riemann sheet in momentum space with very high certainty. This is in line with the results of other analyses, but surpasses them by providing a simultaneous evaluation of the probabilities of competing scenarios, e.g. the interpretation as a bound state. The machine learning approach also allows for identifying the energy bins which are key for the physical interpretation.
Here we discuss the extended approach, where the neural network is used to classify the physical nature not just of one state but of a whole class of states described by coupled-channel amplitudes dominated by a pole lying close to a threshold, like the $\Sigma_c^+\bar{D}^0$ threshold for $P_c(4312)$ or the $K\bar{K}$ threshold for the $a_0(980)$ or $f_0(980)$ resonances. Apart from its fundamental significance for interpreting the nature of various hadron candidates, our approach has a practical application for experimental analyses by providing a means of rapid classification of potentially exotic states.
Bibliography
[1] JPAC Collaboration, L. Ng (Florida State U.) et al., Deep Learning Exotic Hadrons, e-Print: 2110.13742 [hep-ph]
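As a toy illustration of the approach of [1] (not the JPAC code), the sketch below trains a small dense network on synthetic intensities from an S-wave effective-range amplitude. The class label is taken from the sign of the scattering length, which for a small effective range controls whether the near-threshold pole is bound-like or virtual-like; the network architecture, parameter ranges and statistics are all illustrative assumptions.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(42)
    E = np.linspace(0.01, 1.0, 60)   # toy energy bins above threshold
    k = np.sqrt(E)                   # toy momentum (2m = 1 units)

    def intensity(a, r=0.2):
        """|f|^2 for f = 1/(-1/a + (r/2) k^2 - i k); for small r the nearby
        pole sits at k ~ i/a, so a > 0 is bound-like, a < 0 virtual-like."""
        f = 1.0 / (-1.0 / a + 0.5 * r * k**2 - 1j * k)
        return np.abs(f) ** 2

    # Synthetic training set: random scattering lengths, Poisson-fluctuated bins.
    a_vals = rng.uniform(0.5, 3.0, 4000) * rng.choice([-1.0, 1.0], 4000)
    X = np.array([rng.poisson(500 * intensity(a) / intensity(a).max()) for a in a_vals])
    y = (a_vals > 0).astype(int)     # 1 = bound-like, 0 = virtual-like

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"toy test accuracy: {clf.score(X_te, y_te):.2f}")
    # clf.predict_proba(...) returns per-scenario probabilities, analogous to
    # the simultaneous evaluation of competing hypotheses described above.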
Antideuteron and antihelium nuclei have been proposed as promising channels for dark matter particle detection. In fact, the possible DM production of antinuclei, assuming DM is made of WIMPs annihilating or decaying in the Galaxy, is at least one order of magnitude larger, at energies between 0.1 and 1 GeV/nucleon, than the astrophysical background coming from interactions of primary cosmic rays with interstellar matter. The estimate of the flux of antinuclei from dark matter and from secondary cosmic rays is crucial to interpret the results of indirect dark matter searches carried out with space-based experiments like AMS and GAPS. In the laboratory, light antinuclei can be produced in high-energy interactions at colliders. To model their production in hadronic interactions, a coalescence model can be employed on an event-by-event basis within Monte Carlo frameworks. In the coalescence approach, a nuclear cluster is formed when two or more nucleons are close in phase space. The process depends on the momentum distribution of the nucleons, the nucleus wave function, and the characteristics of the nucleon-emitting source. Here, we propose a coalescence afterburner for antinuclei production in high-energy hadronic collisions from Super Proton Synchrotron (SPS) to Large Hadron Collider (LHC) energies, using the PYTHIA8 event generator as input. In this work, PYTHIA8 has been tuned for the first time to describe proton and antiproton yields and energy distributions as measured in pp collisions at SPS and LHC energies. Our approach employs a state-of-the-art Wigner-function-based coalescence model and explores different wave functions for the antinuclei. The results from the afterburner are compared with experimental results from NA49 and ALICE, and prospects for applications of the present model in heavy-ion collisions are discussed.
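The study uses a Wigner-function-based model; as a simpler point of orientation, the sketch below implements the classic momentum-space coalescence criterion, pairing an antiproton and an antineutron when each lies within a momentum sphere of radius P0 in the pair rest frame. The cutoff value and the greedy pairing are illustrative assumptions, not the tuned afterburner.

    import numpy as np

    P0 = 0.2  # coalescence momentum radius in GeV/c (illustrative value)

    def momentum_in_pair_cm(p1, p2):
        """Momentum of nucleon 1 in the pair centre-of-mass frame, for
        four-vectors [E, px, py, pz]; in that frame p2_cm = -p1_cm."""
        P = p1 + p2
        M = np.sqrt(max(P[0]**2 - P[1:] @ P[1:], 1e-12))
        beta, gamma = P[1:] / P[0], P[0] / M
        bp = beta @ p1[1:]
        return p1[1:] + gamma * beta * (gamma / (gamma + 1.0) * bp - p1[0])

    def coalesce(pbars, nbars):
        """Greedy event-by-event pairing: a pbar-nbar pair becomes an
        antideuteron candidate if |p_cm| < P0; each nbar is used once."""
        dbars, used = [], set()
        for pb in pbars:
            for i, nb in enumerate(nbars):
                if i not in used and np.linalg.norm(momentum_in_pair_cm(pb, nb)) < P0:
                    dbars.append(pb + nb)
                    used.add(i)
                    break
        return dbars

    # Toy usage: two nearly co-moving antinucleons (GeV units) do coalesce.
    m = 0.938
    p1 = np.array([np.hypot(1.00, m), 1.00, 0.0, 0.0])
    p2 = np.array([np.hypot(1.05, m), 1.05, 0.0, 0.0])
    print(len(coalesce([p1], [p2])), "antideuteron candidate(s)")

In an actual afterburner the input four-momenta would come from PYTHIA8 events, and the sharp cutoff would be replaced by the Wigner-function weight described above.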
In this paper, we accomplish the complete one-loop matching of the type-I seesaw model onto the Standard Model Effective Field Theory (SMEFT) by integrating out three heavy Majorana neutrinos with the functional approach. It turns out that only 31 dimension-six operators (barring flavor structures and Hermitian conjugates) in the Warsaw basis of the SMEFT are obtained, and most of them appear at the one-loop level. The Wilson coefficients of these 31 dimension-six operators are computed up to $\mathcal{O}(M^{-2})$, with $M$ being the mass scale of the heavy Majorana neutrinos. As the effects of heavy Majorana neutrinos are encoded in the Wilson coefficients of these higher-dimensional operators, a complete one-loop matching is useful to explore the low-energy phenomenological consequences of the type-I seesaw model. In addition, the threshold corrections to the couplings in the Standard Model and to the coefficient of the dimension-five operator are discussed. The one-loop matching results for the type-II seesaw model are also briefly discussed.
Based on:
- D. Zhang and S. Zhou, Complete one-loop matching of the type-I seesaw model onto the Standard Model effective field theory, JHEP 09 (2021) 163 [arXiv:2107.12133 [hep-ph]].
- X. Li, D. Zhang and S. Zhou, One-loop matching of the type-II seesaw model onto the Standard Model effective field theory, [arXiv:2201.05082 [hep-ph]].
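For orientation, the tree-level piece that these one-loop calculations extend is the textbook type-I seesaw matching: integrating out heavy Majorana neutrinos with Yukawa matrix $Y_\nu$ and mass matrix $M$ generates the dimension-five Weinberg operator with coefficient (normalisation conventions vary between references)

$$\mathcal{L}_5 = \frac{1}{2}\,(C_5)_{\alpha\beta}\left(\overline{\ell_{\alpha L}}\,\tilde{H}\right)\left(\tilde{H}^{\rm T}\,\ell^{c}_{\beta L}\right) + {\rm h.c.}, \qquad C_5 = Y_\nu\, M^{-1}\, Y_\nu^{\rm T},$$

which after electroweak symmetry breaking gives $m_\nu \propto v^2\, Y_\nu M^{-1} Y_\nu^{\rm T}$; the 31 dimension-six Wilson coefficients quoted above are the $\mathcal{O}(M^{-2})$ corrections on top of this leading term.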
T2K (Tokai to Kamioka) is a long-baseline neutrino oscillation experiment located in Japan. One of the most challenging tasks of T2K is to determine whether CP is violated in the lepton sector. By utilizing the near detector (ND280) data, T2K can constrain neutrino interaction and flux uncertainties by fitting a parametrised model to data. This allows for a significant reduction of the systematic uncertainties in neutrino oscillation analyses. This year's T2K oscillation analysis includes a number of improvements to the cross-section model, including an expanded treatment of the shell structure in the Spectral Function model, 2p2h pair uncertainties, an updated removal energy, and nucleon FSI. Flux systematics have also been updated using the NA61/SHINE 2010 replica-target data. To better constrain the new model, the near detector fit has introduced new samples using proton and photon tags, in addition to the muon and pion information. T2K uses two different methods to constrain the flux and cross sections at ND280, one of which uses Markov Chain Monte Carlo and will be discussed in this poster. The poster includes posterior distributions for selected cross-section parameters, the impact of the new samples, as well as prior and posterior predictive distributions for chosen samples. These results are part of the recent T2K oscillation analysis.
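To illustrate the MCMC technique named above (this is not the T2K fitting framework), here is a minimal random-walk Metropolis sampler for the posterior of a single toy cross-section normalisation constrained by an observed event count; all numbers are invented.

    import numpy as np

    rng = np.random.default_rng(7)

    N_OBS, N_NOMINAL = 1050.0, 1000.0       # toy observed and nominal event counts
    PRIOR_MEAN, PRIOR_SIGMA = 1.0, 0.2      # Gaussian prior on the normalisation

    def log_posterior(x):
        if x <= 0.0:
            return -np.inf
        expected = N_NOMINAL * x
        log_like = N_OBS * np.log(expected) - expected     # Poisson, up to a constant
        log_prior = -0.5 * ((x - PRIOR_MEAN) / PRIOR_SIGMA) ** 2
        return log_like + log_prior

    def metropolis(n_steps=50_000, step=0.01, x0=1.0):
        """Propose x' ~ N(x, step); accept with probability min(1, p(x')/p(x))."""
        chain, x, lp = [], x0, log_posterior(x0)
        for _ in range(n_steps):
            x_new = x + step * rng.standard_normal()
            lp_new = log_posterior(x_new)
            if np.log(rng.uniform()) < lp_new - lp:
                x, lp = x_new, lp_new
            chain.append(x)
        return np.array(chain)

    samples = metropolis()[5000:]            # drop burn-in
    print(f"posterior: {samples.mean():.3f} +/- {samples.std():.3f}")

The real fit performs the same walk in a space of hundreds of correlated flux, cross-section and detector parameters, and the posterior predictive distributions shown in the poster are obtained by pushing such samples through the event-rate model.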
The $U(1)_{L_\mu-L_\tau}$ model is one of the simplest anomaly-free models featuring a new gauge boson $Z'$, extending the Minimal Standard Model (MSM) group $G_{\text{MSM}}\equiv SU(3)_{\text{QCD}}\otimes SU(2)_{\text{Weak}}\otimes U(1)_Y\rightarrow G_{\text{MSM}}\otimes U(1)_{L_\mu-L_\tau}$. This hypothetical new gauge boson $Z'$ could affect the cooling mechanism of a core-collapse supernova. The production of $Z'$ in a supernova might contribute excessively to the energy loss, depending on the magnitude of the coupling between the new gauge boson $Z'$ and the $\mu, \tau$ leptons. Consequently, the SN neutrino production might be affected and contradict the core-collapse supernova neutrino observation of SN 1987A. We calculate the $Z'$ production and absorption/decay rates through pair-coalescence, semi-Compton, loop-bremsstrahlung from proton-neutron scattering, and their inverse processes in a benchmark SN simulation SFHo18.8 (Thomas Janka {\it et al.}, Phys. Rev. Lett. 125, 051104 (2020)), and put constraints on the coupling constant in this new gauged $U(1)_{L_\mu-L_\tau}$ model. Although such constraints were studied in previous literature, our study gives more stringent constraints on the model by carefully considering the competition between $Z'$ production and absorption/decay effects on the $Z'$ luminosity at the very outermost shell of the neutrino sphere. We point out that the $Z'$ luminosity tends to a constant plateau value depending on $m_{Z'}$, instead of monotonically decreasing to zero, as the coupling constant increases. This plateau phenomenon can be understood by physical arguments and is justified by numerical calculations. We find that the plateau value of the $Z'$ luminosity becomes greater than Raffelt's criterion when $m_{Z'}$ is lower than a specific value $\sim 2$ eV. For $m_{Z'}< 2$ eV, the so-called trapping limit disappears completely. We stress that the plateau behavior of the $Z'$ luminosity in the large-coupling limit should also occur for other BSM models that introduce new light bosons. Hence our work has extended applications.
Constraints on Higgs boson inclusive production with transverse momentum above 1 TeV are reported for the first time by the ATLAS Collaboration. This kinematic region is not yet well constrained by Higgs boson measurements and is sensitive to new physics effects, as predicted in some Beyond Standard Model scenarios. The analysed data were recorded from proton-proton collisions at a centre-of-mass energy of 13 TeV with the ATLAS detector at the Large Hadron Collider from 2015 to 2018 and correspond to an integrated luminosity of 136 $\rm fb^{-1}$. Higgs bosons decaying into $b\bar{b}$ are reconstructed as single large-radius jets recoiling against a hadronic system and identified by the experimental signature of two b-hadron decays. The experimental techniques are validated in the same kinematic regime using the $Z\rightarrow b\bar{b}$ process. The 95% confidence-level upper limit on the cross section for Higgs boson production with transverse momentum above 450 GeV is 115 fb, and above 1 TeV it is 9.6 fb. The Standard Model predictions in the same kinematic regions are 18.4 fb and 0.13 fb, respectively.
The lepton-flavor-violating (LFV) scalar portal is an interesting mechanism that connects the dark sector to the visible one. This mechanism leads to a rich phenomenology, including an extra contribution to the muon anomalous magnetic moment, desirable for alleviating the discrepancy between the updated SM prediction and the combined results of the Fermilab and BNL measurements. With the low-energy effective coupling ${{\cal L}_{\phi \mu e}} = - {y_{\mu e}}\left( {{{\bar e}_L}{\mu _R}\phi + {{\bar \mu }_R}{e_L}{\phi ^*}} \right)$, which turns a muon into an electron or vice versa through the scalar $\phi$, we derive the $(y_{\mu e},m_{\phi})$ parameter space that could account for the discrepancy mentioned above. Furthermore, we calculate the cross section $e^+e^-\to e^-\mu^+\phi^*, e^+\mu^-\phi$ induced by ${{\cal L}_{\phi \mu e}}$ and SM vertices. Using the Belle II model-independent $90\%$ C.L. upper limit on $\varepsilon\, ({\rm efficiency})\times \sigma(e^+e^-\to {e^ \pm }{\mu ^ \mp }+{\rm{invisible}})$ with ${\cal L} = 276 \, {\rm pb}^{-1}$ (Phys. Rev. Lett. ${\bf 124}$, 141801 (2020)), we obtain the corresponding upper limit on $y_{\mu e}\times\sqrt{\varepsilon\cdot {\rm Br}(\phi\to {\rm invisible})}$. For $\varepsilon=1\%$ and ${\rm Br}(\phi\to {\rm invisible})=1$, we find that for $m_{\phi}< 4$ GeV the $90\%$ C.L. upper limit on $y_{\mu e}$ is already in the parameter range favored to account for the measured $g_{\mu}-2$.
We stress that the explicit details of scalar portal models would determine ${\rm Br}(\phi\to {\rm invisible})$, while the efficiency factor $\varepsilon$ requires a detailed experimental analysis. Our point here is that the search for ${e^ + }{e^ - } \to {e^ \pm }{\mu ^ \mp }+{\rm{invisible}}$ could yield very interesting constraints on LFV scalar portal models. Hence a model-dependent experimental analysis is also very worthwhile.
Cosmic Muon Images [1] is a citizen science project in the domain of muon tomography (muography), with the goal of using machine learning and exploratory data analysis to improve the discrimination between particle-detector signal and the different kinds of background. It is one of the four citizen science demonstrators developed within the EU-funded (GA-872859) REINFORCE project [2] (Research Infrastructures FOR Citizens in Europe). It uses the Zooniverse [3] platform to provide volunteers with images of data registered by particle detectors during muography experiments. These images are 3D and 1D representations of the charge deposits of particles on the detector scintillation planes during the registration of an event. Zooniverse provides the space to host the project as well as the tools for the processing of the data by the volunteers. With the guidance and support of the Zooniverse staff, we created a series of materials that address the scientific background of muography, explain the objectives of the project and guide the volunteers through the two workflows created for the processing of the data. Effort is also put towards the sonification of the data, coordinated by the team behind the SonoUno [4] sonification software, to make the project more inclusive towards users with different sensory styles who want to access and analyze our dataset. The demonstrator has been online since Jan. 11, 2022; within this period of three months, more than 500 volunteers have created more than 45,000 classifications, while a series of talks and events is also planned to increase the engagement of our participants. These events will familiarize new people with our project while at the same time refreshing the interest of our current volunteers by providing new insights on muon tomography objectives and triggering discussions on new detector technologies and future muography expeditions. A great effort is made towards the inclusion of school students through a series of schools and seminars co-organized together with other EU-funded projects (e.g. the FRONTIERS Summer School 2021), since young people have much to gain from learning about interdisciplinarity between sciences and how scientists from different domains collaborate towards a common goal.
[1] https://www.zooniverse.org/projects/reinforce/cosmic-muon-images
[2] https://www.reinforceeu.eu/
[3] https://www.zooniverse.org/
[4] http://sion.frm.utn.edu.ar/sonoUno/
In high energy physics, effective field theories (EFTs) are often used to parameterise the possible ways in which new physics at some high-energy interaction scale $\Lambda_\mathrm{EFT}$ may indirectly modify differential cross sections or branching fractions. To constrain the EFT parameter space, profile likelihood ratios (PLRs) are used to perform frequentist hypothesis tests and calculate confidence levels on Wilson coefficients. Key to this is knowing the expected distribution of the PLR under all reasonable parameter hypotheses. A common practice is to assume that the PLR follows an asymptotic form given by Wilks' theorem. However, this approach is not always correct: the asymptotic distribution of the PLR does not follow a $\chi^2$ distribution when the EFT parameterisation is not dominated by a linear Wilson-coefficient dependence. In this presentation, we explain when and why the PLR may not be assumed to follow a $\chi^2$ distribution. We provide the correct asymptotic distribution for an EFT fit dominated by a quadratic parameter dependence and discuss the generalisation to cases with (i) significant linear and quadratic dependencies and (ii) multiple parameters.
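A minimal toy shows one way the quadratic case departs from Wilks' theorem. If the signal strength depends on a Wilson coefficient only quadratically, $\mu(c) = 1 + c^2$, then $\mu \ge 1$ for any real $c$, and for data generated at $c = 0$ the PLR test statistic follows the boundary mixture $\tfrac{1}{2}\delta_0 + \tfrac{1}{2}\chi^2_1$ rather than $\chi^2_1$. The Gaussian model and the numbers below are illustrative assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(123)
    SIGMA, N_TOYS = 0.1, 200_000     # toy resolution on mu and number of toys

    # Measurement n ~ N(mu(c), SIGMA) with mu(c) = 1 + c^2.  Since mu >= 1,
    # the best fit is mu_hat = max(n, 1), so the PLR for c = 0 is
    # q0 = ((n - 1)/SIGMA)^2 when n > 1 and 0 otherwise.
    n = rng.normal(1.0, SIGMA, N_TOYS)           # toys generated at c_true = 0
    q0 = np.where(n > 1.0, ((n - 1.0) / SIGMA) ** 2, 0.0)

    print(f"empirical 95% quantile of q0:   {np.quantile(q0, 0.95):.2f}")
    print(f"chi2(1) 95% quantile (Wilks):   {stats.chi2.ppf(0.95, 1):.2f}")
    print(f"half-chi2 mixture 95% quantile: {stats.chi2.ppf(0.90, 1):.2f}")

The empirical quantile lands on the mixture value (about 2.71), not the Wilks value (3.84), so assuming $\chi^2_1$ here would over-cover; the presentation generalises this observation to realistic EFT fits.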
In this talk, we systematically study the algebraic structure of the ring of flavor invariants and the sources of CP violation in the seesaw effective field theory (SEFT), which includes the dimension-five Weinberg operator and one dimension-six operator from tree-level matching. For the first time, we calculate the Hilbert series and explicitly construct all the primary flavor invariants in the SEFT. We show that all the physical parameters can be extracted using the primary invariants and that any CP-violating observable can be expressed as a linear combination of CP-odd flavor invariants. The calculation of the Hilbert series shows that there is an equal number of primary flavor invariants in the SEFT and in the full seesaw model, which reveals the intimate connection between the flavor space of the SEFT and that of its ultraviolet theory. A proper matching procedure for the flavor invariants is accomplished between the SEFT and the full seesaw model, through which one can establish a direct link between the CP asymmetries in leptogenesis and those in low-energy neutrino oscillations.
Cryogenic detectors have reached low thresholds and high energy resolution, making them useful tools to detect sub-keV nuclear recoils induced by Coherent Elastic Neutrino-Nucleus Scattering or by interactions with light Dark Matter. However, these detectors lack a calibration for nuclear recoils at this energy scale. The CRAB method proposes to use nuclear recoils produced by gamma de-excitation after thermal neutron capture in the cryogenic detector to provide calibration peaks in the region of interest. In particular, single-gamma transitions of several MeV induce well-defined nuclear recoil peaks in the 100 eV-1 keV range. CRAB is so far the only calibration method offering pure nuclear recoils in the bulk of the detector at this energy range.
Combining GEANT4 Monte Carlo simulations and gamma de-excitation predictions from the FIFRELIN code, we have studied the expected energy spectrum in various cryogenic detectors widely used in the community. Currently in the R&D phase, the CRAB project intends to calibrate cryogenic detectors near the low-power TRIGA reactor in Vienna. Simulations show that $\text{CaWO}_4$ is a material with two nuclear recoil peaks, at 112.5 eV and 160.3 eV, that should stand out well above the multi-gamma recoil continuum. Detecting the emitted gamma in coincidence with the subsequent nuclear recoil in the cryogenic detector is expected to increase the sensitivity of the CRAB method, extending its application to other materials, such as germanium or silicon, and possibly to lower recoil energies. The latest simulation results and the experimental strategy will be discussed.
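The scale of these peaks can be cross-checked with two-body kinematics: a nucleus of mass $M$ that de-excites by emitting a single $\gamma$-ray of energy $E_\gamma$ recoils with

$$E_R \simeq \frac{E_\gamma^2}{2Mc^2},$$

so, for illustration, a single transition of $E_\gamma \approx 6.2$ MeV in a nucleus with $A \approx 183$ ($2Mc^2 \approx 3.4\times10^{5}$ MeV) gives $E_R \approx 113$ eV, of the order of the quoted 112.5 eV peak in $\text{CaWO}_4$.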
We present the current status of the theory of resummed quantum gravity. We focus on its prediction for the cosmological constant in the context of the Planck scale cosmology of Bonanno and Reuter and its relationship to Weinberg's asymptotic safety idea. We discuss its relationship to Weinberg's soft graviton resummation theorem. We also present constraints from and consistency checks of the theory as well as a possible new test of the theory.
Production measurements of heavy quarks in pp collisions provide a stringent test of pQCD calculations. Analysing their production as a function of charged-particle multiplicity allows us to study multi-parton interactions, which are expected to play a relevant role in charged-particle production at high energy at the LHC. Moreover, the comparison with theoretical models allows investigating the contribution of colour reconnection to the hadronization mechanisms of heavy flavours.
In this poster, the average D-meson (D$^0$, D$^+$, D*$^+$) production measurements as a function of multiplicity in pp collisions at $\sqrt{s}$ = 13 TeV will be presented. The results will be compared with similar studies for other particle species, with D-meson measurements performed at $\sqrt{s}$ = 7 TeV, and with results from Monte Carlo event generators.
Dark matter (DM) particles are predicted to decay into Standard Model particles, which would produce signals of neutrinos, gamma-rays, and other secondary particles. Neutrinos provide an avenue to probe astrophysical sources of DM particles. We review the decay of dark matter into neutrinos over a range of dark matter masses from MeV/c$^2$ to ZeV/c$^2$. We examine the expected contributions to the neutrino flux at current and upcoming neutrino and gamma-ray experiments, such as Hyper-Kamiokande, DUNE, CTA, TAMBO, and IceCube-Gen2. We consider galactic and extragalactic signals of decay processes into neutrino pairs, yielding constraints on the dark matter decay lifetime that range from $\tau \sim 1.2\times10^{21}$ s at 10 MeV/c$^2$ to $\tau \sim 1.5\times10^{29}$ s at 1 PeV/c$^2$.
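For reference, the galactic ingredient behind such lifetime limits takes the standard form for a two-body decay $\chi\to\nu\bar{\nu}$ (a sketch; flavour structure, detector response and the extragalactic term are omitted):

$$\frac{d\Phi}{dE_\nu} = \frac{1}{4\pi\, m_\chi \tau_\chi}\,\frac{dN}{dE_\nu}\int_{\rm l.o.s.}\rho_\chi\big(r(s,\psi)\big)\,ds, \qquad \frac{dN}{dE_\nu} = 2\,\delta\!\left(E_\nu-\frac{m_\chi}{2}\right),$$

so the signal is a line at $E_\nu = m_\chi/2$ whose flux scales as $1/(m_\chi\tau_\chi)$, and non-observation translates directly into a lower bound on $\tau_\chi$ at each mass.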
The instantaneous luminosity of the Large Hadron Collider (LHC) at CERN will be increased by up to a factor of seven with respect to the original design value, to extend the exploration of the high-energy frontier. In order to benefit from the expected high-luminosity performance, the ATLAS Muon System was upgraded, with its first-station end-cap Small Wheel system replaced by the New Small Wheel (NSW) detector. The Muon System and the NSW provide precise track-segment information to the ATLAS Level-1 trigger for data recording. Before being certified for permanent storage, the data must be scrutinized to ensure the integrity of the detector. Prompt identification of any issue calls for near-line fast action to investigate, correct and potentially prevent problems that could render the data unusable for physics analyses. This is achieved through the monitoring of detector-level quantities and reconstructed collision-event characteristics at key stages of the data-processing chain. This presentation covers the monitoring and assessment procedures in place at ATLAS for data-taking with the Muon System in 2022 for Run 3. In the last two years the ATLAS experiment has commissioned an upgrade of its full data-flow quality-monitoring system for online hardware detector status surveys and quick assessment of the running conditions. The main technology developments and the status of the Run 3 Muon System commissioning with GNAM, the online monitoring structure developed to oversee the data-taking of the ATLAS detectors, will be summarized with preliminary results from early detector operation. The deployment of the new NSW Data-Quality (DQ) software and the practical operational arrangements, as well as key technical implementation aspects, will be outlined. This DQ monitoring tool allows great flexibility for the visualization of histograms, with an overlay of reference histograms when applicable, and can be configured for automatic checking of the status of the detectors as data are being recorded. The online DQ also uses data provided by the express stream, which are reconstructed with the Athena platform. This contribution will therefore summarize the progress in the ATLAS Muon System commissioning and present the data-quality monitoring and certification systems in place, from online data taking to delivering certified data sets for release validation and offline reconstruction for future physics analyses.
The Daya Bay reactor neutrino experiment provided the first non-zero measurement of the neutrino mixing angle $\theta_{13}$, with more than 5$\sigma$ significance, in 2012, using a sample of antineutrinos identified via neutron capture on gadolinium (nGd). In 2014 and 2016, the Daya Bay experiment reported independent rate-only measurements of $\theta_{13}$ utilizing a sample of events identified using neutron capture on hydrogen (nH), which has systematics distinct from those of the nGd analysis. In this poster, we show the latest nH analysis result, with three times larger statistics, using both the rate deficit and the spectral distortion information. With an improved understanding of the detector energy response, we will show the latest result for $\theta_{13}$.
With eight identically designed underground detectors deployed at different baselines from six 2.9 GW$_{th}$ nuclear reactor cores, the Daya Bay Reactor Neutrino Experiment has achieved unprecedented precision in measuring the neutrino mixing angle $\theta_{13}$ and the neutrino mass-squared difference $\Delta m^2_{32}$ through the inverse beta decay (IBD) reaction with the final-state neutron captured on gadolinium (nGd). The near-far relative measurement is the key to reducing the reactor- and detector-related systematic uncertainties. The experiment has stopped data taking, with ~6 million antineutrino events collected from December 2011 to December 2020. The latest oscillation analysis results, with the full dataset and improved systematic uncertainties, will be presented in this poster.
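For reference, the oscillation fit extracts $\theta_{13}$ and $\Delta m^2_{32}$ from the standard three-flavour survival probability of reactor $\bar{\nu}_e$, written here in its usual form with baseline $L$ and antineutrino energy $E$:

$$P_{\bar{\nu}_e\to\bar{\nu}_e} = 1 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21} - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31} + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right), \qquad \Delta_{ij}\equiv\frac{\Delta m^2_{ij}L}{4E},$$

where the near-far comparison cancels most of the reactor-flux and detector-related uncertainties in the ratio.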
To cope with the high event pile-up, the liquid argon time projection chamber of the near detector complex of the Deep Underground Neutrino Experiment, called ND-LAr, relies on an innovative modular design featuring an advanced high-coverage photon detection system, a true 3D pixelated charge readout, and a low-profile resistive-shell field cage. The capabilities of this detector, including the performance of the charge and light readout systems, the signal matching between the two, the detector purity, and the response uniformity, have been demonstrated with two ton-scale prototypes operated at the University of Bern that acquired large samples of cosmic ray data. The data have been compared to a microphysical detector simulation performed with highly-parallelized GPU algorithms. The main results from the analysis of these data sets, as well as the overall status of the ND-LAr detector development efforts, are presented in this talk.
The high-luminosity upgrade to the LHC (HL-LHC) leads to considerable challenges for the ATLAS detector, including greater radiation exposure to the on-detector electronics and increased pileup from low momentum collisions affecting trigger selection performance. The ATLAS Tile Calorimeter (TileCal) is a hadronic sampling calorimeter made of steel tiles as absorber and scintillating plastic tiles as active medium. The light produced by the tiles is read out by photomultiplier tubes (PMTs). The PMT signals are shaped, conditioned, and then digitized every 25 ns before being sent off-detector. A complete replacement of the on- and off-detector electronics for TileCal will take place in preparation for the HL-LHC program in 2026. The new system is designed to digitize and transmit all sampled calorimeter data to the off-detector systems, where the data are stored in latency pipelines. Quasi-projective digital trigger tower sums are formed and forwarded to the level-1 trigger. The TileCal upgrade program has included extensive R&D and test beam campaigns. The new design includes state-of-the-art electronics with extensive use of redundancy and radiation-tolerant electronic components to avoid single points of failure. Multi-Gbps optic links drive the high volume of data transmission, and Field Programmable Gate Arrays (FPGAs) provide digital functionality both on- and off detector.
A hybrid demonstrator prototype module, instrumented with new module electronics and interfaces for backward compatibility with the present system, was assembled and inserted in ATLAS in June 2019 to gain experience in actual detector conditions. We present the current status and test results from the Phase-II upgrade demonstrator module running in ATLAS.
The China Jinping Underground Laboratory (CJPL) foresees completion of its phase-II construction around 2025. A hundred-ton liquid-based solar neutrino detector, the Jinping Neutrino Experiment (JNE), will be built one year after that.
We will review the status and plans of the project, including the construction of the experiment site, the design of the detector, the instrumentation of the fast front-end electronics, the characterization of photomultiplier tubes, and the offline data-processing system. We shall discuss the physics potential of JNE with different interchangeable detection media.
The KNU Advanced Positronium Annihilation Experiment (KAPAE) aims to detect rare visible positronium decays, to search for C, CP, and CPT violation, and to search for invisible decays. KAPAE Phase II is designed to increase the sensitivity to invisible positronium decays into, e.g., milli-charged particles, the mirror world, a new light X-boson, or extra dimensions. Compared to KAPAE Phase I, the detector is less segmented, the trigger is changed to reduce the dead area, and the sizes of the BGO scintillation crystals and of the overall detector are increased. The KAPAE Phase II detector consists of BGO scintillation crystals stacked in a 10 by 10 array, coupled to SiPM arrays. We show the design of the KAPAE Phase II detector and its sensitivity to invisible decays, using Geant4 Monte Carlo simulation. Furthermore, we will report on the performance of the detector optimization.
The ATLAS tracking system will be replaced by an all-silicon detector for the HL-LHC upgrade.
The innermost tracking system will consist of 5 barrel layers and several end-cap disks, equipped with pixel modules. The pixel detector will operate in a most challenging environment, which imposes unprecedented requirements on radiation hardness and readout speed. A serial powering scheme will be used for the pixel detector, resulting in a reduction of the radiation length and of the power consumption in cables. Moving from the current parallel powering scheme to the serial powering scheme requires the development of a new detector control system, constant-current sources, and new front-end electronics with shunt regulators. Prototypes of these elements have been built to prove the concept; multiple system-level tests have been done with serial powering of pixel modules. The evaluation of both the readout of multiple modules in series and their mechanical integration are further steps in the prototyping program. In this contribution, we present results of recent readout tests of modules powered in series, as well as the procedures developed for the integration process.
The dual-readout method is a state-of-the-art calorimetry technique that enables outstanding energy resolution for both electromagnetic and hadronic particles; it has been developed over the last two decades. A dual-readout calorimeter has been included in the conceptual design reports of both the FCC-ee and CEPC projects, published in 2018. As a next step, the dual-readout calorimeter R&D team is building a prototype detector addressing various R&D points, to demonstrate all necessary requirements of the detector towards the TDRs of the future e+e- collider projects. This presentation reports recent progress on dual-readout calorimeter R&D, such as prototype detector construction, hardware R&D, software development, and simulation studies.
Signs of turbulence have been observed in relativistic heavy-ion collisions at high collision energies. We study the signatures of turbulence in this system and find that there are significant departures from isotropic turbulence in the initial and the pre-equilibrium stages of the collision. As the anisotropic fluctuations are subleading to the isotropic fluctuations, the Kolmogorov spectrum can usually be obtained even for the initial stages. However, the energy spectrum and the temperature fluctuations indicate deviations from isotropic turbulence. Since a strong momentum anisotropy exists between the transverse and the longitudinal plane, we study the energy density spectrum in these two planes by slicing the sphere into different planes. The geometrical anisotropy is reflected in the anisotropic turbulence generated in the rotating plasma, and we find that the scaling exponent is different in the two planes. We also obtain the temperature spectrum in the pre-equilibrium stages. The spectrum deviates from the Gaussian spectrum expected for isotropic turbulence. All this seems to indicate that the large-scale momentum anisotropy persists at smaller length scales in relativistic heavy-ion collisions.
The Jiangmen Underground Neutrino Observatory (JUNO) is a next-generation large liquid-scintillator neutrino detector. Its main goal is the determination of the neutrino mass ordering, one of the most crucial open questions in neutrino physics. To enhance its sensitivity to the mass ordering, JUNO will combine measurements of reactor anti-neutrinos at low energies with those of atmospheric neutrinos at high energies (GeV level). The sensitivity of the atmospheric neutrino measurement depends significantly on the angular resolution achieved for the incident neutrino.
This poster presents the direction reconstruction of atmospheric neutrinos with machine learning methods. In this approach, multiple features extracted from tens of thousands of PMT waveforms are used to characterize the direction of the incoming neutrino. Two independent machine learning models, a deep convolutional neural network and a spherical graph neural network, are used to perform the reconstruction. Preliminary results based on full Monte Carlo simulation show great potential for high-precision reconstruction of the neutrino direction.
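As a rough illustration of this kind of pipeline, the sketch below (not the JUNO collaboration's code; the PMT count, feature choice, and network size are all hypothetical) regresses a unit direction vector from per-PMT waveform features with a small permutation-invariant network:

```python
# Illustrative sketch only: regressing a neutrino direction from per-PMT
# summary features with a small network. All dimensions are hypothetical.
import torch
import torch.nn as nn

N_PMT = 1000   # hypothetical number of PMTs used
N_FEAT = 4     # e.g. first-hit time, total charge, peak time, peak charge

class DirectionNet(nn.Module):
    """Maps (batch, N_PMT, N_FEAT) feature tensors to unit direction vectors."""
    def __init__(self):
        super().__init__()
        self.per_pmt = nn.Sequential(nn.Linear(N_FEAT, 32), nn.ReLU(),
                                     nn.Linear(32, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                  nn.Linear(64, 3))

    def forward(self, x):
        h = self.per_pmt(x).mean(dim=1)         # permutation-invariant pooling over PMTs
        v = self.head(h)
        return v / v.norm(dim=1, keepdim=True)  # normalise to a unit vector

model = DirectionNet()
fake_batch = torch.randn(8, N_PMT, N_FEAT)      # stand-in for extracted waveform features
direction = model(fake_batch)                   # (8, 3) unit vectors
# A cosine loss against the true direction drives the angular resolution:
true_dir = torch.nn.functional.normalize(torch.randn(8, 3), dim=1)
loss = (1.0 - (direction * true_dir).sum(dim=1)).mean()
print(loss.item())
```

The mean-pooling over PMTs is one simple way to obtain permutation invariance; a graph or spherical network, as used in the analysis, instead exploits the detector geometry explicitly.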
The Recoil Directionality project (ReD) within the Global Argon Dark Matter Collaboration aims to characterize the light and charge response of a liquid argon (LAr) dual-phase Time Projection Chamber (TPC) to neutron-induced nuclear recoils. The main goal of the project is to probe the possible directional dependence suggested by the SCENE experiment. ReD is also designed to study the response of a LAr TPC to very low-energy nuclear recoils. Sensitivity to directionality and to low-energy recoils are both key assets for future argon-based experiments looking for Dark Matter in the form of WIMPs. In addition, the ReD TPC uses all the innovative features of the DarkSide-20k design, in particular the SiPM-based optoelectronic readout and the cryogenic electronics. It is thus a valuable test bench for the technology being developed for DarkSide-20k and the future project Argo.
The first measurement of ReD consisted of the irradiation of a miniaturized LAr TPC with a neutron beam at the INFN Laboratori Nazionali del Sud (LNS), Catania. The correlation of the ionisation and scintillation signals, a possible handle on the recoil direction of nuclei, was studied in detail for 70 keV nuclear recoils, using a neutron beam produced via the reaction p(7Li,7Be)n from a primary 7Li beam delivered by the TANDEM accelerator of LNS. A model based on directional modulation of charge recombination was developed to describe the correlation. In addition, a dedicated measurement tailored to characterize the response of the TPC to very low-energy nuclear recoils (< 10 keV) is currently being performed at INFN Sezione di Catania, using neutrons produced by an intense 252Cf fission source.
In this contribution, we describe the experimental setup, the theoretical model, and the preliminary results from data analysis.
We explore the ability of a recently proposed jet substructure technique, Dynamical Grooming, to pin down the properties of the Quark-Gluon Plasma formed in ultra-relativistic heavy-ion collisions. In particular, we compute, both analytically and via Monte-Carlo simulations, the opening angle $\theta_g$ of the hardest splitting in the jet as defined by Dynamical Grooming. Our calculation, grounded in perturbative QCD, accounts for the factorization in time between vacuum-like and medium-induced processes in the double logarithmic approximation. We observe that the dominating scale in the $\theta_g$-distribution is the decoherence angle $\theta_c$ which characterises the resolution power of the medium to propagating color probes. This feature also persists in strong coupling models for jet quenching. We further propose for potential experimental measurements a suitable combination of the Dynamical Grooming condition and the jet radius that leads to a pQCD dominated observable with a very small sensitivity ($\leq$10%) to medium response.
Resistive Plate Chambers are operated in several experiments, typically with large fractions of Tetrafluoroethane (C2H2F4), commonly known as R134a, a gas with a high Global Warming Potential (GWP) that has recently been banned by the European Union.
Within the HEP community, many studies are ongoing to find a suitable replacement for this component in RPCs working in avalanche mode. One interesting alternative is Tetrafluoropropene (C3H2F4), known as HFO1234ze, with a GWP of 6, which has been shown to deliver reasonable performance with respect to R134a.
For a few years, a joint collaboration between the ALICE, ATLAS, CMS, LHCb/SHiP and CERN groups has been in place with the goal of studying the performance of RPCs operated with eco-friendly gas mixtures under irradiation at GIF++.
The performance of several chambers with different layouts and electronics has been studied during dedicated beam tests, with and without gamma irradiation at GIF++. The RPCs have been operated with different gas mixtures based on CO2 and HFO1234ze. Results of these tests, together with future plans for aging studies of the chambers, will be presented.
Coherent elastic neutrino-nucleus scattering (CEνNS) is a new tool for examining the Standard Model and searching for neutrino electromagnetic properties, which can be a manifestation of new physics [1]. We study the electromagnetic contribution to elastic neutrino-nucleon and neutrino-nucleus scattering processes. Following our approach developed for the case of elastic neutrino-electron [2] and neutrino-nucleon [3,4] collisions, in our formalism we account for the electromagnetic form factors of massive neutrinos: the charge, magnetic, electric, and anapole form factors of both diagonal and transition types. When treating the nucleon electromagnetic vertex, we take into account not only the charge and magnetic form factors of the nucleon, but also its electric and anapole form factors. We inspect how the effects of the neutrino electromagnetic properties (in particular, millicharge, charge radii and magnetic moments) can be disentangled from those of the strange-quark contributions to the nucleon's weak neutral current form factors. We also study how the neutrino electromagnetic form factors can manifest themselves in coherent elastic neutrino scattering on nuclear targets. We apply our formalism to the case of the $^{40}$Ar nucleus with neutrino energies typical for CEνNS experiments.
[1] C. Giunti, A. Studenikin, Neutrino electromagnetic interactions: A window to new physics, Rev. Mod. Phys. 87, 531 (2015), arXiv:1403.6344.
[2] K. Kouzakov, A. Studenikin, Electromagnetic properties of massive neutrinos in low-energy elastic neutrino-electron scattering, Phys. Rev. D 96, 099904 (2017), arXiv:1703.00401.
[3] K. Kouzakov, F. Lazarev, A. Studenikin, Electromagnetic neutrino interactions in elastic neutrino-proton scattering, PoS (ICHEP2020) 205.
[4] K. Kouzakov, F. Lazarev, A. Studenikin, Electromagnetic effects in elastic neutrino scattering on nucleons, J. Phys.: Conf. Ser. 2156, 012225 (2021).
Neutrino scattering on atomic systems at low energy transfer is a powerful tool for searching for neutrino electromagnetic interactions [1,2]. The regime of coherent elastic neutrino-atom scattering (CEνAS), i.e., when the atom recoils as a pointlike particle, can be effectively fulfilled in the case of tritium neutrinos [3]. We present theoretical calculations for CEνAS processes on such targets as the H, $^2$H, $^3$He, $^4$He, and $^6$C atoms. We show how the atomic effects and neutrino electromagnetic properties, namely the neutrino millicharge and magnetic moment, may manifest themselves in the atomic-recoil spectra. Our results can be used in planning CEνAS experiments (in particular, with superfluid $^4$He [3,4]).
References
[1] C. Giunti and A. Studenikin, Rev. Mod. Phys. 87, 531 (2015) [arXiv:1403.6344 [hep-ph]].
[2] K. Kouzakov and A. Studenikin, Phys. Rev. D 96, 099904 (2017) [arXiv:1703.00401 [hep-ph]].
[3] M. Cadeddu, F. Dordei, C. Giunti, K. Kouzakov, E. Picciau, and A. Studenikin, Phys. Rev. D 100, 073014 (2019) [arXiv:1907.03302 [hep-ph]].
[4] G. Donchenko, K. Kouzakov, and A. Studenikin, J. Phys.: Conf. Ser. 2156, 012231 (2021) [arXiv:2111.03331 [hep-ph]].
We quantify the anomalous magnetic moment and the electric dipole moment of the $\tau$-lepton through the process $e^{+}e^{-} \rightarrow \tau^+ \tau^-\gamma$, within the ranges of energies and luminosities attainable at the future International Linear Collider (ILC) and the Compact Linear Collider (CLIC). The tau-lepton is a key particle in various Beyond the Standard Model (BSM) scenarios and serves as a laboratory for many experimental and simulation studies in searches for new physics. In particular, the anomalous couplings of the tau-lepton to bosons in the $\tau^+ \tau^-\gamma$ and $\tau^+ \tau^- Z$ vertices have made it one of the most attractive particles for new physics searches.
Understanding the reconstructed energy resolution of electromagnetic (EM) activity in a liquid argon time projection chamber (LArTPC) is important for measurements of neutrino oscillations and searches for beyond-standard-model physics in current and future neutrino experiments using LArTPC technology. The high-quality data taken in the ProtoDUNE single-phase LArTPC are ideal for studying the energy resolution of EM objects. In this talk, we will present the excellent reconstructed energy resolutions for Michel electrons, neutral pions and beam electrons using ProtoDUNE data, covering a wide range of energies from a few MeV up to 7 GeV.
In July 2020, Super-Kamiokande was upgraded by loading 13 tons of gadolinium (Gd) sulfate octahydrate, starting a new experimental phase, "SK-Gd". Thermal neutron capture on Gd emits gamma-rays with a total energy of about 8 MeV, so a higher neutron tagging efficiency is obtained in SK-Gd than in the pure-water phase. Therefore, an increase in the sensitivity of the search for Supernova Relic Neutrinos is expected in SK-Gd.
Accurate evaluation of the neutron identification efficiency is essential for SK-Gd. For this estimation, a calibration using an Am/Be neutron source was carried out. In this presentation, I report the result of the estimation of the neutron detection efficiency and its comparison with simulation.
We investigate the potential reach at the Large Hadron Collider (LHC) of a search for a long-lived dark vector boson, also called a dark $Z$ or $Z_D$, through exotic decays of the standard-model Higgs boson $h$ into either $Z_DZ_D$ or $ZZ_D$. In addition, we investigate the decay of $h$ into two dark Higgs bosons $h_Dh_D$, with each $h_D$ decaying into a pair of $Z_D$'s. We consider the production of $h$ via gluon-gluon fusion (ggF) and use production cross sections from the literature for Runs 2 and 3 of the LHC, calculated to a combination of next-to-next-to-next-to-leading order with QCD corrections (N$^3$LO QCD) and next-to-leading order with electroweak corrections (NLO EW). The $Z_D$ production through the Higgs portal proceeds via one of two mechanisms: kinetic mixing of $Z_D$ with the hypercharge boson, and mixing of $h_D$ with $h$. The branching fractions are calculated to NLO and scanned over the relevant mixing parameters and particle masses in Monte Carlo (MC) simulation using the MadGraph5_aMC@NLO v2.7.2 framework. Emphasis is given to a final state of dimuons, displaced up to 7500 mm, where the muons can be reconstructed without vertex constraint using data from the ATLAS and CMS detectors to be collected in Run 3 of the LHC. Integrated luminosities of 137, 300, and 3000 fb$^{-1}$ for Run 2, Run 3, and the High-Luminosity (HL) era of the LHC, respectively, are used for estimating the expected search sensitivity to each decay mode. Finally, we investigate the kinematics of the displaced dimuons and the $Z_D$ decay lengths in the detectors.
To extend the potential for discoveries of new physics beyond the Standard Model, as well as for precision measurements, the High Luminosity (HL) phase of the Large Hadron Collider at CERN aims to deliver an integrated luminosity of up to 4000 fb$^{-1}$. To face the challenging environment associated with the high number of collisions per bunch crossing, the current inner detector will be replaced with a new all-silicon Inner Tracker (ITk), which will cover up to $|\eta| < 4$. This poster presents results of the expected tracking performance as well as some representative high-level object reconstruction and identification, including primary vertices, jet flavour-tagging, electrons, and converted photons, using an updated layout of the ITk pixel detector.
Magnetic and electric dipole moments of fundamental particles provide powerful probes of physics within and beyond the Standard Model. For short-lived particles, these moments have not been experimentally accessible to date due to the difficulties imposed by their short lifetimes. The R&D on bent crystals and the experimental techniques developed to enable such measurements are discussed. An experimental test at the insertion region IR3 of the LHC is under consideration as a proof of principle of a future fixed-target experiment for the measurement of charm baryon dipole moments. The design of the experiment and the main goals of the test are presented.
DUNE is a future liquid-argon TPC experiment for neutrino oscillation and astrophysical neutrino physics that will take data at a rate of 30 PB/year. Prototypes running at CERN have already taken data, and collaborators are currently analyzing 1 PB of data and 5-6 PB of simulation from the first prototype run using the resources of 48 DUNE institutions.
The DUNE computing system has evolved from the heritage of neutrino and collider experiments based at Fermilab. To achieve the increase in scale required by DUNE, it has been necessary to generalise the computing systems to make better use of resources elsewhere in the world, first at CERN and then at institutes in other countries. The integration of UK computing resources into DUNE is an informative use case of this process.
We describe how DUNE computing in the UK transitioned from ad-hoc support by a few institutes to becoming fully integrated in the UK infrastructure alongside the LHC experiments. This infrastructure is operated by GridPP as part of WLCG and has a mature operations culture, keeping staff at sites and experiments in regular contact. This led to increased use of WLCG-favoured tools like GGUS tickets within DUNE as its own computing operations team grew. DUNE's expansion in the UK also coincided with the UK IRIS project and the start of its formal allocation process for resources provided to non-LHC physics and astronomy projects. This in turn contributed to a more formal description of DUNE's current and projected requirements in the immediate future.
Experience with the constraints and features of sites in the UK has also been an input to the DUNE computing model and the Computing Conceptual Design Report. This operational experience in the UK has prompted development of new systems and features in DUNE computing, particularly in the areas of data management with RUCIO and the DUNE Workflow System by UK institutes funded by the DUNE UK Construction Project.
The Electron-Ion Collider, to be constructed at Brookhaven National Lab, is considered the next-generation "dream machine" of future nuclear physics research. Extending the acceptance of the detector to the far-forward region ($\eta > 4$) is extremely important for a wide range of measurements to be performed at the EIC. The designs of the far-forward detectors (the B0 spectrometer and electromagnetic calorimeter, the Roman Pot and Off-Momentum detectors, and the Zero-Degree Calorimeter) proposed by the ECCE Consortium are described. Detection of forward-going particles with high energy and position resolution, as well as two-photon separation, opens new possibilities to provide experimental access to various processes, including pion form factor measurements, diffractive and photoproduction processes, and u-channel DVCS. The prospects of such measurements exploiting, in particular, the B0 and ZDC detectors are also discussed.
FASER (ForwArd Search ExpeRiment) fills the axial blind spot of the other, radially arranged LHC experiments. It is installed 480 meters from the ATLAS interaction point, along the collision axis. FASER will search for new, long-lived particles (LLPs) that may be hidden in the collimated reaction products exiting ATLAS. The tracking detector is an essential component for observing LLP signals. FASER's tracking stations use silicon microstrip detectors to measure the paths of charged particles. This presentation summarizes one of FASER's latest papers, "The tracking detector of the FASER experiment", which describes the functionality, construction and testing of the tracker. FASER is currently installed in the LHC, where it is ready for data collection.
Since the discovery of a scalar particle with a mass of 125 GeV by the ATLAS and CMS experiments at the LHC, various measurements of its properties have been performed, and the observations correspond nicely to the Higgs boson predicted by the Standard Model of particle physics. Among these measurements, the fiducial and differential cross-sections play an important role in testing the SM predictions as well as in probing for BSM physics contributions across a variety of physics observables. Given that these measurements are performed in a specific region of phase space (the fiducial region), the model dependence is reduced. This poster highlights the latest results on the differential and fiducial cross-sections of Higgs boson decays in the diphoton channel with the full Run 2 dataset (139 $\rm fb^{-1}$) collected by the ATLAS experiment.
We compute for the first time the finite-size corrections to NLO $2\rightarrow{}2$ scattering in $\phi^4$ theory on a $\mathbb{R}^{1,(3-n)}\times \text{T}^n$ spacetime. In order to do so, we developed multiple novel techniques, including denominator regularization, a generalization of a formula by Ramanujan using the sum-of-squares function, and an analytic continuation of the generalized Epstein zeta function. We show that our calculations pass all consistency checks, and we examine, numerically as well as analytically, the behavior of the scattering amplitude and of the effective coupling. We discuss the implications for critical exponents in condensed matter systems, as well as how denominator regularization might be further employed to simplify calculations involving fermions and curved spacetimes.
Most importantly, our results form a first step in quantifying analytically the finite-size effect on the trace anomaly in QCD, which may lead to significant corrections to the viscosity to entropy density ratio extracted in small systems.
This talk is based on arXiv:2203.01259 and W.A. Horowitz and JFDP in preparation.
Several astrophysical observations suggest that about 25% of all the energy in the Universe is due to a non-luminous, non-relativistic kind of matter: dark matter. Among the possible models that can fulfill the observed abundance, one of the most promising is Weakly Interacting Massive Particles (WIMPs), thermal relics with masses below 100 TeV. Despite numerous attempts over the last two decades to directly detect WIMPs, no confirmed discovery has been made. Hence, interest in other dark matter candidates has recently increased, motivating searches for super-massive dark matter candidates. These candidates might have been produced non-thermally, as radiation from primordial black holes, as decay products of the inflaton, or as products of a dark sector with an extended thermal production mechanism.
DEAP-3600, with a target of 3.3 tonnes of liquid argon, is the largest running direct detection experiment. Although designed for the WIMP search, it is also sensitive to candidates with masses above $10^{16}$ GeV and cross-sections on argon above $10^{-24}$ cm$^2$. Due to the high cross-section and the large area of the detector, the expected signal is a track of collinear nuclear recoils, a very peculiar signature, different from both WIMPs and most backgrounds. This motivated the development of a custom analysis looking for a multi-scattering dark matter signal. Thanks to the quality of the selection cuts, four different Regions of Interest (ROIs) have been defined, each with a background level of much less than one event in three years of data taking. After unblinding, no events were found, leading to world-leading constraints on two composite dark matter models, up to Planck-scale masses.
The IDEA experiment envisaged at future $e^+e^-$ circular colliders (FCC-ee and CEPC) is currently under design and optimization with dedicated full-simulation investigations. In this talk, we review the performance of the IDEA fully projective fiber-based dual-readout calorimeter using the GEANT4 toolkit, from calibration aspects to jet reconstruction. Results concerning complex topologies and the detector's capability to identify and disentangle single-particle contributions with deep learning will be discussed as well. The ability to achieve dual-readout compensation in homogeneous crystals opens the possibility of instrumenting the hadronic calorimeter with a finely segmented crystal electromagnetic section, thus isolating photon contributions in jets and applying a proto-Particle-Flow approach for superior jet reconstruction. Results obtained with this hybrid configuration (the so-called IDEA crystal option) will be compared to the baseline experiment.
In this poster, we present a study of the rejection of jets containing more than one b-hadron in the ATLAS "online" b-taggers, aiming to significantly reduce the readout rates of the ATLAS b-jet trigger system. It is important to be able to efficiently select events containing b-jets at the trigger level for analyses that involve many b-quarks in the final state, such as the search for HH to 4b production. However, in Run 2 the ATLAS b-tagger did not distinguish between jets containing a single b-hadron (b-jets) and jets containing two b-hadrons (bb-jets). Collision events involving small-angle g to bb splitting, resulting in bb-jets, are common at the LHC. Rejecting them in real time would significantly reduce the readout rates of multi-b-jet triggers and ensure efficient signal extraction, which is particularly important for analyses that use multi-b-jet trigger chains. This poster shows an approach to rejecting bb-jets in the ATLAS online b-taggers, its impact on relevant trigger rates, and the implications for the ATLAS Run 3 physics program.
Muon-catalyzed fusion (μCF) is an established method in which nuclear reactions occur at low temperatures (at or below room temperature) and pressures. The reduced size of diatomic muonic molecules (say ddμ or dtμ) allows fusion to occur due to the greatly enhanced wave-function overlap. Under the current $dMu/DT$ collaboration, an attempt is being made to study the μCF rate and sticking fraction at relatively higher temperatures (but <$3\times10^3$ K) and pressures (but <$10^5$ bar), using a diamond anvil cell with a D-T mixture. In parallel, physics processes related to formation, transport, isotopic transfer, and other de-excitation processes of muonic atoms, as well as μCF and reactivation of muons to the fusion cycles, are being modeled in GEANT4.
In this work, we describe our physics model development effort with the classes available in the GEANT4 source, e.g. G4MuonicAtom and G4MuonMinusAtomicCaptureAtRest, together with new classes and study parameters. Currently, G4MuonicAtom is derived from G4Ion with specific Z and A numbers, and it is formed when a negative muon slows down and is captured by the atom. During the lifetime of a muonic atom (2.193 μs), it undergoes several de-excitation processes, such as radiative transitions, Coulomb de-excitation, and the Auger process, to come to the ground state; however, the excited-state transfer $(D\mu)_{n}+T \to (T\mu)_{n}+D$ influences the initial population of muonic atoms in the ground state, which is the initiation of the complex μCF process. It is not limited to D and T; the muonic atom can also be transferred to other heavier nuclei. This transfer process depends on the interaction cross section ($\sigma$) and on $q_{1s}$, the probability of the lighter muonic atom coming to the ground state.
In the first stage, a separate class by the name of MuonicAtomTransfer has been worked out, in which the excited-state transfer based on these parameters has been devised using the data available in Phys. Rev. A 50, 518. The output successfully carries out the transfer process before the muon decays in the orbit of the atom or is captured by the nucleus. Next, diatomic molecule formation and, in turn, μCF are being devised based on the interaction lengths and sticking probabilities available from https://muon.npl.washington.edu/elog/mucap/Talks+and+Presentations/091118_104846/dd.pdf and LAMPF data, considering several possible channels such as
1. $dt\mu \to \alpha\mu + n$ (sticking) or $dt\mu \to \alpha + \mu + n$ (no sticking), yield: 14.1 MeV
2. $dd\mu \to {}^{3}\mathrm{He}\mu + n$ (sticking) or $dd\mu \to {}^{3}\mathrm{He} + \mu + n$ (no sticking), yield: 3.3 MeV
3. $tt\mu \to n + n + \alpha\mu$ (sticking) or $tt\mu \to n + n + \alpha + \mu$ (no sticking), yield: 11.3 MeV
The goal of the simulation is to compare measured sticking fractions with theory and to simulate the effective sticking fraction (after reactivation) under various conditions of temperature, pressure, and applied EM fields. The simulation is being carried out to support the experimental design of the diamond anvil cell μCF chamber. We are also using the MuonicAtom physics developed here to simulate the effect of heavy impurities, such as wall and window materials. We plan to submit our MuonicAtom->DiatomicMuonicMolecule->uCF->Reactivation package to the GEANT4 distribution, including example codes.
Ref: https://arpa-e.energy.gov/technologies/projects/conditions-high-yield-muon-catalyzed-fusion
Run 2 of the LHC commences the precision era at the energy frontier of particle physics. It enables the measurement of important kinematic distributions which serve as input to constrain the Standard Model Effective Field Theory (SMEFT). SMEFT provides a global, model-independent interpretation framework in which measurements of different processes can be consistently combined to search for indirect signatures of undiscovered physical phenomena occurring at energies much larger than those reached by particle collisions at the LHC. In this poster I will discuss the results of a global SMEFT fit to Run 2 data from the ATLAS experiment, which includes combined measurements of Higgs and electroweak processes. This includes kinematic properties of Higgs production measured across five decay modes in the STXS (simplified template cross-sections) framework and differential distributions from the production of WW, WZ, ZZ, and Z+2jets in the electroweak sector. Together with the electroweak precision observables measured at LEP, these measurements allow us to pin down the allowed deviations from the Standard Model in SMEFT.
In the event reconstruction, we need to extract the photoelectron (PE) hit times and PE charges from the waveforms. We developed a new method called Fast Stochastic Matching Pursuit (FSMP). It is based on Bayesian principles, and the possible solutions are sampled with Markov Chain Monte Carlo (MCMC). To accelerate the method, we ported it to GPU and can analyze the waveforms at about 0.01 s per waveform. This method will benefit event reconstruction: the position and energy resolution will be improved, as the method extracts all the information in the waveforms.
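To make the idea concrete, here is a minimal greedy matching-pursuit toy in NumPy. It only illustrates the template-subtraction idea behind PE extraction; FSMP itself samples pulse configurations with MCMC under a Bayesian model and runs on GPU, and all numbers below are made up:

```python
# Toy greedy matching pursuit for PE extraction from a waveform (illustrative
# only, not FSMP itself). All pulse-shape and noise parameters are invented.
import numpy as np

def spe_template(n=30, tau_rise=2.0, tau_fall=8.0):
    """Toy single-PE pulse shape (bi-exponential), normalised to unit peak."""
    t = np.arange(n)
    p = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    return p / p.max()

def matching_pursuit(waveform, template, max_pe=50, threshold=0.2):
    """Iteratively subtract the best-matching template; return (time, charge)."""
    residual = waveform.astype(float).copy()
    hits = []
    for _ in range(max_pe):
        corr = np.correlate(residual, template, mode="valid")
        t0 = int(np.argmax(corr))
        amp = corr[t0] / np.dot(template, template)  # least-squares amplitude
        if amp < threshold:
            break
        residual[t0:t0 + len(template)] -= amp * template
        hits.append((t0, amp))
    return hits

# Build a toy waveform with two PEs plus noise, then recover them.
rng = np.random.default_rng(0)
tmpl = spe_template()
wf = rng.normal(0.0, 0.02, 200)
for t_true, q_true in [(40, 1.0), (55, 0.8)]:
    wf[t_true:t_true + len(tmpl)] += q_true * tmpl
print(matching_pursuit(wf, tmpl))
```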
The ever-increasing demands of CERN's Large Hadron Collider and the various future collider projects lead the High Energy Physics community to put quantum computing in the spotlight, due to the advantages that can be obtained compared to classical computing. In this context, we explore quantum search algorithms and present a novel benchmark application of a modified version of Grover's algorithm for the identification of causal singular configurations of multiloop Feynman diagrams in the Loop-Tree Duality framework, obtaining a quadratic speed-up over the classical algorithm. The output of the algorithm on the IBM Quantum and QUTE simulators is used to bootstrap the causal representation of representative multiloop topologies. The algorithm may also find application and interest in graph theory for solving problems involving directed acyclic graphs.
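For readers unfamiliar with the quadratic speed-up, the toy state-vector simulation below (plain NumPy, not the IBM Quantum or QUTE code used by the authors) runs Grover iterations with a few arbitrarily chosen "marked" indices standing in for causal singular configurations:

```python
# State-vector simulation of Grover's search; marked indices are hypothetical.
import numpy as np

n_qubits = 5
N = 2 ** n_qubits
marked = {3, 17, 22}                      # stand-ins for causal configurations

state = np.full(N, 1.0 / np.sqrt(N))      # uniform superposition
oracle = np.where(np.isin(np.arange(N), list(marked)), -1.0, 1.0)

# Near-optimal iteration count: floor(pi / (4*theta)) with sin(theta) = sqrt(M/N),
# which is O(sqrt(N/M)) -- the quadratic speed-up over a linear classical scan.
theta = np.arcsin(np.sqrt(len(marked) / N))
n_iter = int(np.pi / (4 * theta))
for _ in range(n_iter):
    state *= oracle                       # oracle: phase-flip the marked states
    state = 2 * state.mean() - state      # diffuser: inversion about the mean

probs = state ** 2
print("iterations:", n_iter)                      # 2 for this N and set of marks
print("P(marked) =", probs[list(marked)].sum())   # ~0.9999
```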
Gravitational-wave detectors are very sophisticated instruments devoted to the formidable task of measuring space-time deformations as small as a thousandth the size of the atomic nucleus, such as those produced by astrophysical phenomena like the coalescence of compact binary systems. GWitchHunters is a new citizen science initiative developed within the REINFORCE project (funded under the H2020 "Science With And For Society" program), aimed at promoting the study of the noise of gravitational-wave detectors and the improvement of their sensitivity. To achieve this goal, gravitational-wave data are presented to citizens in the form of images and sounds, on which they are asked to perform quick-look analyses, such as identifying relations and patterns. This constitutes an important input to the detector characterization activity carried out by researchers. To make the work done by the participants even more enjoyable, we have made use of the Zooniverse web platform and mobile app, where citizens can be entertained while learning and actively contributing to real science. We will report on the status of the project as well as on its impact on the study and characterization of noise in the Advanced Virgo detector.
Modern accelerator-based neutrino experiments use complex nuclei, such as argon, as neutrino targets and rely on nuclear models to unfold the reconstructed neutrino energy to the true neutrino energy. Nuclear effects complicate neutrino oscillation measurements and are not well understood, and there are very limited measurements of hadron cross sections on argon. ProtoDUNE-SP, a prototype liquid argon time projection chamber for the DUNE far detector, collected data from a hadronic test beam at CERN in 2018, including protons, pions and kaons in the range 1 to 7 GeV/c. In this talk, we will present the status and results of the many hadron-argon cross-section analyses, and plans for a second data-taking period at the end of this year.
Short-lived hadronic resonances are good probes to investigate the late-stage evolution of ultra-relativistic heavy-ion collisions. Since they have lifetimes comparable to that of the system created after the collision, the measured yields may be affected by the competing rescattering and regeneration processes during the hadronic phase, which modify the particles' momentum distributions after hadronization. Measurements of the production of resonances characterized by different lifetimes, masses, quark content, and quantum numbers can be used to explore the different mechanisms that influence the shape of particle momentum spectra, the dynamical evolution and lifetime of the hadronic phase, strangeness production, and collective effects. Furthermore, multiplicity-dependent analyses of resonance production in pp and p--Pb collisions could highlight the possible onset of collective-like phenomena even in small systems. The ALICE experiment has collected data from several collision systems at LHC energies, and the latest results on hadronic resonance production, including $\rho(770)^0$, K$^*$(892)$^{\pm}$, $\Sigma(1385)^{\pm}$, $\Xi(1530)^0$, and $\phi(1020)$, in pp and p--Pb collisions will be presented here.
A high-energy astroparticle collision event was detected in 1975 during a balloon flight in the stratosphere. The data, a hundred particle tracks in X-ray films, have been re-analyzed in the style of LHC experiments: rapidity distributions of charged particles and transverse mass spectra of multi-particle production have been built. The comparison of multiple histograms with the expectations of the Quark-Gluon String Model (QGSM) suggests, at first sight, that it might be a collision of a carbon nucleus with the matter of the atmosphere at a c.m.s. equivalent energy $\sqrt{s} \geq 5$ TeV.
After the QGSM analysis of these scarce data, we know the following: the value of the maximal rapidity of one projectile proton and the density of particle multiplicity in the central rapidity region. Besides this, the transverse mass distributions show how many protons are in each particular range of rapidity. In this way, we can certainly distinguish how this astroparticle interaction is similar to, or differs from, an average A-A collision event at the LHC. Nevertheless, the data indicate features that cannot be associated with a nucleus-nucleus collision: one particle with transverse mass 16 GeV was detected, and a small nucleon population is seen in the region of projectile fragmentation, which does not correspond to a carbon-nucleus collision. Both facts convince us that this might be the decay of baryonic DM. Such quasi-stable baryon-antibaryon neutral states were suggested in an earlier paper (Piskounova O., 2018). They are to be formed under the huge gravitational pressure at giant massive objects like black holes. Relativistic jets then spread baryonic DM in space. Its collisions with ordinary matter should give a different pattern than A-A interactions. The important difference between this form of matter and an ordinary nucleus lies in the results of the collision: baryonic DM is an object in which proton-antiproton String Junctions are strongly connected, so the energy is divided between the nucleon components by a Regge-type structure function, as for quarks in the proton. The lightest debris of the baryonic DM particle interacts with maximal rapidity and gives a small number of nucleons in the forward part of the spectra. Baryonic DM can also split into a pair of similar DM states of lower mass, giving an unusual couple of hadrons with masses of 14 GeV and heavier.
Finally, we conclude that cosmic ray experiments at high altitudes in the atmosphere are, on the one hand, good supplements to the LHC measurements. On the other hand, they are able to discover events of unknown astroparticle collisions over the full kinematical range, while colliders study nuclear interactions only in the central rapidity region. Such an experiment would detect the very first collision of an astroparticle with the atmosphere, and should preferably be built using up-to-date electronic methods.
With the data collected during ATLAS Run 2, a combination of measurements of Higgs boson production cross sections and branching fractions is presented. Compared with the previous combination, the Zγ decay mode is included for the first time, along with a few additional production processes in the bb and ττ decay channels. Several of the previous input measurements are updated to the full Run 2 dataset. The global signal strength, defined as the measured Higgs boson signal yield normalized to its SM prediction, is determined to be 1.06$\pm$0.06. Measurements in kinematic regions defined within the simplified template cross-section framework are also reported. The results are interpreted in terms of modifiers applied to the Standard Model couplings of the Higgs boson to other particles, and are also used to set exclusion limits on parameters in the Standard Model Effective Field Theory framework and in several benchmark scenarios of the Two-Higgs-Doublet Model. No significant deviations from Standard Model predictions are observed.
The Higgs boson trilinear and quartic self-couplings are directly related to the shape of the Higgs potential; measuring them with precision is extremely important, as they provide invaluable information on electroweak symmetry breaking and the electroweak phase transition. In this paper, we perform a detailed analysis of double Higgs boson production through the gluon-gluon fusion process, in the most promising decay channels ($b\bar{b}\gamma\gamma$, $b\bar{b}\tau\tau$, and $b\bar{b}b\bar{b}$), for several future colliders: the HL-LHC at 14 TeV and the FCC-hh at 100 TeV, assuming 3 ab$^{-1}$ and 30 ab$^{-1}$ of integrated luminosity, respectively. In the HL-LHC scenario, we expect an upper limit on the di-Higgs production cross section of 0.76 at 95% confidence level, corresponding to a significance of 2.8 sigma. In the FCC-hh scenario, depending on the assumed detector performance and systematic uncertainties, we expect the Higgs self-coupling to be measured with a precision in the range 4.8-8.5% at 95% confidence level.
This poster will present the first measurement of high-energy reactor antineutrinos at the Daya Bay experiment. Based on the data collected over 1958 days, the Daya Bay experiment has observed about 9000 inverse beta decay candidates in the prompt energy region of 8-12 MeV from six commercial reactors. A multivariate analysis is applied to statistically separate ~2500 signal events from backgrounds. As a result, the hypothesis of no reactor antineutrinos with energy above 10 MeV is rejected with a significance of 6.2 standard deviations. This first direct measurement of high-energy reactor antineutrinos provides a unique data-based reference for other experiments and theoretical calculations.
This work describes a burst detector (BD), consisting of ionization chambers, located at an altitude of 3340 m a.s.l. near Almaty (Kazakhstan).
The high-mountain BD is based on the prototype described earlier in [1]. The experimental data obtained from the prototype of the BD showed a good potential for creating a full-scale setup for studying the cores of extensive air showers. The BD consists of 72 ionization chambers placed perpendicular to each other in two layers. Experimental events are synchronized using a GPS receiver.
At the moment, preliminary experimental data from the BD have been obtained and analyzed.
Reference
[1] O.A. Kalikulov, N.O. Saduyev et al., Study of the spatiotemporal structure of extensive air showers at high energies, 2022 JINST 17 C04014.
We employ the method of Padรฉ approximants to study the higher-order corrections of the massless scalar-current quark correlator. We begin by testing this method in the large-$\beta_0$ limit of QCD, where the perturbative series is known to all orders, using it as a testing ground to determine the best strategy to build the series at higher orders using only the first four coefficients. Applying the procedure in QCD, we estimate the yet unknown coefficient of order $\alpha_s^5$ (six loops) of the imaginary part of the correlator, directly related to $\Gamma(H \to b \bar{b})$, in a model-independent way as $-6900 \pm 1400$. We conclude that with this correction the series is almost insensitive to renormalization scale variations. This corroborates that the QCD corrections to this decay are under excellent control and the uncertainty of $\Gamma(H \to b\bar{b})$ will continue to be dominated by the Standard Model parameters in the near future, mainly the strong coupling and the bottom-quark mass.
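As an illustration of the coefficient-estimation strategy, the following sketch builds a [1/2] Padé approximant from the first four series coefficients and predicts the fifth. The input coefficients here are placeholders (a Fibonacci generating function used as a sanity check), not the actual correlator series:

```python
# Hedged sketch of next-coefficient estimation with a [1/2] Pade approximant.
import numpy as np

def pade_12_next(c):
    """Given the first four Taylor coefficients c = [c0, c1, c2, c3], fit the
    [1/2] Pade form (a0 + a1*x) / (1 + b1*x + b2*x**2) through O(x^3) and
    return its prediction for the coefficient of x^4."""
    c0, c1, c2, c3 = c
    # Cross-multiplying and matching at x^2 and x^3 gives a 2x2 linear system:
    #   c1*b1 + c0*b2 = -c2
    #   c2*b1 + c1*b2 = -c3
    b1, b2 = np.linalg.solve(np.array([[c1, c0], [c2, c1]], float), [-c2, -c3])
    # Matching at x^4 requires c4 + c3*b1 + c2*b2 = 0, hence the prediction:
    return -(c3 * b1 + c2 * b2)

# Sanity check: for 1/(1 - x - x^2) the coefficients are Fibonacci numbers
# (1, 1, 2, 3, 5, ...) and the [1/2] approximant recovers the next one exactly.
print(pade_12_next([1.0, 1.0, 2.0, 3.0]))  # -> 5.0
```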
A maverick top-partner model decaying to a dark photon has been suggested. The dark photon decays to two overlapping electrons for dark photon masses around 100 MeV, resulting in a so-called lepton-jet. Lepton jets are mostly unexplored objects in collider searches, and the absence of hints of new physics at the LHC so far makes these unusual topologies attractive. The event includes a top quark as well, which results in events with two boosted objects, one heavy and the other ultra-light. We propose a search strategy exploiting this unique signal topology. We show that, for a set of kinematic selections, in both the hadronic and leptonic decay channels of the SM top quark, almost all background can be eliminated, leaving enough signal events up to a top partner mass of about 3 TeV for the search to be viable at the LHC.
With the large datasets on $e^+e^-$ annihilation at the $J/\psi$ and $\psi(3686)$ resonances collected at the BESIII experiment, multi-dimensional analyses making use of polarization and entanglement can shed new light on the production and decay properties of hyperon-antihyperon pairs. In a series of recent studies performed at BESIII, significant transverse polarization of the (anti)hyperons has been observed in $J/\psi$ or $\psi(3686)$ decays to $\Lambda\bar{\Lambda}$, $\Sigma\bar{\Sigma}$, $\Xi\bar{\Xi}$, and $\Omega^-\bar{\Omega}^+$, and the spin of the $\Omega^-$ has been determined model-independently for the first time. The decay parameters for the most common hadronic weak decay modes were measured, and due to the non-zero polarization, the parameters of hyperon and antihyperon decays could be determined independently of each other for the first time. Comparing the hyperon and antihyperon decay parameters yields precise tests of direct, $\Delta S = 1$ CP violation that complement studies performed in the kaon sector.
Femtoscopy is a technique that measures the space-time characteristics of the particle-emitting source created in heavy-ion collisions using momentum correlations between two particles. In this report, the two-pion and two-kaon femtoscopic correlations for Pb$-$Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV are studied within the framework of (3+1)D viscous hydrodynamics combined with the THERMINATOR 2 code for statistical hadronization. The femtoscopic radii, i.e. the source sizes for pions and kaons, are estimated as functions of pair transverse momentum and centrality in all three pair directions. The radii decrease with pair transverse momentum and transverse mass for all centralities, which signals the presence of strong collectivity in the system. Moreover, an effective scaling of the radii with pair transverse mass is observed for both pions and kaons.
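For context, femtoscopic radii are typically obtained by fitting a Gaussian source parametrization to the correlation function; the sketch below fits the standard one-dimensional form $C(q) = 1 + \lambda\, e^{-(qR/\hbar c)^2}$ to synthetic data (the parametrization is standard, but the data points are invented):

```python
# Hedged sketch: extracting a femtoscopic radius from a toy correlation
# function with the standard 1D Gaussian form. All data points are synthetic.
import numpy as np
from scipy.optimize import curve_fit

HBARC = 0.19733  # GeV*fm, converts q in GeV/c and R in fm

def c2_gauss(q, lam, R):
    return 1.0 + lam * np.exp(-(q * R / HBARC) ** 2)

# Synthetic correlation function for lambda = 0.5, R = 5 fm, with noise.
rng = np.random.default_rng(1)
q = np.linspace(0.005, 0.3, 60)                      # GeV/c
y = c2_gauss(q, 0.5, 5.0) + rng.normal(0, 0.01, q.size)

(lam_fit, R_fit), _ = curve_fit(c2_gauss, q, y, p0=[0.5, 4.0])
print(f"lambda = {lam_fit:.2f}, R = {R_fit:.2f} fm")
```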
We report on our recent results on the production mechanism of the famous $X(3872)$ meson [1], whose structure is not well known. We calculate the total cross section and transverse momentum distributions for the production of the enigmatic $\chi_{c,1}(3872)$ (or $X(3872)$), assuming different scenarios: a $c \bar c$ state and a $D^{0*} {\bar D}^0 + D^0 {\bar D}^{0*}$ molecule. The derivative of the $c \bar c$ wave function needed in the first scenario is taken from potential-model calculations. Compared to earlier calculations for the molecular state, we include not only the single parton scattering (SPS) contribution but also the double parton scattering (DPS) one. The latter seems to give a smaller contribution than the SPS. The upper limit for the DPS production of $\chi_{c,1}(3872)$ is much below the CMS data. We compare the results of our calculations with the existing experimental data of the CMS, ATLAS and LHCb collaborations. Reasonable cross sections can be obtained in either the $c \bar c$ or the molecular $D {\bar D}^*$ scenario for the $X(3872)$. A hybrid scenario is also not excluded.
[1] A. Cisek, W. Schaefer and A. Szczurek, arXiv:2203.07827.
The implementation of a web portal dedicated to Higgs boson research is presented. A database of more than 1000 relevant articles has been created using the CERN Document Server API and web scraping methods. The database is automatically updated when new results on the Higgs boson become available. Using natural language processing, the articles are categorised according to properties of the Higgs boson and other criteria. The process of designing and implementing the Higgs Boson Portal (HBP) is described in detail. The components of the HBP are deployed to CERN Web Services using the OpenShift cloud platform. The web portal is operational and freely accessible at http://cern.ch/higgs.
JUNO (Jiangmen Underground Neutrino Observatory) is a 20 kton scintillation detector aimed at studying fundamental properties of neutrinos, such as the neutrino mass ordering and oscillation parameters. The experiment is currently under construction in Kaiping, China, and is expected to be commissioned next year. To reach its goals, JUNO will strongly rely on an accurate description of the scintillator. This includes the emission spectrum of the scintillator, the contribution of Cherenkov light, and the characteristic times and weights of the fluorescence components.
SHELDON (Separation of cHErenkov Light for Directionality Of Neutrinos) is a small-scale setup developed to determine the contribution of Cherenkov light in the scintillator cocktail used in JUNO, as well as to measure its fluorescence parameters more accurately than ever before.
I will report on the accurate measurement of the characteristic times and weights included in the description of the fluorescence process, emphasizing the impact of Cherenkov light separation and of a thorough characterization of the setup on the accuracy of the results. Moreover, I will present the next steps that will enable the evaluation of the Cherenkov contribution as well as the separation of Cherenkov light and its possible use to determine the direction of neutrinos interacting in JUNO.
In preparation for LHC Run 3, ATLAS completed a major effort to improve the track reconstruction performance for prompt and long-lived particles. Resource consumption was halved while expanding the charged-particle reconstruction capacity. Large-radius track (LRT) reconstruction, targeting long-lived particles (LLP), was optimized to run in all events expanding the potential phase-space of LLP searches. The detector alignment precision was improved to avoid limiting factors for precision measurements of Standard Model processes. Mixture density networks and simulating radiation damage effects improved the position estimate of charged particles overlapping in the ATLAS pixel detector, bolstering downstream algorithms' performance. The ACTS package was integrated into the ATLAS software suite and is responsible for primary vertex reconstruction. The talk will highlight the above achievements and report on the readiness of the ATLAS detector for Run 3 collisions and hopefully some fresh 13.6 TeV collision data!
Transport properties of the quark-gluon plasma (QGP) created in ultra-relativistic heavy-ion collisions contain important information on quantum chromodynamics (QCD). With a more precise estimate of the transport properties, such as the specific shear and bulk viscosities, it is possible to deepen our understanding of QCD. In this talk, we present our latest study inferring the transport properties of the QGP with an improved Bayesian analysis using the CERN Large Hadron Collider Pb-Pb data at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV. We show that the uncertainty on the transport coefficients is significantly reduced by including the latest flow harmonic measurements, reflecting mostly nonlinear hydrodynamic responses. The analysis also reveals that higher-order harmonic flows and their correlations have a higher sensitivity to the transport properties than the other observables. This observation shows the necessity of accurate measurements of these observables in the future.
A search for standard model Higgs bosons produced with transverse momentum greater than 450 GeV and decaying to charm quark-antiquark pairs is performed using proton-proton collision data collected by the CMS experiment at the LHC at 13 TeV. The search is inclusive in the Higgs boson production mode. Highly Lorentz-boosted Higgs bosons are reconstructed as single large-radius jets and are identified using a dedicated tagging technique based on a Deep Neural Network. The method is validated with Z to charm quark-antiquark pair decays and this process is observed for the first time in the Drell-Yan production mode at a hadron collider.
Despite modern particle physics being an international endeavour, the vast majority of its educational material is published only in English. By making material available in other languages, physicists can make inroads with new audiences, especially the very young and very old, in their home countries. The ATLAS Collaboration has published colouring books, a teaching guide, activity sheets, fact sheets and cheat sheets aimed at communicating science to a non-expert audience. An effort is underway to translate this content into as many languages as possible, taking advantage of the countless multilingual members of the collaboration. Currently, all of this content is available in at least two languages other than English, with the ATLAS Colouring Book available in the most languages (19 so far). The reach of this multilingual content is presented.
We report on a 2021 update of a phenomenological model for inelastic neutrino- and electron-nucleon scattering cross sections using effective leading-order parton distribution functions with a new scaling variable $\xi_w$. Non-perturbative effects are well described using the $\xi_w$ scaling variable in combination with multiplicative $K$ factors at low $Q^2$. The model describes all inelastic charged-lepton-nucleon scattering data (HERA/NMC/BCDMS/SLAC/JLab), ranging from very high $Q^2$ to very low $Q^2$ and down to the $Q^2=0$ photo-production region. The model has been developed for use in the analysis of neutrino oscillation experiments in the few-GeV region. The 2021 update accounts for the difference between the axial and vector structure functions, which brings the model into much better agreement with neutrino-nucleon total cross section measurements. The model has been developed primarily for hadronic final-state masses $W$ above 1.8 GeV; however, with additional parameters the model also describes the $average$ neutrino cross sections in the resonance region down to $W$=1.4 GeV.
We have written a general-purpose code in the C language for the analytical inversion of large matrices by treating them in block form. We have optimized the computation speed using in-place inversion, dynamic memory handling and recursion techniques. The code is designed to be adopted by programs that require fast and exact solutions of systems of linear equations in C and Fortran. We have applied it in our study of tau neutrino events at the India-based Neutrino Observatory (INO). As an example, the times required to compute the exact inverse of matrices of order 100 and 1000 are 6 ms and 6.2 s, respectively, on an Intel i7-6700 CPU, 8 GB RAM machine. We also present our alternative technique and results for computing the inverse of ultra-large matrices (of order > $10^4$) using parallel processing on clusters.
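Since the abstract describes the algorithm but the C code itself is not shown, here is an illustrative NumPy version of the block-inversion recursion via the Schur complement; the authors' implementation additionally works in place with dynamic memory handling, which this toy does not attempt:

```python
# Recursive 2x2 block inversion via the Schur complement (illustrative only).
import numpy as np

def block_inverse(M, base=64):
    """Invert M recursively by 2x2 block partition and Schur complement.
    Assumes the leading block stays invertible at every level of recursion."""
    n = M.shape[0]
    if n <= base:
        return np.linalg.inv(M)        # small blocks: direct inversion
    k = n // 2
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = block_inverse(A, base)
    S = D - C @ Ainv @ B               # Schur complement of A
    Sinv = block_inverse(S, base)
    AinvB = Ainv @ B
    CAinv = C @ Ainv
    return np.block([[Ainv + AinvB @ Sinv @ CAinv, -AinvB @ Sinv],
                     [-Sinv @ CAinv,                Sinv]])

# Check against NumPy's direct inversion on a well-conditioned random matrix.
rng = np.random.default_rng(2)
M = rng.normal(size=(300, 300)) + 300 * np.eye(300)
print(np.allclose(block_inverse(M), np.linalg.inv(M)))  # True
```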
Studying the energy and multiplicity dependence of strange hadron production in pp collisions provides a powerful tool for understanding similarities and differences between small and large collision systems. The charged-particle multiplicity is an important characteristic of the hadronic final state of a pp interaction, but it also reflects the initial dynamics of the collision, being strongly correlated with the energy effectively available for particle production in its initial stages (the effective energy).
A new multi-differential analysis is performed to separate initial and final state effects on strangeness production in small collision systems. The production of (multi)strange hadrons is studied in pp collisions at $\sqrt{s}$ = 13 TeV as a function of the charged-particle multiplicity measured at midrapidity and the forward energy detected by ALICE Zero Degree Calorimeters.
The results provide new insights into the role of initial state effects on strangeness production.
In this study, the total macroscopic cross sections for thermal and fast neutron interactions with quartz, glass, and some materials such as Al, W, and stainless steel, doped with B2O3 and Gd2O3, were computed using the Monte Carlo N-Particle code (MCNP6.2). In addition, the macroscopic effective removal cross sections for fast neutron interactions were calculated theoretically from the mass removal cross-section values of the various elements in the materials and additives. The results show that the highest values of both the thermal and the fast neutron total macroscopic cross sections were obtained with Gd2O3-doped glass. Moreover, Gd2O3 doping gives the highest fast neutron total macroscopic cross section among all additives. The results of this study provide a good understanding of the shielding properties of quartz, glass, and other materials such as Al, W, and stainless steel, doped with B2O3 and Gd2O3, for thermal and fast neutrons.
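As a worked illustration of the removal-cross-section recipe, the snippet below applies $\Sigma_R = \rho \sum_i w_i (\Sigma_R/\rho)_i$, with $w_i$ the mass fractions; the composition and the mass removal cross sections are placeholder values, not those used in the study:

```python
# Hedged example of the effective-removal-cross-section recipe:
# Sigma_R = rho * sum_i w_i * (Sigma_R/rho)_i, with w_i the mass fractions.
def effective_removal_xs(density, composition):
    """density in g/cm^3; composition maps element -> (mass fraction,
    mass removal cross section in cm^2/g). Returns Sigma_R in 1/cm."""
    return density * sum(w * xs for w, xs in composition.values())

# Hypothetical glass-like mixture (illustrative numbers only):
glass = {"O":  (0.53, 0.0405),
         "Si": (0.47, 0.0295)}
print(f"Sigma_R = {effective_removal_xs(2.5, glass):.4f} cm^-1")
```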
Jet flavour identification algorithms are of paramount importance to maximise the physics potential of the Future Circular Collider (FCC). As one example, out of the extensive FCC-ee physics program, flavour tagging is crucial for the Higgs program, given the dominance of hadronic decays of the Higgs boson. A highly efficient discrimination of b, c, strange, and gluon jets allows access to novel decay modes that cannot be identified at the LHC, adding quantitatively new dimensions to the Higgs physics programme. In this contribution, we will present new jet flavour identification algorithms based on machine learning (ML) techniques that exploit particle-level information, and their application to FCC-ee physics events. Beyond an excellent performance on b- and c-quark tagging, the approach is able to discriminate also jets from strange-quark hadronization, opening the way to improved sensitivity to the Higgs-to-strange coupling. The impact of different detector design assumptions on the flavour tagging performance is assessed using the two baseline detector concepts, IDEA and CLD.
The Jiangmen Underground Neutrino Observatory is a 20 kton multi-purpose liquid scintillator detector with a $3\%/\sqrt{E ({\rm MeV})}$ energy resolution, located in a 700 m underground laboratory in the south of China (Jiangmen city, Guangdong province). The exceptional energy resolution and the massive fiducial volume of the JUNO detector offer great opportunities for addressing many essential topics in neutrino and astroparticle physics. JUNO's primary goals are to determine the neutrino mass ordering and precisely measure the related neutrino oscillation parameters. By looking at the visible signals of the final states, JUNO has excellent potential for reconstructing the energy and direction of atmospheric neutrino events. Thus, the atmospheric neutrino measurement at JUNO can provide vital information for neutrino physics. This poster presents the JUNO mass-ordering sensitivity analysis using atmospheric neutrinos. With its energy and direction reconstruction and particle identification performance, atmospheric neutrinos at JUNO can help determine the neutrino mass ordering.
Kaon production cross sections provide a crucial constraint on K+ production by atmospheric neutrinos in proton decay searches. Current neutrino-nucleus event generators largely rely on theoretical models for the description of backgrounds due to kaons, and these need to be verified by measurements. The event rate for these processes is low compared to pion production channels because of Cabibbo suppression and the relatively large kaon mass. Recent measurements with large statistics for kaon production were reported by the MINERvA experiment at higher neutrino energies. T2K measures this process at lower energies, close to the threshold for strangeness production, where existing measurements from bubble chambers have limited statistics. This search for charged-current neutrino interactions that produce a K+ in the final state was performed in the ND280 Fine-Grained Detector (FGD), a scintillator-based tracking calorimeter within the T2K near detector. Events with a K+ are identified in T2K by studying the energy deposition of tracks in the Time Projection Chamber. This poster will show the latest results for the selected kaon sample, together with the method used to estimate the backgrounds and evaluate a one-bin cross section in the restricted phase space.
A Cosmic Muon Veto (CMV) detector using extruded plastic scintillators is being built around the mini Iron Calorimeter (mini-ICAL) detector at the transit campus of the India-based Neutrino Observatory, Madurai. The extruded plastic scintillators are embedded with wavelength-shifting (WLS) fibres, which shift the photons to longer wavelengths and propagate them to silicon photomultipliers (SiPMs). The SiPMs detect these photons, producing electronic signals. The CMV detector will require more than 700 scintillators to shield the mini-ICAL detector and about 3000 SiPMs for the readout. The design goal for the cosmic muon veto efficiency of the CMV is $>$99.99%, with a fake veto rate of less than 10$^{-5}$. Hence, every SiPM used in the detector needs to be characterised to satisfy the design goal of the CMV. A large-scale testing system was developed, using an LED driver, to measure the gain and noise rate of each SiPM, and thus determine its over-voltage ($V_{ov}$). The test data and the analysed characteristics of about 3.5k SiPMs will be presented in this paper.
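A common way to extract the gain in such LED tests is from the spacing of photoelectron peaks in the charge spectrum; the sketch below does this on a synthetic spectrum (all numbers invented, and not the collaboration's analysis code):

```python
# Illustrative SiPM gain extraction from a toy LED charge spectrum.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)
# Toy spectrum: Poisson-distributed PE number, Gaussian peak widths.
gain_true, pedestal = 40.0, 100.0     # ADC counts (hypothetical)
npe = rng.poisson(1.5, 200000)
charge = pedestal + gain_true * npe + rng.normal(0, 6.0, npe.size)

hist, edges = np.histogram(charge, bins=300)
centers = 0.5 * (edges[:-1] + edges[1:])
peaks, _ = find_peaks(hist, prominence=hist.max() * 0.02)

# The gain is the mean spacing of consecutive photoelectron peaks.
gain = np.mean(np.diff(centers[peaks]))
print(f"estimated gain: {gain:.1f} ADC counts / PE")
```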
Heavy-quark symmetry (HQS), despite being approximate, makes it possible to relate the dynamics of many hadron systems. In the HQS limit, heavy mesons and doubly-heavy baryons are very similar, as their dynamics is determined by a light quark moving in the color field of a static source. As in the meson case, matrix elements of non-local interpolating currents between the baryon state and the vacuum are determined by light-cone distribution amplitudes (LCDAs). The first inverse moment of the leading-twist $B$-meson distribution amplitude is a hadronic parameter needed for an accurate theoretical description of $B$-meson exclusive decays. It is quite natural that a similar moment of the doubly-heavy baryon is of importance in exclusive doubly-heavy baryon decays. We obtain HQET sum rules for the first inverse moment, based on correlation functions containing the nonlocal heavy-light operator of the doubly-heavy baryon and its local interpolating current. Numerical estimates of this moment are presented.
In view of the HL-LHC, the Phase-2 CMS upgrade will replace the entire trigger and data acquisition system. The detector readout electronics will be upgraded to allow a maximum L1A rate of 750 kHz and a latency of 12.5 µs. The upgraded system will run entirely on commercial FPGA processors and should greatly extend the capabilities of the current system, being able to maintain trigger thresholds despite the harsh environment, as well as to trigger on more exotic signatures such as long-lived particles to extend the physics coverage. The muon trigger should be able to identify muon tracks in the experiment and measure their momenta and other parameters for use in the global trigger menu. In addition to the muon detector upgrades, which include improved electronics and new sub-detectors, the presence of an L1 track finder in CMS will bring some of the offline muon reconstruction capability to the L1 trigger, delivering unprecedented reconstruction and identification performance. In this contribution, we review the current status of the design of the highly efficient muon trigger, its architecture, and the foreseen muon reconstruction and identification algorithms.
This contribution presents an update on the Analytical Method (AM) algorithm for trigger primitive (TP) generation in the CMS Drift Tube (DT) chambers during High Luminosity LHC operation (HL-LHC or LHC Phase 2). The algorithm has been developed and validated both in software, with an emulation approach, and through hardware implementation tests. The algorithm is divided into the following main steps: a grouping (pattern recognition) step that finds the path of a given muon, a fitting step that extracts the track parameters (position and bending angle), and a correlation step that matches the information from the different super-layers and from the Resistive Plate Chamber signals. Agreement between the software emulation and the firmware implementation has been verified using different data samples, including a sample of real muons collected during 2016 data taking. In this contribution, an update of the grouping step using a pseudo-Bayes classifier will be discussed.
The upcoming High-Luminosity LHC will produce about 200 proton-proton collisions per bunch crossing on average, creating highly complex events that demand efficient data reconstruction and processing. In order to meet these requirements, the Compact Muon Solenoid (CMS) experiment is upgrading its Level-1 trigger system. Among these updates will be the reconstruction of charged-particle tracks in the silicon tracker, enabling more precise track selection further down the pipeline. In this work, we present the development of a track quality variable which combines many of the reconstructed track properties into one feature that describes whether the track is genuine or fake, i.e., whether the reconstruction represents a real particle or not. Using machine learning techniques, track quality can be evaluated and used to select tracks efficiently and quickly with minimal computational resources. This track quality variable is of great value to standard model analyses that require precise reconstruction, such as missing-energy analyses.
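As a rough illustration of such a quality variable (a toy sketch with hypothetical feature names, not the CMS Level-1 implementation), a classifier can be trained on per-track properties and its output used as the single quality score:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 20_000
# Toy per-track features: chi2/dof, number of stubs, |eta| (hypothetical
# stand-ins for the reconstructed track properties combined in practice).
genuine = np.column_stack([rng.gamma(2, 1, n), rng.integers(4, 7, n), rng.uniform(0, 2.4, n)])
fake    = np.column_stack([rng.gamma(6, 2, n), rng.integers(4, 6, n), rng.uniform(0, 2.4, n)])
X = np.vstack([genuine, fake])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = genuine track, 0 = fake track

clf = GradientBoostingClassifier(max_depth=3, n_estimators=100).fit(X, y)
quality = clf.predict_proba(X)[:, 1]            # one quality score per track
keep = quality > 0.5                            # cut tuned for efficiency vs purity
print(f"selected fraction: {keep.mean():.2f}")
```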
The Two-Real-Singlet Model (TRSM) is a new physics model that extends the scalar sector of the SM by two additional CP-even scalars. It leads to a large variety of interesting signatures, some of which have not yet been explored by the LHC experiments. I will in particular discuss the option to explore the hhh final state within this model.
At BESIII, the lineshapes of $e^+e^- \to \phi \eta', \phi \eta, K K, \omega \pi^0, \eta \pi \pi,$ and $\omega \pi \pi$ are measured from 2.0 to 3.08 GeV, where resonant structures are observed. Multiple lineshapes of intermediate states are obtained by a partial wave analysis of $e^+e^- \to K^+ K^- \pi^0 \pi^0, K^+K^- \pi^0$, and the observed structures provide essential input for understanding the nature of the $\phi(2170)$. These results provide important information on light-flavor vector mesons, i.e., excited $\rho, \omega$ and $\phi$ states, in the energy region above 2 GeV.
The world's largest sample of $J/\psi$ events, accumulated at the BESIII detector, offers a unique opportunity to investigate $\eta$ and $\eta'$ physics via two-body $J/\psi$ radiative or hadronic decays. In recent years the BESIII experiment has made significant progress in $\eta/\eta'$ decays. A selection of recent highlights in light-meson spectroscopy at BESIII is reviewed in this report, including the observation of $\eta' \to \pi^+\pi^-\mu^+\mu^-$, the observation of the cusp effect in $\eta' \to \pi^0\pi^0\eta$, the search for CP violation in $\eta' \to \pi^+\pi^-e^+e^-$, as well as the precision measurement of the branching fraction of $\eta$ decays.
The linear accelerator Linac-200 at JINR is a new facility constructed to provide electron test beams for particle detector R&D, for studies of advanced methods of electron beam diagnostics, and for applied research. The core of the facility is a refurbished MEA accelerator from NIKHEF. The key accelerator subsystems, including controls, vacuum, and precise temperature regulation, were completely redesigned or deeply modernized. Two test beam channels are available for users: the first with electron energies in the range 5–25 MeV and a maximum pulse current of 60 mA, and the second with electron energies in the range 40–200 MeV and a maximum pulse current of 40 mA. The pulse current can be varied smoothly from the maximum value down to almost zero (single electrons per pulse). This report presents the status and operation parameters of the facility.
Cross section measurements for heavy-ion collision processes require a precise estimate of the integrated luminosity of the recorded data set. During the 2015-2018 data-taking period ("Run 2") of the LHC, lead-lead, proton-lead, and proton-proton collisions at the reference energy were recorded with the CMS experiment. The luminosity measurements for these data sets are reported. The absolute luminosity scale is calibrated with beam-separation ("van der Meer") scans, performed separately for each collision system. Several sources of systematic uncertainty are studied and corrected for in the analysis of the van der Meer scan data to improve the precision. When the van der Meer calibration is applied to the entire data-taking period, a substantial contribution to the total uncertainty in the integrated luminosity comes from the measurement of detector stability.
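For orientation, a schematic summary of the standard van der Meer relations is given below (textbook notation, not taken from the analysis itself): with $f$ the LHC revolution frequency, $n_b$ the number of colliding bunch pairs, $N_1 N_2$ the bunch-population product, $\Sigma_x, \Sigma_y$ the effective beam-overlap widths extracted from the scan curves, and $\mu_{\rm vis}^{\rm peak}$ the peak visible interaction rate per bunch crossing,

$$\mathcal{L} = \frac{f\, n_b\, N_1 N_2}{2\pi\, \Sigma_x \Sigma_y}\,, \qquad \sigma_{\rm vis} = \frac{2\pi\, \Sigma_x \Sigma_y\, \mu_{\rm vis}^{\rm peak}}{N_1 N_2}\,.$$

The visible cross section $\sigma_{\rm vis}$ then converts the rate measured by each luminometer into an absolute luminosity for the whole data-taking period.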
In high-energy physics experiments, scintillating materials and Cherenkov radiators, which emit light pulses at characteristic wavelengths when traversed by an incident particle, are widely used. The information carried by the light pulse reflects the characteristics of the incident particle. Since the directly emitted light is always weak (limited by the light yield of the medium), a photomultiplier tube (PMT) is used to perform the photoelectric conversion and electron multiplication, so that the PMT output signal can be discriminated by the back-end data acquisition system. The time-to-digital converter (TDC) and the charge-to-digital converter (QDC) are two kinds of waveform-analysis modules that record the time and charge information of the input waveform. In recent years, the development of fast analog-to-digital converters (FADCs), which record the whole waveform, has made it possible to analyze offline the information carried by the waveform with different methods and to obtain more information.
Pulse shape discrimination (PSD) is a waveform-based method to discriminate between different kinds of radiation. For potassium cryolite crystals, the main difference between gamma and neutron pulses is the presence of fast components on the falling edge with different time constants, which leads to different gamma and neutron pulse shapes. The traditionally used PSD method here is the charge comparison method, whose performance strongly depends on the energy range of the incident particle. Inspired by the literature, a model based on a convolutional neural network (CNN) was developed, and the accuracy of the $n/\gamma$ discrimination for single-particle waveforms reaches 99% for both CLYC and CLLB crystals.
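For orientation, the charge-comparison baseline mentioned above can be sketched as follows (a toy illustration with made-up gate lengths, not the analysis code of this work): the discriminating quantity is the fraction of the pulse integral contained in the tail of the waveform.

```python
import numpy as np

def psd_ratio(waveform, t0, short_gate=20, long_gate=100):
    """Charge-comparison PSD: tail-to-total ratio of the pulse integral.

    waveform -- baseline-subtracted samples; t0 -- index of the pulse onset.
    Gate lengths (in samples) are illustrative and would be tuned per crystal.
    """
    total = waveform[t0:t0 + long_gate].sum()
    tail = waveform[t0 + short_gate:t0 + long_gate].sum()
    return tail / total

# Toy pulses: neutrons carry a larger slow component than gammas.
t = np.arange(100, dtype=float)
gamma = np.exp(-t / 5.0)
neutron = 0.7 * np.exp(-t / 5.0) + 0.3 * np.exp(-t / 40.0)
print(psd_ratio(gamma, 0), psd_ratio(neutron, 0))   # neutron ratio is larger
```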
Besides the energy information carried by the pulse, the time information is also important for reconstructing particle trajectories, especially in time-of-flight detectors. The traditionally used timing methods, including leading edge discrimination (LED) and constant fraction discrimination (CFD), are easily realized in circuitry, but the time information of the pulse is not fully exploited, since only a subset of the waveform samples is used. Former studies have shown that a CNN-based model can improve the timing performance of paired PMTs by nearly 20%, but such methods show a regression bias because the number of labels is limited. We therefore developed a new method to train a CNN model for the timing of paired PMTs. Instead of using real waveforms recorded at different distances from the radioactive source, only a group of waveforms is acquired at a fixed distance from the source. The paired input waveforms with different labels are produced by delaying or advancing the time of one waveform. The results show a 50% improvement over the CFD method at 50% threshold. Validation is still ongoing to verify that the model is well-trained for real paired waveforms at different distances from the radioactive source.
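The augmentation strategy described above can be sketched as follows (a toy illustration under stated assumptions, not the actual training pipeline): one waveform of a pair is shifted by a known number of samples, and the imposed shift becomes the regression label.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(256, dtype=float)
pulse = np.exp(-0.5 * ((t - 80.0) / 6.0) ** 2)     # toy PMT pulse at a fixed position

def make_pair(shift):
    """Create a training pair by delaying (shift > 0) or advancing (shift < 0)
    one waveform; the imposed shift is the timing label for the CNN."""
    w1 = pulse + rng.normal(0, 0.02, t.size)
    w2 = np.roll(pulse, shift) + rng.normal(0, 0.02, t.size)
    return np.stack([w1, w2]), float(shift)

# Dense, continuous labels without moving the source: sample shifts at random.
X, y = zip(*[make_pair(int(s)) for s in rng.integers(-20, 21, 10_000)])
X, y = np.asarray(X), np.asarray(y)
print(X.shape, y.shape)   # (10000, 2, 256) inputs for a small 1D CNN regressor
```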
In high-energy particle physics, complex Monte Carlo simulations are needed to connect the theory to measurable quantities. Often, the significant computational cost of these programs becomes a bottleneck in physics analyses.
In this contribution, we evaluate an approach based on a Deep Neural Network to reweight simulations to different models or model parameters, using the full kinematic information in the event. This methodology avoids the need for simulating the detector response multiple times by incorporating the relevant variations in a single sample.
We test the method on Monte Carlo simulations of top quark pair production used in CMS, which we reweight to different SM parameter values and to different QCD models.
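The reweighting approach can be illustrated with the standard classifier-based likelihood-ratio trick (a minimal toy sketch under our own assumptions, not the CMS implementation): a classifier trained to separate the nominal sample from the target variation yields per-event weights $w = p/(1-p)$ that morph the nominal sample into the target.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
nominal = rng.normal(0.0, 1.0, (50_000, 1))   # toy "events" from the nominal model
target  = rng.normal(0.3, 1.1, (50_000, 1))   # toy "events" from the variation

X = np.vstack([nominal, target])
y = np.concatenate([np.zeros(len(nominal)), np.ones(len(target))])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=200).fit(X, y)

# Likelihood-ratio trick: p/(1-p) estimates p_target(x)/p_nominal(x), so the
# weighted nominal events reproduce the target distribution.
p = clf.predict_proba(nominal)[:, 1]
weights = p / (1.0 - p)
print(np.average(nominal[:, 0], weights=weights), target[:, 0].mean())  # both ~0.3
```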
The proposed ICAL detector is designed to detect muons generated from interactions of $\nu_{\mu}$ and anti-$\nu_{\mu}$ with iron. It is designed with a maximum magnetic field of about 1.5 T, with 90% of its volume having a field above 1 T. The purpose of the magnetic field is charge identification and momentum reconstruction of the muons. The mini-ICAL is a fully functional 85-ton prototype detector. It consists of 11 layers of iron and 10 layers of RPCs placed in the air gaps between the iron layers. Each iron layer is made up of 7 plates of soft iron. Two sets of copper coils carry the current that produces the magnetic field in the detector. One of the main challenges of the mini-ICAL detector is to produce the required B-field and to measure it as accurately as possible for muon studies. The measured B-field is compared with 3-D finite-element electromagnetic simulations to find the correlation between the two B-field values.
Hall sensor PCBs and search coils are used to measure the B-field in the detector. The Hall sensors provide real-time measurements of the B-field, while the search coils provide B-field values during the ramp-up and ramp-down of the current through the copper coils. Calibration and systematic studies of the characteristics of the Hall sensors used for the measurements have been carried out. Out of the 11 iron layers, 3 layers (1, 6 and 11) have provision for B-field measurements using Hall sensors and search coils. In these layers, the gap between adjacent plates is kept at 3–4 mm to allow insertion of the Hall sensor PCBs, and a set of 5 search coils is wound around the iron plates at suitable locations. In the remaining layers, the gap between the plates is kept at 2 mm.
The static 3-D simulation is performed using the MAGNET 7.7 software for the 11-layer and single-layer models of mini-ICAL. Various parameters (mesh size, etc.) are optimized for the iron as well as for the air. The full geometry is simulated for different values of the coil current. A detailed comparison between the measured and simulated B-fields will be presented in this paper. This will help in completing the study of the final magnetic field configuration of ICAL.
The latest results from the KamLAND (Japan) and Borexino (Italy) experiments give us an unprecedented opportunity to investigate the inner Earth. For almost 20 years these experiments have been collecting the feeble signal of geoneutrinos, the electron antineutrinos produced in the 238U and 232Th decay chains inside our planet. The energy released in these radioactive decays (i.e., the radiogenic power), together with the slow secular cooling of our planet, represents one of the main heat sources powering the internal dynamic processes of the Earth. Since 238U and 232Th release heat and geoneutrinos in a fixed ratio, the measurement of the geoneutrino flux at the Earth's surface makes it possible to constrain the uranium and thorium content of our planet and, in turn, to derive the terrestrial heat power.
We present insights on mantle radioactivity and on the contribution of radiogenic heat to the Earth's energy budget, obtained from the combination of the latest geoneutrino results from KamLAND and Borexino and an exhaustive review of crustal models. A comprehensive statistical framework, combining experimental uncertainties and correlations arising from geochemical and geophysical modeling, allowed us to recover a robust estimate of the mantle geoneutrino signal of $8.9^{+5.1}_{-5.5}$ TNU (corresponding to a radiogenic heat production of $12.5^{+7.1}_{-7.7}$ TW), the most precise estimate of the mantle geoneutrino signal to date. The results are discussed and framed in the puzzle of the diverse Earth compositional models, analyzing their implications for the planetary heat budget and composition. The presented methodology may be applied in the analysis of future results expected from the SNO+ (Canada) and JUNO (China) experiments.
Employing the hyperspherical approach, we study the ground- and excited-state mass spectra of the non-strange single-charm baryons. Introducing an ansatz method, we solve the Schrödinger equation. The hyperfine interactions are treated as a perturbation in our calculation. We extend our scheme to predict the magnetic moments and radiative decay widths of the baryons. We compare our results with the predictions of other models and with the available experimental data.
The Jiangmen Underground Neutrino Observatory (JUNO) is a medium-baseline neutrino experiment under construction in southern China, expected to begin data taking in 2023. The experiment has been proposed with the main goals of determining the neutrino mass ordering and measuring three oscillation parameters with sub-percent precision. To reach these goals, JUNO is located about 53$\,$km from two nuclear power plants and will detect electron antineutrinos from the reactors through inverse beta decay. Furthermore, an unprecedented energy resolution of 3$\,$% at 1$\,$MeV is required. The JUNO detector consists of 20$\,$kton of liquid scintillator contained in a 35.4$\,$m diameter acrylic vessel, which is instrumented with a system of about 18$\,$000 20-inch large PMTs and 25$\,$600 3-inch small PMTs, with a total photocoverage greater than 75$\,$%.
The front-end electronics for the large-PMT system consists of a Global Central Unit (GCU), which performs the analog-to-digital conversion of the waveforms a few meters away from the PMT, thus providing good performance in terms of signal-to-noise ratio. The mass production of the large-PMT electronics is currently ongoing in Kunshan, China. At the production site, several tests are performed to assess the integrity and performance of the GCUs; the integration with the back-end electronics is also tested.
This contribution will focus on the test protocol that has been developed for the mass testing of the Large-PMT electronics at Kunshan. Results of the tests will also be presented.
Neutrino electromagnetic properties are of great importance both from the point of view of fundamental theory and from the point of view of applications. It is common knowledge that neutrinos determine to a large extent the dynamics of supernova explosions. In this work we study the effect of matter polarized by an external magnetic field on neutrino spin evolution and propagation inside supernovae. Alternatively, the problem of neutrino interaction with such matter can be treated as the interaction of the induced neutrino magnetic moment (IMM) with the magnetic field. Using the corresponding interaction Lagrangian, we obtain the effective evolution equation for a neutrino with an IMM and, on its basis, consider neutrino spin oscillations for the cases of Dirac and Majorana neutrinos, with and without a neutrino anomalous magnetic moment (AMM). It is shown that due to the IMM the neutrino flux from a supernova undergoes additional attenuation. Also, the effects of the IMM and AMM, when taken together, can cancel each other, leading to a specific maximum in the neutrino spectrum from supernovae.
Based on: A. Grigoriev, E. Kupcheva, A. Ternov, Neutrino spin oscillations in polarized matter, Phys. Lett. B 797 (2019) 134861, e-Print: arXiv:1812.08635 [hep-ph].
The cross sections of deep inelastic scattering processes at the electron-proton collider HERA are a well-established tool to test perturbative QCD predictions. Additionally, they can be used to determine the non-perturbative parton distribution functions of the proton. Measurements of jet production cross sections are particularly well suited to also constrain the strong coupling constant. A new measurement of inclusive jet cross sections in neutral current deep inelastic scattering using the ZEUS detector at the HERA collider is presented. The data were taken during HERA II at a center-of-mass energy of 318 GeV and correspond to an integrated luminosity of 344 pb$^{-1}$. Massless jets, reconstructed using the $k_T$-algorithm in the Breit reference frame, are measured as a function of the squared momentum transfer $Q^2$ and the transverse momentum of the jets in the Breit frame $p_{\rm T,Breit}$. The measured jet cross sections are compared to previous measurements as well as to NNLO theory predictions. The consistency of the measurement is confirmed by a simultaneous determination of parton distribution functions and the strong coupling constant in a QCD analysis.
Two-particle normalized cumulants of particle-number correlations ($R_{2}$) and transverse-momentum correlations ($P_{2}$), measured as a function of relative pseudorapidity and azimuthal angle difference $(\Delta\eta, \Delta\varphi)$, provide key information about the particle production mechanism, diffusivity, and charge and momentum conservation in high-energy collisions. To complement the recent ALICE measurements in Pb--Pb collisions, as well as to better understand the jet contribution and the nature of collectivity in small systems, we measure these observables in pp collisions at $\sqrt{\textit{s}}$ = 13 TeV in a similar kinematic range, 0.2 $<$ $\textit{p}_{\rm T}$ $\leq$ 2.0 $\rm{GeV}/\textit{c}$. The near-side and away-side correlation structures of $R_{2}$ and $P_{2}$ are qualitatively similar but differ quantitatively. Additionally, a significantly narrower near-side peak is observed for $P_{2}$ compared to $R_{2}$ for both charge-independent and charge-dependent combinations, as in the recently published ALICE results in p--Pb and Pb--Pb collisions. Being sensitive to the interplay between the underlying event and mini-jets in pp collisions, these results not only establish a baseline for heavy-ion collisions but also allow a better understanding of signals that resemble collective effects in small systems.
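For context, these correlators are commonly defined as follows (schematic, following the conventions of the published ALICE measurements; $\rho_2$ and $\rho_1$ are the two- and single-particle densities, and $\Delta p_{\rm T} = p_{\rm T} - \langle p_{\rm T}\rangle$):

$$R_2(\Delta\eta,\Delta\varphi) = \frac{\rho_2(\Delta\eta,\Delta\varphi)}{\rho_1 \otimes \rho_1} - 1\,, \qquad P_2(\Delta\eta,\Delta\varphi) = \frac{\langle \Delta p_{\rm T,1}\, \Delta p_{\rm T,2}\rangle}{\langle p_{\rm T}\rangle^2}\,.$$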
Multiple concurring lines of evidence reveal that the vast majority of the matter content of the universe is non-baryonic and electrically neutral. This component is usually called Dark Matter (DM), for its lack of electromagnetic interactions, and is measured to constitute about 25% of the content of the Universe. The origin and nature of Dark Matter is one of the most intriguing puzzles still unresolved; the most common hypothesis is that it consists of weakly interacting massive particles (WIMPs), supposed to be cold thermal relics of the Big Bang.
The indirect detection of DM is based on the search for the products of DM annihilation or decay. They should appear as distortions in gamma-ray spectra and/or as anomalies in the rare Cosmic Ray (CR) components. In particular, antimatter components, like antiprotons, antideuterons and positrons, promise to provide sensitivity to DM annihilation on top of the standard astrophysical production.
Galactic cosmic rays span an energy range from about tens of MeV up to hundreds of TeV, and include nuclei from protons to iron and nickel, antiprotons, leptons and gamma rays. The interpretation of galactic cosmic-ray data requires not only the correct modelling of their sources and of the turbulence spectrum of the galactic magnetic field, but also knowledge of the cross sections that regulate the production and destruction of cosmic rays interacting with the interstellar medium.
For many production and inelastic cross sections, data are scarce or entirely missing.
In particular, the antiprotons in the Galaxy are of secondary origin, produced by the scattering of cosmic-ray protons and helium nuclei off the hydrogen and helium in the interstellar medium.
The only measured production cross section is the proton-proton one, while all the reactions involving helium lack laboratory data in the useful antiproton energy range (0.1–100 GeV). The empirical modelling of those cross sections induces an uncertainty of about 30-40% in the antiproton flux, to be compared with the 10% accuracy of the AMS-02 high-precision data on the antiproton flux.
A dedicated measurement campaign aimed at the exclusive p + He cross section, with particular interest in the antiproton + X channel, is crucial for the search for DM signals in the spectra of antiprotons in cosmic rays.
While some experimental datasets on p-p collisions are available, the very first dataset on p-He collisions was collected in 2016 by the LHCb experiment at 6.5 TeV.
The AMBER fixed-target experiment at CERN would contribute to this fundamental DM search by performing a unique and complementary measurement with a proton beam of a few hundred GeV/c impinging on a LHe target. The proposed experiment aims to measure the double-differential antiproton production cross section for proton beam energies supplied by the M2 beam line (20–280 GeV/c), which will provide a complementary input to the LHCb TeV-scale results.
A programme for the experimental determination of the antiproton production cross section in p+4He scattering is included in the first phase of the AMBER experiment, which was approved by CERN in 2020 and is scheduled to run from 2023 onward.
Properties of 8-inch silicon-sensor prototypes for the CMS High Granularity Calorimeter (HGCAL) have been studied by measuring the leakage current and depletion voltage, before and after irradiation, at CERN.
A semi-automated measurement setup, called PM8, and a fully-automated setup, called ALPS (Automatic Low-temperature Probe Station), have been developed at CERN for this purpose.
Similar measurements have also been made at Florida State University (USA). Sensors with different properties (thickness, oxide quality, p-stop, etc.), supplied by Hamamatsu, have been characterized. Some well-behaved sensors were irradiated, up to the fluence expected in CMS at the end of the HL-LHC, at the Rhode Island Nuclear Science Center (RINSC), and, in addition to the IV/CV behaviour, the annealing behaviour was also studied.
The results of this measurement campaign have contributed to the choice of the properties for the ongoing sensor pre-series, which will undergo large-scale testing before launching the full production of nearly 30000 sensors.
The identification of jets containing b-hadrons (b-tagging) plays an important role in many physics analyses in ATLAS. Several different machine learning algorithms have been deployed for b-tagging. These tagging algorithms are trained on Monte Carlo simulation samples, so their performance in data must be measured. The b-tagging efficiency ($\epsilon_b$) has been measured in data using $t\bar{t}$ events in the past; this work presents, for the first time, measurements in multijet events using data collected by the ATLAS detector at $\sqrt{s}=13$ TeV. This offers several key advantages over the $t\bar{t}$-based calibrations, including higher precision at low jet $p_T$ and the ability to measure $\epsilon_b$ at significantly higher jet $p_T$. Two approaches are applied, and for both a profile likelihood fit is performed to extract the number of b-jets in samples passing and failing a given b-tagging requirement. The b-jet yields are then used to determine $\epsilon_b$ in data and, from that, scale factors with respect to the efficiency measured in MC. The two approaches differ primarily in the discriminating variable used in the fit: at low jet $p_T$ the variable $p_T^{\rm rel}$ is used, while at high jet $p_T$ the signed impact parameter significance is used. Both calibrations provide measurements of the scale factors as a function of jet $p_T$.
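Schematically, the fitted yields determine the efficiency and the data-to-simulation scale factor as

$$\epsilon_b^{\rm data} = \frac{N_b^{\rm pass}}{N_b^{\rm pass} + N_b^{\rm fail}}\,, \qquad {\rm SF}_b = \frac{\epsilon_b^{\rm data}}{\epsilon_b^{\rm MC}}\,,$$

where $N_b^{\rm pass}$ ($N_b^{\rm fail}$) is the extracted number of b-jets passing (failing) the tagging requirement.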
This poster will present a measurement of the Higgs boson mass in the four-lepton decay channel using 139 $\rm fb^{-1}$ of proton-proton collision data at the Large Hadron Collider recorded by the ATLAS detector, corresponding to the full Run 2 dataset. For a Higgs boson with $m_H$ = 125 GeV, the expected total (stat) uncertainty is 181 MeV (178 MeV).
A measurement of the top quark pole mass in events where a top quark-antiquark pair ($\text{t}\overline{\text{t}}$) is produced in association with one additional jet is presented. This analysis is performed using proton-proton collision data at 13 TeV collected by the CMS experiment at the CERN LHC, corresponding to a total integrated luminosity of 36.3 fb$^{-1}$. Events with two opposite charge leptons in the final state (ee, $\mu\mu$, e$\mu$) are analyzed. Using multivariate analysis techniques based on machine learning, the reconstruction of the main observable and the event selection are optimized. The production cross section is measured as a function of the inverse of the invariant mass of the $\text{t}\overline{\text{t}}$+jet system at the parton level, using a maximum likelihood unfolding. The top quark pole mass is extracted using the theory predictions at next-to-leading order.
Particle production in the very forward region is described by phenomenological models in simulators of hadronic interactions. Owing to the lack of experimental data at high energies, the models differ in their predictions of particle-production cross sections. These interaction models are a mandatory tool for simulating air showers induced by cosmic rays, and their improvement is required for precise observation of high-energy cosmic rays.
The RHIC forward (RHICf) experiment took data with proton-proton collisions at $\sqrt{s}$ = 510 GeV in June 2017. In this presentation, we present the measurement of the differential cross section of photons in the pseudorapidity region above 6.1. The data are compared with the predictions of four hadronic interaction models to test these models. In addition, the Feynman scaling law was tested by comparison with the LHCf results at $\sqrt{s}$ = 7 and 13 TeV, and was confirmed within the uncertainties.
We present a prospect study of di-Higgs production in the $HH \to b\bar{b}\gamma\gamma$ decay channel with the ATLAS experiment at the High Luminosity LHC (HL-LHC). The results are obtained by extrapolating the Run 2 measurement, based on 139 fb$^{-1}$ of data at a center-of-mass energy of 13 TeV, to the conditions expected at the HL-LHC. While there is no sign of di-Higgs production in the current LHC dataset, the much higher luminosity (3000 fb$^{-1}$) and energy (14 TeV) of the HL-LHC will enable a much better measurement of this important process. We describe in detail the extrapolation procedure and assumptions, and consider multiple scenarios for the treatment of systematic uncertainties at the HL-LHC. Under the baseline systematic uncertainty scenario, the extrapolated precision on the Standard Model di-Higgs signal strength is 50%, corresponding to a significance of 2.2$\sigma$. The extrapolated 1$\sigma$ confidence interval from a measurement of $\kappa_\lambda$, the trilinear Higgs boson self-coupling modifier, is [0.3, 1.9].
Charge-dependent azimuthal anisotropy Fourier coefficients are measured with two- and three-particle correlations in pPb and PbPb collisions. The differences between positively and negatively charged particles for the second-order two-particle $(v_2\{2\})$ and three-particle $(v_2\{3\})$ coefficients in both pPb and PbPb collisions, and for the third-order two-particle coefficient $(v_3\{2\})$ in PbPb collisions, are presented. The observed results challenge the hypothesis that attributes charge-dependent azimuthal correlations in heavy-ion collisions to the chiral magnetic effect. In addition, the two-particle electric charge balance function is used, for the first time in CMS, as a probe of the charge creation mechanism in high-energy heavy-ion collisions. The balance function is constructed using like- and unlike-sign charged-particle pairs. The width of the balance function, both in relative pseudorapidity and in relative azimuthal angle, increases from central to peripheral collisions. Narrowing and widening of these widths indicate late and early hadronization, respectively.
Proton-ion collisions at the LHC and RHIC have yielded unexpected trends, notably in measurements of jet nuclear modification factors as a function of event activity (EA). Recent preliminary measurements from STAR in p+Au collisions at $\sqrt{s_{{\rm NN}}}=200$ GeV demonstrate inherent correlations between high-$Q^{2}$ parton scatterings and EA measured at backward (Au-going) rapidities or the underlying event (UE) at mid-rapidity. The STAR measurements disfavor jet quenching as an explanation for the suppression of the jet yield observed in high-EA collisions. This provides an opportunity to probe the early stages of proton-ion collisions. In this talk, we show correlations of backward-rapidity EA with mid-rapidity UE, as well as measurements of EA-dependent modifications to charged-hadron spectra and jets. In particular, we present measurements of the UE for various EA selections and discuss its kinematic dependence on jet pseudorapidity ($\eta$) and transverse momentum ($p_{{\rm T}}$) as a means of examining the correlation between initial hard scatterings and soft processes. We also investigate the EA dependence of high-$p_{{\rm T}}$ hadron and jet properties, including fully corrected ungroomed and SoftDrop groomed jet substructure observables, to study the impact of initial- and final-state effects.
Measurements of multiboson production at the LHC probe the electroweak gauge structure of the Standard Model for contributions from anomalous couplings. Of particular significance are processes involving quartic gauge boson couplings. In this talk we present recent ATLAS results on the measurement of the electroweak production of a $Z\gamma$ pair in association with two jets, where the Z decays to neutrinos, producing missing transverse energy. If available, the production of a same-sign pair of W bosons will also be presented. Moreover, several measurements with three gauge bosons in the final state will be discussed, such as the first observation of the production of three W bosons and the production of a Z boson and two photons. Finally, vector boson scattering measurements interpreted in a combined Effective Field Theory analysis of anomalous quartic gauge self-interactions will be shown.
Coalescence is one of the main models used to describe the formation of light (anti)nuclei. It is based on the hypothesis that two nucleons close in phase space can coalesce and form a nucleus. Coalescence has been successfully tested in hadron collisions at colliders, from small (pp collisions) to large systems (Au-Au collisions). However, (anti)nuclear production is not described by the event generators used in Monte Carlo simulations. A possible solution is the implementation of coalescence afterburners, which can describe nuclear production on an event-by-event basis. This would find application in astroparticle studies, allowing the description of (anti)nuclear fluxes in cosmic rays, which are crucial for indirect Dark Matter searches. In this presentation, the implementation of an event-by-event coalescence afterburner based on a state-of-the-art Wigner approach is discussed. The results shown here are obtained with the EPOS 3 event generator and compared to measurements performed in pp collisions at the LHC. In particular, the role of the emitting source in the coalescence process is discussed, comparing the results obtained using the direct measurement of the source size with the semi-classical traces implemented in EPOS 3.
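To make the afterburner idea concrete, the sketch below shows a minimal momentum-space coalescence loop (a simplified stand-in for the Wigner-function approach used in the actual study; the coalescence momentum $p_0$ and event content are illustrative):

```python
import numpy as np

P0 = 0.2  # GeV/c, illustrative coalescence momentum (not the Wigner-based criterion)

def coalesce(protons, neutrons):
    """Event-by-event afterburner: pair each proton with the first unused
    neutron closer than P0 in momentum space and emit a deuteron momentum."""
    deuterons, used = [], set()
    for p in protons:
        for i, n in enumerate(neutrons):
            if i not in used and np.linalg.norm(p - n) < P0:
                deuterons.append(p + n)
                used.add(i)
                break
    return deuterons

rng = np.random.default_rng(3)
protons = rng.normal(0, 0.4, (5, 3))    # toy event: (px, py, pz) in GeV/c
neutrons = rng.normal(0, 0.4, (4, 3))
print(len(coalesce(protons, neutrons)), "deuteron(s) formed in this toy event")
```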
The LHC high-luminosity upgrade will result in about 200 proton-proton collisions in a typical bunch crossing. To cope with the expected unprecedented occupancy, bandwidth, and radiation damage, the ATLAS Inner Detector will be replaced with an all-silicon system, the Inner Tracker (ITk). The innermost part of the ITk will be equipped with pixel modules, consisting of pixel sensors and novel ASICs implemented in 65 nm CMOS technology. Several types of modules will be used in the ITk pixel detector. Prototype modules assembled with RD53A chips are being built to evaluate their production rate as well as their thermal and electrical performance before and after irradiation. In this contribution the assembly process and tooling are described, and first results on the module performance are presented.
Jets are collimated sprays of hadrons and serve as an experimental tool for studying the dynamics of quarks and gluons. In particular, differential measurements of jet substructure enable a systematic exploration of the parton shower evolution. The SoftDrop grooming technique utilizes the angular-ordered Cambridge/Aachen reclustering tree and provides a correspondence between the experimental observables, such as the shared momentum fraction $(z_{\rm{g}})$ and the groomed jet radius or split opening angle $(R_{\rm{g}})$, and the QCD splitting functions in vacuum. We present fully corrected correlations between $z_{\rm{g}}$ and $R_{\rm{g}}$ at the first split for jets of varying momenta and radii in $pp$ collisions at $\sqrt{s} = 200$ GeV. To study the evolution along the jet shower, we also present the splitting observables at the first, second and third splits for various jet and initiator prong momenta. As these novel measurements are presented in three dimensions, we outline the correction procedure so that it can serve as a template for future multi-differential measurements across all experiments.
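For reference, SoftDrop declusters the angular-ordered Cambridge/Aachen tree and keeps the first split that satisfies (standard notation, with $R$ the jet radius and $z_{\rm cut}$, $\beta$ the grooming parameters)

$$z_{\rm g} = \frac{\min(p_{\rm T,1},\, p_{\rm T,2})}{p_{\rm T,1} + p_{\rm T,2}} > z_{\rm cut} \left(\frac{R_{\rm g}}{R}\right)^{\beta}\,,$$

where $R_{\rm g}$ is the angular separation of the two prongs at the accepted split.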
The Multicritical-Point Principle (MPP) provides a natural scenario to explain the large hierarchy between the Planck scale and the electroweak scale via the Coleman-Weinberg mechanism. We discuss a minimal model realizing such scale generation, in which two real scalar fields are added to the Standard Model and one of them can be a dark matter candidate. We show that a successful scenario, explaining the dark matter relic abundance while satisfying direct-detection constraints, is obtained in well-predicted regions of the parameter space. We also demonstrate that a first-order phase transition can be realized in this scenario at the TeV scale, predicting stochastic gravitational waves that could be detected by future space-based experiments, e.g., DECIGO and/or BBO.
This talk is based on JHEP 01 (2021) 087, arXiv:2008.08700 [hep-ph], and arXiv:2202.07784 [hep-ph].
Measurements of jet fragmentation and jet properties in pp collisions provide a test of perturbative quantum chromodynamics (pQCD) and form a baseline for similar measurements in heavy ion (A-A) collisions. In addition, jet measurements in p-A collisions are sensitive to cold nuclear matter effects. Recent studies of high-multiplicity final states of small collision systems exhibit signatures of collective effects that could be associated with hot and dense, color-deconfined QCD matter, which is known to be formed in collisions of heavier nuclei. The modification of the jet fragmentation pattern and jet properties is expected in the presence of such QCD matter. Measurements of jet fragmentation patterns and other jet properties in p-A collisions are needed in order to establish whether deconfined QCD matter is indeed generated in such small systems. In this contribution we report recent ALICE measurements of charged-particle jet properties, including mean charged-constituent multiplicity and fragmentation distribution for leading jets, in minimum bias p-Pb collisions at $\sqrt{s}$ = 5.02 TeV and minimum bias pp collisions at $\sqrt{s}$ = 13 TeV. In addition, the multiplicity dependence of these jet properties in pp collisions at $\sqrt{s}=13~\rm{TeV}$ will also be presented. Results will be compared with theoretical model predictions.
Hadronic resonances are effective tools for studying the hadronic phase in ultrarelativistic heavy-ion collisions. Their lifetimes are comparable to that of the hadronic phase, so resonances are sensitive to hadronic-phase effects such as rescattering and regeneration, which might affect the resonance yields and the shape of the transverse momentum spectra. The $\Lambda(1520)$ has a lifetime of around 13 fm/$\it{c}$, which lies between the lifetimes of the $K^*$ and $\phi$ resonances. The ratios of resonance to stable-particle yields can be used to study the properties of the hadronic phase. Recently, ALICE observed a suppression of the $\Lambda(1520)/\Lambda$ ratio in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV as a function of centrality. It is therefore interesting to perform a multiplicity-dependent study of the $\Lambda(1520)/\Lambda$ ratio in pp collisions, since this can serve as a baseline for heavy-ion collisions.
In this contribution, we present new results on the measurement of the baryonic resonance $\Lambda(1520)$ as a function of the charged-particle multiplicity in pp collisions at $\sqrt{s}$ = 5.02 and 13 TeV. The transverse momentum spectrum, the integrated yield $(\rm d \it N/ \rm d \it y )$, the mean transverse momentum $(\langle p_{\rm{T}}\rangle)$ and the $ \Lambda(1520)/\Lambda$ yield ratio will be presented as a function of the charged-particle multiplicity.
China Jinping Underground Laboratory (CJPL) is ideal for carrying out MeV-scale neutrino experiments and searches for neutrinoless double beta decay. To understand the cosmogenic background, we analyzed an 820.28-day dataset from a one-ton prototype detector and measured the cosmic-ray muon flux to be $(3.61 \pm 0.19_{\rm stat.} \pm 0.10_{\rm sys.}) \times 10^{-10} {\rm cm}^{-2}{\rm s}^{-1}$. From the detected cosmic-ray muon events, we also measured the muon-induced neutron yield in liquid scintillator, which is $ (3.44 \pm 1.86_{\rm stat.} \pm 0.76_{\rm sys.}) \times 10^{-4} \mu^{-1} {\rm g}^{-1}{\rm cm}^2 $ at an average muon energy of 340 GeV. In addition, we performed a global survey of muon fluxes at different laboratory locations, considering both laboratories situated under mountains and those down mine shafts. For the same vertical overburden, the flux in the former is generally $(4 \pm 2)$ times that in the latter, due to leakage through the mountain. Based on Jinping Mountain's terrain and the measurement in CJPL-I, we predicted the energy and angular distributions and the fluxes of cosmic-ray muons for the four halls at CJPL-II. We found the fluxes of Hall C and Hall D to be about $2.3 \times 10^{-10} {\rm cm}^{-2}{\rm s}^{-1}$ and $2.5 \times 10^{-10} {\rm cm}^{-2}{\rm s}^{-1}$, respectively.
The long-standing problem of neutrino mass and mixing can be connected to cosmological phenomena, such as leptogenesis and the existence of dark matter (DM). In the extension of the type I seesaw model with two right-handed (RH) neutrinos, the seesaw Yukawa coupling can drive DM production, even in competition with gravitational effects and constraints from leptogenesis. However, DM production driven by the seesaw Yukawa coupling is not compatible with the testability of the traditional type I seesaw model, which motivates us to seek a variation. By considering two Higgs doublets, a new type Ib seesaw model is proposed, which can explain neutrino mass, dark matter and leptogenesis simultaneously while remaining testable. Moreover, the type Ib seesaw model allows a different approach to dark matter production and stability through a $U(1)'$ extension.
Axions and axion-like particles (ALPs) are among the most popular candidates for dark matter [1]. Axions are also considered [2] as new physics contributions to the muon $g-2$. Following the existing interest in ALPs, we consider the interaction between neutrinos and hypothetical axion-like particles and derive, for the first time, the probability of neutrino oscillations accounting for their interactions mediated by ALPs. The corresponding effective mixing angle is derived for the cases of Dirac and Majorana neutrinos.
[1] C. A. J. O'Hare, G. Pierobon, J. Redondo, Y. Y. Y. Wong, Simulations of axion-like particles in the post-inflationary scenario, arXiv:2112.05117.
[2] M. A. Buen-Abad, J. Fan, M. Reece, C. Sun, Challenges for an axion explanation of the muon $g-2$ measurement, J. High Energ. Phys. 2021, 101 (2021).
We study the capability of INO-ICAL to determine the atmospheric neutrino oscillation parameters. We do not use any generator-level information, but instead use only the output of the GEANT4 simulation of atmospheric neutrino events in the detector. In a similar previous study by other authors, only the momentum and direction of the longest track were used. In this study, we consider a third variable based on additional hits, which arise due to hadrons in the event. We show that the inclusion of the third variable leads to a 30% reduction in the uncertainty of $|\Delta_{31}|$ for a five-year run of ICAL. We find that doubling the exposure time leads to a 30% reduction in the uncertainties of both $|\Delta_{31}|$ and $\sin^2\theta_{23}$.
It is believed that running (for instance, COHERENT) and forthcoming terrestrial neutrino experiments will be sensitive to the neutrino charge radius [1], one of the fundamental neutrino electromagnetic characteristics [2], predicted [3] to be non-zero even in the Standard Model. In this work we continue our studies [4] of neutrino oscillations in environments with large electric currents, accounting for the diagonal and non-diagonal neutrino charge radii and anapole moments. We derive the neutrino evolution equation in moving matter with electric currents and consider spin and spin-flavor neutrino oscillations. We also study conditions for possible resonances of neutrino oscillations engendered by the neutrino charge radii and compare these conditions with real astrophysical environments.
[1] M. Cadeddu, F. Dordei, C. Giunti, K. Kouzakov, E. Picciau, A. Studenikin, Phys. Rev. D 100 (2019) 073014.
[2] C. Giunti, A. Studenikin, Rev. Mod. Phys. 87 (2015) 531.
[3] J. Bernabeu, L. G. Cabral-Rosetti, J. Papavassiliou, J. Vidal, Phys. Rev. D 62 (2000) 113012.
[4] K. Kouzakov, F. Lazarev, V. Shakhov, K. Stankevich, A. Studenikin, PoS ICHEP2020 (2021) 217.
We utilize $A_4$ modular symmetry in a supersymmetric context to explore the type-III seesaw mechanism. Our model includes an extra local $U(1)_{B-L}$ symmetry, which helps us avoid some undesirable terms in the superpotential. The type-III seesaw involves fermion triplet superfields $\Sigma$, and we have also included a weighton singlet field ($\rho$), which acquires a VEV $(v_{\rho})$ after $U(1)_{B-L}$ symmetry breaking. A $Z^\prime$ therefore comes into the picture, contributing to $(g-2)_\mu$. A crucial role is played by the modular symmetry, which prevents the introduction of excess fields. The Yukawa couplings are modular forms expressed in terms of the Dedekind eta function $\eta(\tau)$, with $\tau$ a complex variable in the upper half plane. Matching the neutrino oscillation data within its 3$\sigma$ range tests the validity of our model. Moreover, we briefly discuss leptogenesis and the muon anomalous magnetic moment.
The proposed work is an extension of the Standard Model in which three right-handed heavy neutrinos $(N_{iR})$ and three neutral fermions $(S_{iL})$ $(i=1,2,3)$ are introduced to realize the inverse seesaw mechanism. Two extra scalar singlets are introduced to give tiny masses to the active neutrinos. The quantum numbers of the particles are chosen such that the model is anomaly free under the two local gauge symmetries $U(1)_{B-L}$ and $U(1)_{L_e-L_\mu}$, allowing a successful explanation of neutrino phenomenology. We also discuss the neutrinoless double beta decay effective mass $(\langle m_{ee} \rangle)$, which lies well below the current experimental bounds from KamLAND-Zen, CUORE, etc. The non-unitary nature of the active neutrino mixing matrix is also commented on. In addition, we discuss the electron and muon anomalous magnetic moments with the help of two MeV-range gauge bosons $(Z_1$ and $Z_2)$ through neutral-current interactions.
The superweak force is a U(1) extension of the standard model which, in addition to the accompanying neutral gauge boson, adds three massive sterile Majorana neutrinos and a complex singlet scalar to the particle zoo. It aims to explain dark matter, the accelerating expansion of the universe, neutrino mass generation, vacuum metastability, cosmic inflation and the baryonic asymmetry of the universe. In this talk I will discuss the neutrino phenomenology of the model. The model exhibits suppressed nonstandard neutrino interactions and the potential for discovering the disappearance of active neutrino flavours to sterile flavours in future experiments such as FASER, NA62, SHiP and MATHUSLA. In addition, I will discuss the sub-leading corrections to neutrino masses arising from one-loop contributions to the light neutrino self-energies.
The rapid development of neutrino astronomy, expressed among other things in the emergence of new neutrino mega-projects capable of efficiently registering astrophysical neutrino fluxes, requires a detailed knowledge of neutrino evolution inside the sources (type II supernovae, gamma-ray bursts). The evolution can be influenced by many factors, each of which should be accounted for by the relevant theory. In this work, we develop the theory of neutrino propagation in moving and/or polarized matter by introducing, for the first time, the exact spin integral of motion. This enables us to obtain the neutrino dispersion under these conditions and to discuss the features of the neutrino motion. Our approach opens up the possibility of a consistent classification of neutrino states in moving and/or polarized media and, as a consequence, a systematic description of the related physical phenomena (e.g., neutrino oscillations, neutrino electromagnetic radiation).
Based on: A. Grigoriev, A. Studenikin, A. Ternov, Neutrino spin operator and dispersion in moving matter, e-Print: arXiv:2111.10449 [hep-ph], accepted for publication in Eur. Phys. J. C.
We discuss the contribution of right-handed neutrinos (RHNs) to neutrinoless double beta decay within the minimal type-I seesaw model, by virtue of the intrinsic seesaw relation of neutrino mass and mixing parameters and the mass dependence of the nuclear matrix elements from different nuclear models. In the viable parameter space, we find possibilities of both enhancement and cancellation of the RHN contribution to the effective neutrino mass. Bounds on the RHN parameter space can be obtained from the latest neutrinoless double beta decay experiments and can be compared with other experimental probes.
This work is based on the preprint arXiv:2112.12779 and a new work in preparation.
Particulate dark matter captured by a population of neutron stars distributed around the galactic center and annihilating through long-lived mediators can give rise to an observable neutrino flux. We examine the prospects of an idealised gigaton detector like IceCube/KM3NeT for probing such scenarios. Within this framework, we report an improved reach in the spin-dependent and spin-independent dark matter-nucleon cross sections below the current limits for dark matter masses in the TeV-PeV range.
The T2K experiment is a long-baseline accelerator neutrino experiment in Japan that measures the leptonic CP-violating phase $\delta_{CP}$ by studying $\nu_e$ appearance from the $\nu_\mu$ beam at T2K's far detector, Super-Kamiokande (SK). The near detector (ND280) stands 280 m, and SK 295 km, away from the beam production target. SK is a 50 kton water-Cherenkov detector that observes Cherenkov rings from charged particles produced in neutrino interactions with water.
Both single- and multi-ring $\nu_\mu$ samples at SK are used in T2K's latest oscillation analyses, while for $\nu_e$ only single-ring samples are used. Charged-current single-$\pi^+$ events form the second most dominant signal in $\nu_e$ appearance studies, of which events with a $\pi^+$ below the Cherenkov threshold are used in the latest analysis (one e-like ring and a decay-electron signature). The addition of a sample with the $\pi^+$ above the Cherenkov threshold, consisting of an e-like ring and a $\pi^+$-like ring, can increase the statistics of $\nu_e$ events and thus the sensitivity to $\delta_{CP}$. In this poster, I will discuss the cut-based selection of these two-ring $\nu_e$ CC$1\pi^+$ events, the backgrounds that impact the selection, and the cut optimization.
The ability to identify jets containing b-hadrons (b-jets) is of essential importance for the scientific programme of the ATLAS experiment, underpinning the observation of the Higgs boson decay into a pair of bottom quarks, Standard Model precision measurements, and searches for new phenomena. The ATLAS flavour tagging algorithms rely on powerful multivariate and deep machine learning techniques. These algorithms exploit tracking information and secondary vertex reconstruction in jets to establish the jet's flavour. Both specifically designed observables sensitive to the distinct properties of b-jets and neural networks operating directly on the charged-particle tracks within the jet are used. In this poster, we review the state-of-the-art in flavour tagging algorithms developed by the ATLAS collaboration and of their expected performance using simulated data.
The associated production of a single-top with opposite-sign same-flavor (OSSF) di-leptons, $pp \to t \ell^+ \ell^-$ and $ pp \to t \ell^+ \ell^- + j$ ($j=$light jet), can lead to striking tri-lepton $pp \to \ell^\prime \ell^+ \ell^- + X$ and di-lepton $pp \to \ell^+ \ell^- + j_b + X$ ($j_b=b$-jet) events at the LHC (after the top decays). Although these rather generic multi-lepton signals are flavor-blind, they can be generated by new 4-Fermi flavor changing (FC) $u_i t \ell \ell$ scalar, vector and tensor interactions ($u_i \in u,c$), which I will consider in this talk; the FC $u_i t \ell \ell$ 4-Fermi terms are matched to the SMEFT operators and also to different types of FC underlying heavy physics. The main backgrounds to these di- and tri-lepton signals, from $t \bar t$, $Z$+jets and $VV$ ($V=W,Z$) production, can be essentially eliminated with a sufficiently high invariant mass selection on the OSSF di-leptons, $m_{\ell^+ \ell^-}^{\tt min}(OSSF) > 1$ TeV and the use of $b$-tagging as an additional selection in the di-lepton final state. I will discuss the sensitivity of the LHC to the scale of the scalar, tensor and vector $u t \mu \mu$ interactions, based on the current $\sim 140$ fb$^{-1}$ accumulated luminosity and at a future HL-LHC. I will furthermore discuss the possible implications of this class of FC 4-Fermi effective interactions on lepton non-universality tests at the LHC.
We present a new simulation model of the channeling of electrons and positrons implemented in Geant4. Geant4 [1] is a toolkit for the simulation of the passage of particles through matter. Channeling [2] is the penetration of charged particles through a monocrystal parallel to its atomic axes or planes. Coulomb scattering, introduced in the model and based on the CRYSTALRAD [3] code, makes it possible to simulate the complicated trajectories of channeling electrons and positrons.
We present a Geant4 simulation example of an experimental setup including channeling physics inside the crystal volume and standard physics outside it. We validate the model against experimental data and CRYSTALRAD simulations. We discuss the following possible applications of our channeling model: beam steering, crystal-based extraction/collimation of leptons and hadrons in an accelerator, fixed-target experiments on magnetic and electric dipole moment measurements, X-ray and gamma radiation sources for radiotherapy and nuclear physics, and a positron source for lepton and muon colliders.
[1] J. Allison et al., NIM A 835, 186-225 (2016).
[2] J. Lindhard, Mat. Fys. Medd. Dan. Vid. Selsk. 34 (14), 64 (1965).
[3] A. I. Sytov, V. V. Tikhomirov, and L. Bandiera, PRAB 22, 064601 (2019).
A. Sytov is supported by the European Commission (TRILLION, GA 101032975). We acknowledge partial support of the INFN through the MC-INFN project and the CINECA award under the ISCRA initiative.
In this work, we study the new physics effects arising from the presence of an anomalous Wtb vertex in the semileptonic decay modes of the top quark at the Large Hadron Collider. Estimates of the sensitivities to the aforementioned interaction at 5$\sigma$ CL, in the context of top-quark decay-width and cross-section measurements, will be discussed for the existing 13 TeV LHC data and for projections to the proposed LHC runs at 14 TeV, 27 TeV and 100 TeV. We also incorporate CP-violating effects in such interactions by constructing CP-violating asymmetries.
The flavor-changing neutral b decays with di-leptons or di-neutrinos in the final state provide a great platform to explore physics beyond the standard model (SM). The recent measurements reported by LHCb of $R_K$, $R_{K_S}$, $R_{K*+}$, $\mathcal{B}(B_s\to \phi \mu^{+}\mu^{-})$ and $\mathcal{B}(B_s\to \mu^{+}\mu^{-})$, proceeding via $b \to s \ell^{+}\ell^{-}$ quark-level transitions, show a significant deviation from the standard model expectations. Very recently, the Belle II collaboration reported a more precise upper bound on the branching fraction, $\mathcal{B}(B\to K^+\nu\bar{\nu}) < 4.1\times 10^{-5}$, employing a new inclusive tagging approach. The $b\to s \ell^{+}\ell^{-}$ and $b\to s\nu\bar{\nu}$ decay channels are related in the SM as well as beyond it: in beyond-SM physics they are related via $SU(2)_L$ gauge symmetry and can be studied simultaneously in a model-independent standard model effective field theory (SMEFT) approach. Moreover, $b\to s\nu\bar{\nu}$ decay channels are theoretically cleaner than the corresponding $b \to s \ell^{+}\ell^{-}$ decays due to the absence of non-factorizable corrections and photonic penguin contributions. In this context, we perform a combined analysis of the $\Lambda_b\to \Lambda^{(*)}\mu^{+}\mu^{-}$ and $\Lambda_b\to \Lambda^{(*)} \nu\bar{\nu}$ decay modes and study the implications of the $b \to s \ell^{+}\ell^{-}$ anomalies in a model-independent SMEFT approach. We give predictions for several physical observables within the SM and within several new physics scenarios.
The main aim of this paper is to present new sets of non-perturbative fragmentation functions (FFs) for $D^0$ and $D^+$ mesons at next-to-leading order (NLO) and, for the first time, at next-to-next-to-leading order (NNLO) in the $\overline{\mathrm{MS}}$ factorization scheme with five massless quark flavors. This new determination of FFs is based on a QCD fit to the OPAL experimental data for hadron production in electron-positron single-inclusive annihilation (SIA). We discuss in detail the novel aspects of the methodology used in our analysis and the validity of the obtained FFs by comparing with previous works in the literature, which have been carried out up to NLO accuracy. We also incorporate the effect of charmed-meson mass corrections into our QCD analysis and discuss the improvements upon inclusion of these effects. The uncertainties in the extracted FFs, as well as in the corresponding observables, are estimated using the "Hessian" approach. As a typical application, we use our new FFs to make theoretical predictions for the energy distributions of charmed mesons inclusively produced through the decay of unpolarized top quarks, to be measured at the CERN LHC. Based on this analysis, suggestions are made for possible future studies on the current topic, considering theory improvements and other available experimental observables.
If massive neutrinos are Majorana particles, then the lepton number should be violated in nature and neutrino-antineutrino oscillations $\nu^{}_\alpha \leftrightarrow \overline{\nu}^{}_\beta$ (for $\alpha, \beta = e, \mu, \tau$) will definitely take place. In the present paper, we study the properties of CP violation in neutrino-antineutrino oscillations with the non-unitary leptonic flavor mixing matrix, which is actually a natural prediction in the canonical seesaw model due to the mixing between light and heavy Majorana neutrinos. The oscillation probabilities $P(\nu^{}_\alpha \to \overline{\nu}^{}_\beta)$ and $P(\overline{\nu}^{}_\alpha \to \nu^{}_\beta)$ are derived, and the CP asymmetries ${\cal A}^{}_{\alpha \beta} \equiv [P(\nu^{}_\alpha \to \overline{\nu}^{}_\beta) - P(\overline{\nu}^{}_\alpha \to \nu^{}_\beta)]/[P(\nu^{}_\alpha \to \overline{\nu}^{}_\beta) + P(\overline{\nu}^{}_\alpha \to \nu^{}_\beta)]$ are also calculated. Taking into account current experimental bounds on the leptonic unitarity violation, we show that the CP asymmetries induced by the non-unitary mixing parameters can significantly deviate from those in the limit of a unitary leptonic flavor mixing.
Collider searches for dark matter (DM) have so far mostly focused on scenarios where DM particles are produced in association with heavy standard model (SM) particles or jets. However, no deviations from SM predictions have been observed. Several recent phenomenology papers have proposed models that explore the possibility of accessing a strongly coupled dark sector, giving rise to unusual and unexplored collider topologies. One such signature is termed a semi-visible jet (SVJ), where the parton evolution includes dark-sector emissions, resulting in jets interspersed with DM particles. Owing to the unusual MET-along-the-jet event topology, this is still a largely unexplored domain at the LHC. This talk presents the first results from a search for SVJs in the t-channel production mode in pp collisions for an integrated luminosity of 139 fb$^{-1}$ at a centre-of-mass energy of 13 TeV at the LHC, based on data collected by the ATLAS detector during 2015-2018.
No analysis in ATLAS or CMS has so far searched for FCNC decays of top quarks into a new scalar (X) in a broad mass range, probing branching ratios below $10^{-3}$. In the case of the Higgs boson, the branching ratios for $t\rightarrow H+u/c$ are predicted within the SM to be of order $O(10^{-17})/O(10^{-15})$. Several beyond-SM theoretical models predict new particles and enhanced branching ratios. In particular, simple SM extensions involve the Froggatt-Nielsen mechanism, which introduces a scalar field with flavour charge, the so-called flavon, featuring flavour-violating interactions. Using the full Run 2 data, ATLAS has performed a search for a scalar with a mass in the range between 20 and 160 GeV decaying into a pair of bottom quarks. In order to distinguish signal from background, a feed-forward neural network that uses kinematic variables together with various invariant masses of pairs of $b$-jets is used in the fits for the various mass hypotheses. The method, strategy and preliminary results for both FCNC decays $t\rightarrow cX$ and $t\rightarrow uX$ will be presented.
The production of light nuclei and antinuclei in particle collisions can be described as the coalescence of final-state nucleons close in phase space. In heavy-ion collisions, it is usually assumed that the formation probability is controlled by the size of the interaction region, while nucleon momentum correlations are either neglected or treated as a collective effect. Interestingly, recent experimental data on nucleus and hadron production in $pp$ collisions at the LHC show evidence for such collective behaviour. This is in strong contradiction to the standard assumption that the coalescence probability in small interacting systems, such as $e^+e^-$ or $pp$ collisions, is controlled by their momentum distribution. In this talk, however, we argue that such data are naturally explained using QCD-inspired event generators if both nucleon momentum correlations and the size of the nucleon emission volume are considered. In order to treat both effects simultaneously, we employ a per-event coalescence model based on the Wigner-function representation of the nucleus state. The model predicts the size and $p_T$ dependence of the source volume measured at the LHC and therefore has no free parameters. Finally, we comment on the validity of the underlying assumptions of the femtoscopy framework in small interacting systems and its relation to nuclear coalescence.
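To illustrate the Wigner-function approach (a minimal sketch with a commonly used Gaussian ansatz of size parameter $d$, in units with $\hbar = 1$; not necessarily the exact ansatz of this work), the coalescence weight of a nucleon pair with relative position $\mathbf{r}$ and relative momentum $\mathbf{q}$ at freeze-out reads

$\mathcal{W}(\mathbf{r}, \mathbf{q}) = 8\,\exp\!\left(-\mathbf{r}^2/d^2 - \mathbf{q}^2 d^2\right),$

so that evaluating $\mathcal{W}$ pair by pair in each generated event makes the predicted yield sensitive to both the momentum correlations and the source size at the same time.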
Based on a sample of 448 million $\psi(2S)$ events, several decay channels of charmonium states have been searched for at BESIII recently. The decays $\chi_{c1} \to \Xi^0 \bar{\Xi}^0$, $\chi_{c2} \to \Xi^0 \bar{\Xi}^0$, $\psi(2S) \to \Xi(1530)^0 \bar{\Xi}(1530)^0$, $\psi(2S) \to \Xi(1530)^0 \bar{\Xi}^0$, and $\psi(2S) \to \omega K_S K_S$ have been observed for the first time. Using data samples above 4.0 GeV, the new decay modes $h_c \to \pi^0 J/\psi$ and $\psi_2(3823) \to \gamma \chi_{c2}$, $\pi \pi J/\psi$, $\eta J/\psi$, $\pi^0 J/\psi$, and $\gamma \chi_{c0}$ have been searched for.
Open heavy-flavor production is a crucial probe for understanding QCD matter under the extreme conditions created in heavy-ion collisions. Heavy-flavor quarks are produced predominantly in hard partonic scatterings at the very early stage of heavy-ion collisions and experience the whole evolution of the hot and dense medium, the so-called Quark Gluon Plasma (QGP). Open heavy-flavor production thus provides access to studying charm and beauty quark interactions with the medium. Through measurements of open heavy-flavor production in A+A and p+p collisions, the effects due to the QGP can be disentangled from those occurring in hadronic interactions without QGP formation, based on the expectation that these effects are less prominent or absent in p+p collisions.
In this talk, we will present results on open heavy-flavor production, for example nuclear modification factors of identified charmed hadrons and of electrons from beauty and charm hadron decays, measured with the STAR experiment at the Relativistic Heavy Ion Collider. These results will be compared with theoretical calculations, and the physics implications will be discussed.
Leptonic CP violation is one of the most important topics in neutrino physics. CP violation in the neutrino sector is also strongly related to the nature of the neutrino: whether it is a Dirac or a Majorana particle. In this contribution, CP-violating effects in Majorana neutrino oscillations in supernova media are studied. We show that resonances in neutrino-antineutrino oscillations induced by the strong magnetic fields of astrophysical objects appear in the case of nonzero CP-violating phases. Our findings suggest a potential astrophysical setup for studying the nature of neutrino masses and leptonic CP violation and may be important for future neutrino experiments such as JUNO and Hyper-Kamiokande.
Based on: A. Popov, A. Studenikin, "Manifestations of nonzero Majorana CP-violating phases in oscillations of supernova neutrinos", Phys. Rev. D 103 (2021) 11, 115027.
We present two modules as part of the Czech Particle Physics Project (CPPP). These are intended as learning tools in masterclasses aimed at high-school students (aged 15 to 18). The first module is dedicated to the detection of an Axion-Like-Particle (ALP) using the ATLAS Forward Proton (AFP) detector. The second module focuses on the reconstruction of the Higgs boson mass using the Higgs boson golden channel with four leptons in the final state. The modules can be accessed at the following link: http://cern.ch/cppp
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment with a 20 kton liquid scintillator detector. The primary goal of JUNO is the determination of the neutrino mass ordering by measuring reactor anti-neutrinos. JUNO is equipped with 20,012 20-inch PMTs, including 15,012 MCP PMTs and 5,000 dynode PMTs, the largest 20-inch PMT sample in the world to date. To achieve the unprecedented energy resolution of 3% @ 1 MeV, the 20-inch PMTs need to have a high photon detection efficiency (PDE, >27%) for the photons from the liquid scintillator, a high optical coverage (>75%) on the stainless-steel truss of 40 m in diameter, and high reliability (<0.5% loss over at least 6 years) in the 44 m deep water pool. The instrumentation of these PMTs for JUNO, including performance testing, waterproof potting and implosion protection, started several years ago, and most of the work is now done; the test results show that the average PDE of the MCP PMTs reaches 30%, and the average PDE of all 20,012 PMTs reaches 29.6%. In this poster, a summary of the overall status and results of PMT testing, potting and protection will be presented, including the preparations for the PMT installation at the JUNO underground hall.
We present an overview of the use of IR-improvement of unintegrable singularities in the infrared regime via amplitude-based resummation in QED $\times$ QCD $\equiv$ SU(2)$_L$ $\times$ U(1) $\times$ SU(3)$^c$. We work in the context of precision LHC/FCC physics. While illustrating such IR-improvement in specific examples, we discuss new results and new issues.
We start with an introduction to the theory of neutrino electromagnetic properties [1-5]. Then we consider experimental constraints on the neutrino magnetic moment $\mu_\nu$, electric moment $d_\nu$, millicharge $q_\nu$, charge radius $\langle r_\nu^2 \rangle$ and anapole moment $a_\nu$ from terrestrial experiments (the bounds from the MUNU, TEXONO and GEMMA experiments, as well as from Super-Kamiokande and Borexino). Special attention is given to the severe constraints on $\mu_\nu$, $q_\nu$ and $\langle r_\nu^2 \rangle$ [6-10]. The best reactor [6] and solar [7] neutrino and astrophysical [11,12] bounds on $\mu_\nu$, as well as the bounds on $q_\nu$ from reactor neutrinos [8], are included in the recent issues of the Review of Particle Physics (PRD). The best astrophysical bound on $q_\nu$ [13], the most severe astrophysical bound on $\mu_\nu$ [14] and the new results on $\mu_\nu$ and $q_\nu$ from the CONUS experiment [15] are reviewed.
In the recent studies [16] it is shown that the results of the XENON1T collaboration [17] at few-keV electronic recoils could be due to the scattering of solar neutrinos endowed with finite Majorana transition moments $\mu_\nu$ of strengths lying within the limits set by the Borexino experiment with solar neutrinos [7]. A comprehensive analysis of the existing and new extended mechanisms for enhancing the neutrino transition moments $\mu_\nu$ to the level appropriate for the interpretation of the XENON1T data, while keeping neutrino masses within acceptable values, is provided in [18].
Considering neutrinos from all known sources, including data from XENON1T and Borexino, the strongest up-to-date exclusion limits on the active-to-sterile neutrino transition moments $\mu_\nu$ are derived in [19].
A comprehensive analysis of constraints on the neutrino millicharge $q_\nu$ from elastic neutrino-electron scattering experiments, and future prospects involving coherent elastic neutrino-nucleus scattering, is presented in [20].
We present results of the recent detailed study [21] of the electromagnetic interactions of massive neutrinos in the theoretical formulation of low-energy elastic neutrino-electron scattering. Using the results of [21] and the COHERENT data [9], new bounds on the neutrino charge radii are obtained [10]. The obtained constraints on the nondiagonal neutrino charge radii [10] were included by the Editors of Phys. Rev. D in the "Highlights of 2018" and by the PDG in the Review of Particle Physics.
The main manifestations of neutrino electromagnetic interactions are discussed, namely: 1) the radiative decay in vacuum, in matter and in a magnetic field, 2) the neutrino Cherenkov radiation, 3) the plasmon decay to a neutrino-antineutrino pair, 4) the neutrino spin light in matter, and 5) the neutrino spin and spin-flavour precession. Phenomenological consequences of neutrino electromagnetic interactions (including the spin light of the neutrino [22]) in astrophysical environments are also reviewed.
We also discuss: 1) new effects in neutrino spin, spin-flavour and flavour oscillations in transversal matter currents [23, 24] and magnetic fields [25, 26], 2) our newly developed approach to the problem of neutrino quantum decoherence [27], and 3) our recent proposal [28] for an experimental setup to observe coherent elastic neutrino-atom scattering (CE$\nu$AS) using antineutrinos from tritium decay and a liquid-helium target (the predicted sensitivity to $\mu_\nu$ is $7\times10^{-13}\mu_B$).
In [29] we investigate the effects of non-zero Dirac and Majorana CP-violating phases on neutrino-antineutrino oscillations $\nu_e \leftrightarrow \bar{\nu}_e$, $\nu_e \leftrightarrow \bar{\nu}_\mu$ and $\nu_e \leftrightarrow \bar{\nu}_\tau$ in the magnetic fields of astrophysical environments (the results are of interest for the future experiments JUNO, DUNE and Hyper-Kamiokande).
In the talk we also trace, following the latest studies [30], how searches for neutrino magnetic and electric moments in low-energy neutrino scattering experiments are sensitive to the fundamental parameters of the Hamiltonian.
The best world experimental bounds on neutrino electromagnetic properties are confronted with the predictions of theories beyond the Standard Model.
[1] A. Studenikin, Neutrino magnetic moment: A window to new physics, Nucl.Phys.B Proc.Suppl, 188 (2009) 220.
[2] C. Giunti and A. Studenikin, Neutrino electromagnetic interactions: A window to new physics, Rev. Mod. Phys. 87 (2015) 531-591.
[3] C. Giunti, K. Kouzakov, Y. F. Li, A. Lokhov, A. Studenikin, S. Zhou, Annalen Phys. 528 (2016) 198.
[4] A. Studenikin, PoS EPS-HEP2017 (2017) 137.
[5] A. Studenikin, PoS ICHEP2020 (2021)180.
[6] A. Beda, V. Brudanin, V. Egorov et al., Adv. High Energy Phys. 2012 (2012) 350150.
[7] M. Agostini et al (Borexino coll.), Phys. Rev. D 96 (2017) 091103.
[8] A. Studenikin, Europhys. Lett. 107 (2014) 21001.
[9] D. Papoulias, T. Kosmas, Phys. Rev. D 97 (2018) 033003.
[10] M. Cadeddu, C. Giunti, K. Kouzakov, Y.F. Li, A. Studenikin, Y.Y. Zhang, Phys. Rev. D 98 (2018) 113010.
[11] N. Viaux, M. Catelan, P. B. Stetson, G. G. Raffelt et al., Astron. & Astrophys. 558 (2013) A12.
[12] S. Arceo-Díaz, K.-P. Schröder, K. Zuber and D. Jack, Astropart. Phys. 70 (2015) 1.
[13] A. Studenikin, I. Tokarev, Nucl. Phys. B 884 (2014) 396-407.
[14] F. Capozzi and G. Raffelt, Phys.Rev.D 102 (2020) 083007, arXiv:2007.03694v4 (24 Mar 2021).
[15] H. Bonet et al. (CONUS Collaboration), e-Print: 2201.12257 [hep-ex].
[16] O. G. Miranda, D. K. Papoulias, M. Tórtola, J. W. F. Valle, Phys. Lett. B 808 (2020) 135685.
[17] E. Aprile et al. [XENON], Phys. Rev. D 102 (2020) 072004.
[18] K. Babu, S. Jana, M. Lindner, JHEP 2010 (2020) 040.
[19] V. Brdar, A. Greljo, J. Kopp, T. Opferkuch, JCAP01 (2021) 039.
[20] A. Parada, Adv.High Energy Phys. 2020 (2020) 5908904.
[21] K. Kouzakov, A. Studenikin, Phys. Rev. D 95 (2017) 055013.
[22] A. Grigoriev, A. Lokhov, A. Studenikin, A. Ternov, JCAP 1711 (2017) 024 (23 p.).
[23] A. Studenikin, Phys. At. Nucl. 67 (2004) 993.
[24] P. Pustoshny, A. Studenikin, Phys. Rev. D 98 (2018) 113009.
[25] A. Popov, A. Studenikin, Eur. Phys. J. C 79 (2019) 144.
[26] P. Kurashvili, K. Kouzakov, L. Chotorlishvili, A. Studenikin, Phys. Rev. D 96 (2017) 103017.
[27] K. Stankevich, A. Studenikin, Phys. Rev. D 101 (2020) 056004.
[28] M. Cadeddu, F. Dordei, C. Giunti, K. Kouzakov, E. Picciau, A. Studenikin, Phys. Rev. D 100 (2019) 073014.
[29] A. Popov, A. Studenikin, Phys. Rev. D 103 (2021) 115027.
[30] D. Aristizabal Sierra, O.G. Miranda, D.K. Papoulias, G. Sanchez Garcia, Phys.Rev.D 105 (2022) 035027.
Magnetic monopoles are one of the inevitable predictions of GUT theories. They are produced during phase transitions in the early universe, but mechanisms like the Schwinger effect in strong magnetic fields should also be taken into account. I will show that from the detection of an intergalactic magnetic field of primordial origin we can infer additional bounds on the magnetic monopole number density at the present time. I will also discuss the implications of this bound for monopole pair production in primordial magnetic fields.
The COMPASS experiment saw a potential new hadron resonance, the $\text{a}_1(1420)$, that does not fit into the quark model. Its existence can be independently verified in the semi-leptonic decay $\tau^-\to\pi^-\pi^-\pi^+\nu_\tau$. Such a study can also reveal a clear picture of the $\text{a}_1(1260)$ axial-vector meson parameters, and test for the presence of pseudoscalar and spin-exotic contributions. Moreover, the results of the study can later be used in measurements of the $\tau$ electric and magnetic dipole moments and of the $\tau$ Michel parameters. We present the preliminary results of a study of the $\tau^-\to\pi^-\pi^-\pi^+\nu_\tau$ decay with the Belle detector at the KEKB energy-asymmetric $\text{e}^+\text{e}^-$ collider using a partial-wave analysis technique.
The LHC is restarting Run-3 operation, running from this year to 2025 with an instantaneous luminosity of about 2.0×10^34 cm^-2 s^-1 sustained for longer periods. In order to cope with the high event rate, upgrades of the ATLAS Level-1 Muon trigger system were required. The Level-1 Muon trigger system identifies muons with high transverse momentum by combining data from the fast muon trigger detectors, the Resistive-Plate Chambers (RPC) and Thin-Gap Chambers (TGC). Since Run 3, the system improves the trigger logic using the new detectors, the New Small Wheel (NSW) and RPC-BIS78, installed in the inner station region for the endcap muon trigger. Finer track information from the NSW and RPC-BIS78 can be used as part of the muon trigger logic to enhance performance significantly. In order to handle data from both the TGC and the NSW, new electronics have been developed, including the trigger processor board known as the Sector Logic (SL). The SL board has a modern FPGA that makes use of Multi-Gigabit transceiver technology, used to receive data from the new detectors. The readout system for trigger data has also been re-designed, with the data transfer implemented with TCP/IP instead of a dedicated ASIC. This makes it possible to minimize the use of custom readout electronics and instead use commercial PCs and network switches to collect, format, and send the data. This readout data is useful for performance validation and further improvements. This presentation describes the aforementioned upgrades of the Level-1 Muon trigger system. Particular emphasis will be placed on the first results from the early phase of commissioning in 2022. The latest status of upcoming improvements and the expected performance will also be presented.
Identification of hadronic jets originating from heavy-flavor quarks in the final state is extremely important for studying the properties of the top quark and the Higgs boson, as well as for various searches for signatures of new physics beyond the standard model. The latest developments in identification algorithms based on deep learning methods make this an interesting topic also from a technical perspective. In this talk, a summary of various identification algorithms, along with their performance in simulation and pp collision data, in boosted and resolved topologies, will be presented. In addition, the possible improvements in the existing algorithms required to cope with the challenges of the high-luminosity LHC will be discussed.
The ATLAS trigger system underwent major upgrades between 2018 and 2022. In particular, the level-1 calorimeter (L1Calo) hardware trigger has been upgraded, and tracking has been introduced in the software-based missing transverse momentum triggers. This talk will present preliminary performance results using the updated algorithms.
Magnetic monopoles have not yet been observed despite decades of effort. The KoreA Experiment on Magnetic Monopole (KAEM) searches for fundamental magnetic monopoles in the low-mass and low-charge region. KAEM consists of a thin aluminum target, a sodium-22 source, two 1 T·m solenoids, an approximately 3 m long vacuum chamber, two electromagnetic calorimeters, and a trigger-veto detector. LYSO, CsI, and CsI(Tl) crystals, widely used in nuclear/particle physics experiments, are candidates for the trigger-veto detector and the electromagnetic calorimeters. We investigated the characteristics and performance of these crystals to decide which type satisfies the requirements of our experiment. In addition, the crystals were tested with a customized DAQ system and with electrons and gammas of tens of MeV.
This talk will present the characteristics of the several types of crystals and the beam test results obtained with the customized DAQ system.
In the linear seesaw framework, we analyse the implications of the modular $A^\prime_5$ symmetry for neutrino oscillation phenomenology. To preserve the holomorphic aspect of the superpotential, we incorporate six heavy fermion superfields along with a pair of weightons to establish the well-defined mass structure for the light active neutrinos needed by the linear seesaw mechanism. Modular symmetry has the advantage of considerably reducing the need for flavon fields. Furthermore, the Yukawa couplings transform non-trivially under the flavour symmetry group and are described in terms of Dedekind eta functions, whose $q$ expansion simplifies the numerical computations. We show that the model framework meticulously accounts for all neutrino oscillation data. In addition, we investigate the implications of the CP asymmetry resulting from the decay of the lightest heavy fermion in explaining the observed baryon asymmetry through leptogenesis.
Axions and axion-like particles (ALPs) are well-motivated cold dark matter candidates. Nevertheless, an astoundingly large parameter space remains unexplored despite much effort, ranging from fuzzy dark matter at $m_a\sim 10^{-22}$ eV to light dark matter at $m_a\sim$ keV. Most experimental ALP searches rely on the characteristic two-photon-ALP coupling. This coupling has a number of interesting observational consequences, such as mixing between photons and ALPs when a photon propagates through an external magnetic field. In this talk, we discuss the signatures that ALPs imprint on high-energy photon spectra from astrophysical sources due to photon-ALP oscillations. In particular, we present a model-independent statistical test designed to search for these signatures that may improve current experimental sensitivities significantly. The focus is on photon energies relevant for the upcoming Cherenkov Telescope Array (CTA) and on oscillations in extragalactic magnetic fields.
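For orientation, the coupling in question is usually written as $\mathcal{L}_{a\gamma} = -\frac{1}{4}\, g_{a\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}$, and in a homogeneous transverse magnetic field $B_T$ over a length $L$ the standard photon-to-ALP conversion probability (a textbook form, quoted here for context rather than taken from the analysis itself) reads

$P_{\gamma\to a} = \left(\frac{g_{a\gamma} B_T L}{2}\right)^2 \left[\frac{\sin(qL/2)}{qL/2}\right]^2, \qquad q \simeq \frac{m_a^2 - \omega_{\rm pl}^2}{2E},$

where $\omega_{\rm pl}$ is the plasma frequency of the medium; the energy dependence of $q$ is what imprints the spectral features searched for here.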
Reconstructing the type and energy of isolated pions in the ATLAS calorimeters is a key step in hadronic reconstruction. The baseline methods for local hadronic calibration were optimized early in the lifetime of the ATLAS experiment. Recently, image-based deep-learning techniques demonstrated significant improvements over the performance of these traditional techniques. This poster presents an extension of that work using point-cloud methods that do not require calorimeter clusters or particle tracks to be projected onto a fixed, regular grid. Instead, transformer, Deep Sets, and graph neural network architectures are used to process calorimeter clusters and particle tracks as point clouds. This note demonstrates the performance of these new approaches as an important step towards a fully deep-learning-based low-level hadronic reconstruction.
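To make the point-cloud idea concrete, the sketch below shows a minimal permutation-invariant Deep Sets regressor in PyTorch. It is an illustrative toy under our own assumptions (the per-cell features, layer sizes, and scalar energy target are ours), not the ATLAS implementation:

    import torch
    import torch.nn as nn

    class DeepSetsRegressor(nn.Module):
        """Permutation-invariant regression over a variable-length set of
        calorimeter cells, each described by a fixed-size feature vector."""
        def __init__(self, n_features=4, hidden=64):
            super().__init__()
            # per-cell encoder phi, applied identically to every cell
            self.phi = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU())
            # set-level decoder rho, applied to the pooled representation
            self.rho = nn.Sequential(
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))  # e.g. a calibrated pion energy

        def forward(self, cells, mask):
            # cells: (batch, n_cells, n_features); mask flags real cells
            h = self.phi(cells) * mask.unsqueeze(-1)  # zero out padding
            pooled = h.sum(dim=1)  # order-independent sum pooling
            return self.rho(pooled).squeeze(-1)

    # toy usage: 10 events, up to 50 cells with (E, eta, phi, layer) features
    cells = torch.randn(10, 50, 4)
    mask = (torch.rand(10, 50) > 0.3).float()
    print(DeepSetsRegressor()(cells, mask).shape)  # torch.Size([10])

The sum pooling makes the output invariant to the ordering of the cells, which is precisely the property that removes the need to project inputs onto a fixed, regular grid.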
To characterize the optical capability of the Near Infrared Spectro-Photometer (NISP) instrument before the in-orbit launch of the Euclid telescope, foreseen in 2023, we have analyzed data from ground-based test campaigns carried out in the laboratory, together with Monte Carlo simulations that mimic the expected NISP performance.
These pre-launch tests have been analyzed to assess the fulfillment of the mission specifications in terms of the Point Spread Function (PSF), i.e. EE50(PSF) $\leq$ 0.3 pixel, and of the spectral calibration, i.e. $\sigma(\lambda)$ < 1 pixel or $\Delta z/z \leq 0.001$, as well as to provide a first comparison between real images from the ground-based test campaigns and simulated images.
We confirm the high optical quality of the NISP instrument, which fulfills the mission specifications in terms of PSF and spectral calibration with great stability across the different test campaigns. A first comparison between simulations and data obtained from the ground-based test campaigns will be provided.
Cosmic rays (CR) inside the heliosphere interact with the solar wind and with the interplanetary magnetic field, resulting in a temporal variation of the cosmic-ray intensity near Earth for rigidities up to a few tens of GV. This variation is known as solar modulation. Previous AMS results on proton and helium spectra showed that the two fluxes behave differently in time. To better understand these unexpected results, one can study the next most abundant species. In this contribution, precision measurements of the monthly proton, helium, carbon and oxygen fluxes for the period from May 2011 to November 2019 with the Alpha Magnetic Spectrometer on the International Space Station are presented. The detailed temporal variations of the fluxes are shown up to rigidities of 60 GV. The time dependence of the C/O, He/(C+O), p/(C+O) and p/He ratios is also presented, and its implication for the shape of the nuclei LIS is discussed.
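As background on how such time variations are often parametrized (the force-field approximation, a common zeroth-order description quoted here for context; it is not necessarily the model used in the AMS analysis), the modulated flux at Earth is related to the local interstellar spectrum (LIS) by

$J(E, t) = J_{\rm LIS}(E + \Phi(t))\, \dfrac{E\,(E + 2 E_0)}{(E + \Phi(t))\,(E + \Phi(t) + 2 E_0)}, \qquad \Phi(t) = \dfrac{Z e}{A}\, \phi(t),$

where $E$ is the kinetic energy per nucleon, $E_0$ the nucleon rest energy and $\phi(t)$ the time-dependent modulation potential; species-dependent ratios such as p/He test exactly where this single-potential picture breaks down.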
A new detector capable of measuring the LHC luminosity has been installed at the interaction point of LHCb. It is named the Probe for LUminosity MEasurement (PLUME). The detector is undergoing commissioning and will operate throughout LHC Run 3. It will enable real-time monitoring of beam-condition parameters such as the luminosity, the number of visible interactions per bunch crossing, and the background; it will cross-check the LHC filling scheme in real time and contribute to the centrality determination for the LHCb fixed-target programme. The detector is based on the detection of Cherenkov light produced in quartz by charged particles coming upstream from the LHCb collision region. PLUME is charged with providing both online and offline measurements with a time response that can be as fast as a fraction of a second; it will serve the vital luminosity-levelling procedure at LHCb and act as a real-time alarm for the LHC.
A momentum-charge correlation ratio observable $r_{c}$, generalized from the balance function [1], is measured using data recorded with the H1 experiment at HERA from 2003 to 2007. This variable distinguishes between same-sign and opposite-sign charged-particle pairs [2] in a jet. The average $r_{c}$ is studied for two configurations (prongs) of the leading particles in the jet, defined with the help of declustering in a recursive soft-drop technique. When resolved as a function of other kinematic variables, such as the formation time, this probes the transition from non-perturbative to perturbative aspects of QCD. This sets the path for a novel way of studying jet substructure and the evolution of partons in a jet. The $r_{c}$ data at different prongs reveal differences between the first and subsequent splits. The data are confronted with predictions from various event generators.
H1prelim-22-032
[1] S. A. Bass, P. Danielewicz and S. Pratt, Phys. Rev. Lett. 85 (2000), 2689-2692 doi:10.1103/PhysRevLett.85.2689 [arXiv:nucl-th/0005044].
[2] Y. T. Chien, A. Deshpande, M. M. Mondal and G. Sterman, [arXiv:2109.15318]
The coherent elastic neutrino-nucleus scattering process (CE$\nu$NS) is a powerful probe of possible new neutral bosons in theories beyond the standard model, which are a possible explanation of the muon $(g-2)_\mu$ anomaly. CE$\nu$NS was first observed in 2017 by the COHERENT experiment with a cesium-iodide (CsI) detector, and later in 2020 with an argon (Ar) detector. An updated result from the CsI detector was released in 2021.
In this poster, we present the constraints on the parameters of several light boson mediator models obtained from a combined analysis of the latest data of the COHERENT CE$\nu$NS experiment. We consider a variety of vector boson mediator models and also a model with a new light scalar boson mediator. We compare these constraints with the limits obtained in other experiments, and with the values that can explain the muon $(g-2)_\mu$ anomaly in the models where the muon couples to the new boson mediator.
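For context, the SM expectation against which such mediators are tested is the standard CE$\nu$NS cross section (a textbook form; the notation here is ours),

$\dfrac{d\sigma}{dT} = \dfrac{G_F^2 M}{4\pi}\, Q_W^2 \left(1 - \dfrac{M T}{2 E_\nu^2}\right) F^2(q^2), \qquad Q_W = N - (1 - 4\sin^2\theta_W)\, Z,$

with $M$ the nuclear mass, $T$ the recoil energy and $F(q^2)$ the nuclear form factor. A light vector mediator of mass $m_{Z'}$ coupling universally to quarks and neutrinos with strength $g'$ effectively shifts the weak charge, schematically $Q_W \to Q_W + \frac{3 g'^2 (N+Z)}{\sqrt{2}\, G_F (2 M T + m_{Z'}^2)}$ (up to sign conventions), which is what makes the measured recoil spectrum sensitive to the pair $(g', m_{Z'})$.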
The measurement of hadronic resonance production in heavy-ion collisions at the LHC has led to the observation of a prolonged hadronic phase after hadronisation. Due to their short lifetimes, resonances experience the competing effects of regeneration and of rescattering of the decay products in the hadronic medium. Studying how the experimentally measured yields are affected by these processes can extend the current understanding of the properties of the hadronic phase and of the mechanisms that determine the shape of particle transverse momentum spectra.
This contribution presents new preliminary results on the production of the $\Lambda(1520)$ resonance measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. These results are compared with those from a set of hadronic resonances with lifetimes spanning 1 to 46 fm/c, such as $\rho(770)^0$, $K^{*}(892)^0$, $\Sigma(1385)^{\pm}$, $\Xi(1530)^0$ and $\phi(1020)$, measured by the ALICE experiment. The spectral shapes, mean $\it{p}_{T}$ and particle ratios are compared with predictions from the Blast-Wave model, MUSIC with a SMASH afterburner, and the statistical hadronisation model.
The Compact Muon Solenoid (CMS) experiment is a general-purpose detector installed at the Large Hadron Collider (LHC). The High-Luminosity LHC (HL-LHC) will provide 10 times higher luminosity compared to the LHC design value. To accommodate this increase and to enhance the performance of the CMS experiment, the forward region of the muon system will be equipped with 3 new sets of stations employing triple-foil Gas Electron Multiplier (GEM) detectors. These stations, GE1/1, GE2/1, and ME0, will enhance acceptance, longevity, redundancy, and triggering efficiency while operating in the harsh radiation environment of the HL-LHC. The GE1/1 stations were installed during the technical stop of 2016/2017. The GE2/1 stations will be installed during the year-end technical stop (YETS) of 2023/2024. GE2/1 detector construction started in 2021, utilizing advanced quality controls (QC) and performance checks. We describe the GE2/1 chamber geometry, the multi-institutional production of chambers, the chamber assembly and QC procedures, and results from QC measurements.
Recent measurements in high-multiplicity proton-proton collisions have shown the emergence of several features that are reminiscent of QGP phenomenology, one of which is the enhanced production of strange and multi-strange hadrons with respect to non-strange ones. Strange hadron production represents a key probe to study QGP formation in hadronic collisions as well as to understand the microscopic mechanisms behind hadronisation.
In this context, the $\phi$ meson is certainly a probe of choice for the study of strangeness production and strangeness enhancement altogether. A deeper knowledge of the production probability of single and multiple $\phi$ mesons can help validate or disqualify the inner workings of a given phenomenological model through comparisons to the related Monte Carlo generators.
Measurements of multiple $\phi$-meson production can also be combined with the measurement of the inclusive $\phi$-meson yield to provide a number of new insights into the $\phi$-meson multiplicity distribution (the probability of producing N $\phi$ mesons in one event). Important among these is the fact that we can measure the variance of such distributions and determine how they compare to a Poissonian distribution. Furthermore, the two-dimensional nature of the yield spectrum of $\phi$-meson pairs gives new perspectives for the study of the dynamics of particle production, notably by measuring the mean $p_T$ of the spectrum of a $\phi$ meson in events where a second $\phi$ meson is produced with a given $p_T$. An overview of such results is presented and will be discussed in comparison to Monte Carlo generators.
CJPL is an ideal site for low-background facilities thanks to the deepest rock overburden in the world. To prepare for future liquid-scintillator-based experiments, such as solar neutrino observation or 0$\nu\beta\beta$ searches, the Jinping 1-t prototype was built to measure various backgrounds and to verify new technologies. In 2017-2020, it detected numerous MeV radioactive background events, hundreds of high-energy muons, as well as muon-induced neutrons. Radioactive isotope (U, Th, Rn) contamination in the liquid scintillator has been studied and measured, and the radioactivity of the LS will be further suppressed once the distillation system is online. The muon flux and the neutron yield are also given. These results indicate that CJPL is an ideal place for low-background experiments. We are making steady progress on lowering the radioactive isotope content of materials, in preparation for future detectors.
A combination of projection studies of non-resonant Higgs boson pair production is performed in the $b\bar{b}\gamma\gamma$ and $b\bar{b}\tau^+\tau^-$ decay channels with the ATLAS detector, assuming 3000 fb$^{-1}$ of proton-proton collision data at a center-of-mass energy of $\sqrt{s}$ = 14 TeV at the HL-LHC. The projected results are based on extrapolations of the Run 2 analyses conducted with 139 fb$^{-1}$ of data at $\sqrt{s}$ = 13 TeV. In addition to the increased luminosity and center-of-mass energy at the HL-LHC, both experimental and theoretical systematic uncertainties are expected to be reduced relative to their Run 2 values. The projected results are expressed in terms of the significance for the observation of Standard Model Higgs boson pair production and the constraint on the Higgs boson trilinear self-coupling modifier $\kappa_\lambda$.
The magnetized Iron CALorimeter (ICAL) detector proposed at the India-based Neutrino Observatory will be a 51 kton detector made up of 151 layers of 56 mm thick soft iron plates with 40 mm air gaps in between, where the RPCs, the active detectors, are placed. The main goal of ICAL is to make precision measurements of the neutrino oscillation parameters using atmospheric neutrinos as the source. The charged-current (CC) interactions of atmospheric muon neutrinos and anti-neutrinos in the detector produce charged muons. The magnetic field, with a maximum value of $\sim$1.5 T in the central region of ICAL, is a critical component, since it is used to distinguish the muon charges and to determine the momentum and direction of the muons. The geometry of ICAL has been optimized to detect muons in the energy range of 1-15 GeV. It is difficult to measure the magnetic field inside iron, and measuring the field by external methods can introduce errors. In this study, the effect of errors in the measurement of the magnetic field in ICAL is examined: using GEANT4 simulations, an attempt is made to determine how the uncertainty in the magnetic field values propagates into the momentum reconstruction and other aspects of the physics analysis of ICAL data.
The Alpha Magnetic Spectrometer (AMS) is a high-energy particle spectrometer on board the International Space Station, taking data continuously since 2011. AMS has detected a component of Z>2 ions with rigidities below the rigidity cutoff, located in the South Atlantic Anomaly, crossing the instrument in both down-going and up-going directions.
Portal sectors, in the form of new fermions and scalars beyond the Standard Model, are among the simplest possibilities connecting the Standard Model to dark matter. However, minimal realizations of this idea often lead to troublesome cosmological histories or are in tension with dark matter detection experiments. I will discuss possible solutions to these issues in the context of dark matter portal models with heavy fermions, and their related collider signatures which offer complementary and testable probes of these scenarios. Solutions involving mixing of heavy fermions with Standard Model fermions through new scalars also lead to indirect tests from future precision measurements.
The Deep Underground Neutrino Experiment (DUNE) is an international particle physics experiment whose primary scientific objective is a precision measurement of the neutrino oscillation parameters. While the experiment was designed to focus on measuring neutrinos accurately, DUNE's unique experimental environment is expected to provide excellent opportunities for the potential discovery of new particles and the unveiling of new interactions and symmetries beyond the Standard Model (BSM). DUNE will consist of two detector complexes and the beam source. The beam will have an initial power of 1.2 MW, with a corresponding 1.1×10^21 protons-on-target, upgradable to multi-megawatt power. The Near Detector complex will be located 574 m from the neutrino source and will consist of a liquid argon Time Projection Chamber (TPC), a magnetized gaseous argon TPC, and a large, magnetized beam monitor. The Far Detector complex will be located 1.5 km underground at the Sanford Underground Research Facility (SURF) in South Dakota, at a distance of 1300 km from the neutrino source, and will consist of a 70 kt liquid argon TPC. This environment provides excellent conditions to probe many BSM physics topics; we will review the various BSM scenarios and discuss their prospects at DUNE.
As an underground multi-purpose neutrino detector with 20 kton of liquid scintillator, the Jiangmen Underground Neutrino Observatory (JUNO) has great potential to detect the diffuse supernova neutrino background (DSNB). Based on the latest knowledge of the average supernova neutrino spectrum, the star-formation rate, and the fraction of failed black-hole-forming supernovae, about 4-8 events per year are predicted within the optimal observation window from 12 MeV to 30 MeV.
We employ the latest information on the DSNB flux predictions and investigate in detail the background and its reduction for the DSNB search at JUNO. The dominant background is from the neutral-current (NC) interactions of atmospheric neutrinos with ${}^{12}$C nuclei, whose uncertainty is carefully evaluated from both the spread of model predictions and an envisaged in situ measurement. We also make a careful study of the background suppression with pulse shape discrimination (PSD) and triple coincidence (TC) cuts. Finally, we present the latest evaluation of the DSNB sensitivity of JUNO.
Nucleon decay is one of the apparent consequences of baryon number violation, as predicted in many Grand Unified Theories (GUTs). It could give an explanation for the asymmetry of matter and anti-matter in the universe. Many experiments have been constructed to search for nucleon decays, but no evidence has been found. The Jiangmen Underground Neutrino Observatory (JUNO), with more than 40k PMTs around its 20 kton liquid scintillator detector, is expected to be sensitive to many of the predicted nucleon decay modes. In this poster, prospects will be presented based on our recent progress in searching for nucleon decays at JUNO.
Precise knowledge of the proton parton distribution functions is a crucial element of accurate predictions of both Standard Model and beyond-Standard-Model physics at hadron colliders such as the LHC. We present a PDF fit at next-to-next-to-leading order in QCD demonstrating the constraining power of a diverse range of ATLAS measurements, in combination with deep-inelastic scattering data from HERA, on the parton distributions within the proton. Careful consideration is given to the correlation of systematic uncertainties within and between the ATLAS datasets. The resulting set of parton distribution functions, named ATLASpdf21, is evaluated for two choices of $\chi^2$ tolerance and compared to a range of global PDF fits.
Nowadays, Machine Learning (ML) techniques are successfully used in many areas of High-Energy Physics (HEP), e.g. in detector simulation, object reconstruction, identification, and Monte Carlo generation. ML will also play a significant role in the upcoming High-Luminosity LHC (HL-LHC) upgrade foreseen at CERN, when a huge amount of data will be produced by the LHC and collected by the experiments, facing challenges at the exascale. To favor the usage of ML in HEP analyses, it would be useful to have a service allowing one to perform the entire ML pipeline (in terms of reading the data, training a ML model, and serving predictions) directly using ROOT files of arbitrary size from local or remote distributed data sources.
The MLaaS4HEP framework is an R&D project inside CMS providing such a solution. It was successfully validated with a CMS physics use case, which gave important feedback about the needs of analysts. For instance, we introduced the possibility for the user to provide pre-processing operations, such as defining new branches and applying cuts.
To provide a real service for the user and to integrate it into the INFN Cloud, we started working on the cloudification of MLaaS4HEP. This allows the use of cloud resources and working in a distributed environment. In this work we provide updates on this topic; in particular, we discuss our first working prototype of the service. It includes an OAuth2 proxy server as the authentication layer, a MLaaS4HEP server, an XRootD proxy server for enabling access to remote ROOT data, and the TFaaS service in charge of the inference phase. With this architecture the user is able, after being authenticated, to submit a ML pipeline using local or remote ROOT files with simple HTTP calls.
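Schematically, a client interaction with such a prototype could look as follows. This is a hedged sketch: the host names, endpoint paths and payload keys below are illustrative placeholders of ours, not the actual MLaaS4HEP or TFaaS API.

    import requests

    # Illustrative placeholders: the real service sits behind an OAuth2 proxy
    SERVICE = "https://mlaas.example-cloud.infn.it"
    TOKEN = "..."  # OAuth2 bearer token obtained from the identity provider

    # A pipeline description: remote ROOT files (reachable through the
    # XRootD proxy), a user-supplied model, and pre-processing operations
    pipeline = {
        "files": ["root://xrootd-proxy.example//store/data/events.root"],
        "labels": "target",
        "model": "my_keras_model.py",
        "preproc": {"new_branches": {"pt_ratio": "pt1/pt2"},
                    "cuts": ["pt1 > 20"]},
    }

    # Submit the training job through the authentication layer
    resp = requests.post(f"{SERVICE}/submit", json=pipeline,
                         headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    job = resp.json()  # e.g. an identifier used to poll the job status

    # Inference on new events would then go through the TFaaS service
    pred = requests.post(f"{SERVICE}/tfaas/predict",
                         json={"model": "my_model", "inputs": [[0.1, 0.2]]},
                         headers={"Authorization": f"Bearer {TOKEN}"})
    print(pred.json())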
Neutrinos produced at an early stage of the Big Bang are believed to pervade the Universe. The Ptolemy project is studying novel experimental techniques to observe this relic cosmological neutrino background and to eventually study its flux and compare it with cosmological models. This requires facing challenges in material technologies and radio-frequency radiation detection, combined in a novel type of electromagnetic spectrometer. The spectrometer will be employed to observe the electrons emerging from a tritium target, used to absorb the relic neutrinos. Ptolemy is entering the construction phase of the first complete high-precision measurement module. The current status and outlook of the project are presented.
In this study, we use PYTHIA 8.2 to simulate multiparton interactions (MPI) with different PDF sets from LHAPDF6. Altogether, five parameters were selected for the final tune based on their sensitivity to the selected observables at 13 TeV published by the ATLAS Collaboration. The simulated analysis data are obtained using the Rivet analysis toolkit. The resulting tunes describe the selected data reasonably well and are also compared with other popular choices.
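As an illustration of what such an MPI tuning setup involves, the sketch below uses Pythia 8's own Python bindings; the parameter values are placeholders rather than the tuned results, and the PDF set name is just an example of the LHAPDF6 syntax:

    import pythia8  # Pythia 8 ships its own Python bindings

    pythia = pythia8.Pythia()
    pythia.readString("Beams:eCM = 13000.")      # 13 TeV pp collisions
    pythia.readString("SoftQCD:inelastic = on")  # minimum-bias events
    pythia.readString("PDF:pSet = LHAPDF6:NNPDF31_lo_as_0130")  # example set

    # MPI-sensitive parameters of the kind varied in such tunes
    # (the values below are placeholders, not the tune results):
    pythia.readString("MultipartonInteractions:pT0Ref = 2.28")
    pythia.readString("MultipartonInteractions:ecmPow = 0.215")
    pythia.readString("MultipartonInteractions:bProfile = 2")
    pythia.readString("MultipartonInteractions:coreRadius = 0.4")
    pythia.readString("MultipartonInteractions:coreFraction = 0.6")

    pythia.init()
    for _ in range(100):
        if not pythia.next():
            continue  # skip events that failed to generate
    pythia.stat()  # print cross-section and error statistics

In a real tuning campaign the generated events would be passed to Rivet, and the chosen parameters scanned against the ATLAS reference data.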
This study is based on a survey conducted during the International Masterclasses days that took place in almost all Italian universities before, during and after the Covid-19 pandemic. About 1400 students per year, mostly enrolled in scientific high schools, performed data analysis using real data collected at high-energy physics experiments (ALICE, ATLAS, Belle II, CMS, LHCb); about 100 students per year familiarized themselves with the actual operation technique used for cancer treatment employing X-rays (particle therapy). The students used the computing centers of their closest university or, during the Covid-19 pandemic, worked at home using their own PCs. A sample of 213 students participated in the survey in 2018, 400 students answered the questions at the remote edition of 2021, and 138 students participated in the survey in 2022.
The answers show a constant and significant appreciation of the activity, with 98% positive ratings and 70% very positive ratings. The comparison between the three years showed a progressive decrease of interest in physics from 65% to 57% and a small decrease of interest in a technical or scientific profession or a research profession in "hard" science subjects from 86% to 83%. The reasons given for a lack of interest in science changed significantly: the search for a greater income decreased from 40% to 12%, while the feeling of inadequacy increased from 40% to 70%.
The significance of this survey is not easy to estimate, considering that only 5% of the students of the fourth and fifth grades of their institutes participated, on a voluntary basis, in this activity.
The worrisome aspect of the decreased interest in physics is that the sample consists of students enrolled in scientific high schools who chose to follow an optional activity mainly focused on the frontiers of research in physics.
The phenomenon of neutrino oscillations emerges due to the coherent superposition of neutrino mass states. An external environment can modify the evolution of a neutrino in such a way that this coherence is violated. Such a violation is called quantum decoherence of neutrino mass states and leads to the suppression of flavor oscillations. We overview our recent results on neutrino flavour oscillations accounting for the quantum decoherence of neutrino mass states. The influence of neutrino quantum decoherence on collective neutrino oscillations in supernova bursts is also discussed.
[1] K.Stankevich, A.Studenikin, Neutrino quantum decoherence engendered by neutrino radiative decay, Phys. Rev. D 101 (2020) 056004.
[2] K.Stankevich, A.Studenikin, Collective neutrino oscillations accounting for neutrino quantum decoherence, PoS ICHEP2020 (2021) 216.
The ATLAS upgrade for HL-LHC operation includes the installation of an entirely new all-silicon Inner Tracker (ITk). The silicon strip region comprises 165 m^2 of instrumented area, made possible by the mass production of silicon strip sensors. This area is covered in a nearly hermetic way. Multiple sensor shapes are utilized: square sensors in the barrel part, and skewed trapezoidal sensors with curved edges to provide continuous coverage of the disc surface in the endcap part of the detector. As a result, there are 8 different strip sensor types in the system. They all feature AC-coupled n+-in-p strips with polysilicon biasing, developed to withstand a total fluence of 1.6×10^15 n_eq/cm^2 and a total ionizing dose of 66 Mrad. Following many years of R&D and 4 prototype submissions and evaluations, the project transitioned into pre-production in 2020, in which 5% of the total volume was produced in all 8 designs. In this contribution, we will summarize the evaluation program, test results, and experience with the pre-production sensors.
The MicroBooNE detector is a liquid argon time projection chamber (LArTPC) which recently finished recording neutrinos from both the Booster Neutrino Beam and the Neutrinos at the Main Injector beam at Fermilab. One of the primary physics goals of MicroBooNE is to make detailed measurements of neutrino-argon scattering cross sections, which are critical for the success of future neutrino oscillation experiments. At neutrino energies relevant for the Short-Baseline Neutrino program, the most plentiful event topology involves mesonless final states containing one or more protons. A low reconstruction threshold enabled by LArTPC technology has allowed MicroBooNE to pursue a number of analyses studying neutrino-induced proton production. In this talk, we present several recent cross-section measurements of this reaction mode for both muon and electron neutrinos. The results include MicroBooNE's first measurements of differential cross sections involving transverse kinematic imbalance and two-proton final states. A first look at lambda baryon production in neutrino-argon scattering is also presented.
At BESIII, the electromagnetic form factors (EMFFs) and the pair-production cross sections of various baryons have been studied. The proton EMFF ratio $|G_E/G_M|$ has been determined precisely, and the line shape of $|G_E|$ has been obtained for the first time. The recent results on neutron EMFFs at BESIII show a great improvement in comparison with previous experiments. The cross sections of various baryon pairs ($\Lambda$, $\Sigma$, $\Xi$, $\Lambda_c$) are studied from their thresholds. An anomalous enhancement behavior in the $\Lambda$ and $\Lambda_c$ pair cross sections is observed.
Latest results on inclusive and differential single top quark production cross sections are presented using the data collected by CMS. The single top quark analyses investigate separately the production of top quarks via t-channel exchange, via the associated production with a W boson (tW), and via the s-channel.
Having access to the parton-level kinematics is important for understanding the internal dynamics of particle collisions. In this talk, we present new results aiming at an efficient reconstruction of parton kinematics using machine-learning techniques. By simulating the collisions, we relate experimentally accessible quantities to the momentum fractions of the colliding partons. We use photon-hadron production to exploit the cleanliness of the photon signal, including up to NLO QCD-QED corrections. Neural networks lead to an outstanding reconstruction efficiency, suggesting a powerful strategy for unveiling the behaviour of the fundamental bricks of matter in high-energy collisions.
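A toy version of the regression task, under our own simplifying assumptions (synthetic events, the leading-order $2\to2$ relation between rapidities and momentum fractions as the target, and an off-the-shelf network), might look like this:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n, sqrt_s = 20000, 13000.0  # toy events at 13 TeV

    # Experimentally accessible inputs: photon pT and the two rapidities
    pt = rng.uniform(20.0, 200.0, n)
    y_gamma = rng.uniform(-2.5, 2.5, n)
    y_had = rng.uniform(-2.5, 2.5, n)

    # Leading-order 2 -> 2 momentum fractions as regression targets;
    # NLO QCD-QED corrections smear this relation in the real study.
    x1 = pt / sqrt_s * (np.exp(y_gamma) + np.exp(y_had))
    x2 = pt / sqrt_s * (np.exp(-y_gamma) + np.exp(-y_had))

    X = np.column_stack([pt, y_gamma, y_had])
    Y = np.column_stack([x1, x2])
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    net.fit(X_tr, Y_tr)
    print("test R^2:", net.score(X_te, Y_te))

In the actual analysis the mapping is learned from full event simulations rather than from the closed-form leading-order relation, which is exactly why a neural network is needed.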
The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kt liquid scintillator detector, which will be completed in 2023 as the largest of its kind. JUNO aims to determine the neutrino mass ordering by observing the energy-dependent oscillation probabilities of reactor anti-neutrinos.
JUNO's large volume provides the opportunity to detect atmospheric neutrino events with lower energies than today's large Cherenkov experiments. As atmospheric neutrinos reach the detector from all directions, partially experiencing the matter effect, they are especially interesting for determining the neutrino mass ordering through measurements of their oscillation probabilities.
This poster presents direction and energy reconstruction methods for atmospheric neutrino events at JUNO. The former uses a traditional approach, based on the reconstruction of the photon emission topology in the JUNO detector. For the energy reconstruction, both a traditional approach and a machine-learning approach based on Graph Convolutional Networks (GCNs) are shown.
Neutrinoless double beta decay (0$\nu\beta\beta$) is the most sensitive experimental probe of the question of whether neutrinos are Majorana or Dirac particles $[1]$. The observation of 0$\nu\beta\beta$ would not only establish the Majorana nature of neutrinos but also provide direct information on neutrino masses and probe the neutrino mass hierarchy. The present work $[2]$ explores the sensitivity required for upcoming 0$\nu\beta\beta$ experiments to probe the inverted mass hierarchy (IH) as well as the non-degenerate (ND) normal mass hierarchy (NH). We studied the required exposures of 0$\nu\beta\beta$ projects as a function of the expected background (following the "discovery potential at 3$\sigma$ with 50% probability" statistical scheme) before the experiments are performed. This work addresses the crucial role of background suppression in future 0$\nu\beta\beta$ experiments with sensitivity goals of approaching and covering ND-NH.
$[1]$ M. Agostini, G. Benato, J. A. Detwiler, J. Menéndez, F. Vissani, "Toward the discovery of matter creation with neutrinoless double-beta decay", arXiv:2202.01787 (2022).
$[2]$ M. K. Singh, H. T. Wong, L. Singh, V. Sharma, V. Singh, and Q. Yue, "Exposure-background duality in the searches of neutrinoless double beta decay", Phys. Rev. D 101, 013006 (2020).
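The qualitative scaling behind the exposure-background duality discussed above is standard and worth recalling (shown here for orientation only; the quantitative statements of $[2]$ use the 3$\sigma$/50% discovery scheme). For detection efficiency $\epsilon$, exposure $Mt$, background index $B$ and energy resolution $\Delta E$, the half-life sensitivity behaves as

$T_{1/2}^{\rm sens} \propto \epsilon\, M t$ (background-free regime), $\qquad T_{1/2}^{\rm sens} \propto \epsilon \sqrt{\dfrac{M t}{B\, \Delta E}}$ (background-dominated regime),

so that suppressing $B$ converts the square-root growth with exposure back toward the linear, background-free scaling.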
We present a study of the massless dark photon in the $K_{L}^{0}\rightarrow\gamma\bar\gamma$ decay at the J-PARC KOTO experiment. The massless dark photon ($\bar\gamma$) is different from the massive one because it has no direct mixing with the ordinary photon, but it could interact with the SM particles through direct coupling to the quarks. In some theoretical predictions, $\mathcal{BR}(K_{L}^{0}\rightarrow\gamma\bar\gamma)$ can be as large as $\mathcal{O}(10^{-3})$, which is well within the sensitivity of KOTO. Although the search for $K_{L}^{0}\rightarrow\gamma\bar\gamma$ is challenging due to the lack of kinematic constraints, the hermetic veto system of KOTO provides a unique opportunity to probe this decay. In this presentation, we will show the study of $K_{L}^{0}\rightarrow\gamma\bar\gamma$ based on the data collected in 2020.
This study presents a search for a new vector gauge boson $Z'$ predicted by the $L_{\mu}-L_{\tau}$ model with the ATLAS detector at the Large Hadron Collider. The search is carried out in the final state with four muons (4$\mu$), using the full dataset collected in Run 2 in pp collisions at $\sqrt s$ = 13 TeV, corresponding to an integrated luminosity of 139 fb$^{-1}$. A deep-learning neural network classifier is used to separate the $Z'$ signal from the Standard Model background events. The di-muon invariant masses paired in the $4\mu$ events are used to extract the $Z'$ resonance signature. No significant data excess is observed over the predicted background. Upper limits at 95\% confidence level are set on the production cross-section times the decay branching fraction of $pp \rightarrow Z'\mu\mu \rightarrow 4\mu$, and on the coupling strength of the $Z'$ boson to $\mu$, $\tau$, $\nu_{\mu}$ and $\nu_{\tau}$.
An analysis of about 211 million $B^{0}$-$\bar{B}^{0}$ pairs produced in $e^+e^-$ collisions at the $\Upsilon(4S)$ resonance and recorded by the $BABAR$ experiment is used to search for the decay $B^{0}\to\psi_{D}\Lambda$, which produces the dark matter particle ($\psi_{D}$) and baryogenesis simultaneously. The hadronic recoil method is applied, with one of the $B$ mesons from the $\Upsilon(4S)$ decay fully reconstructed, while only one $\Lambda$ baryon, decaying into a proton and a charged pion, is present on the signal $B$-meson side. The missing mass of the signal $B_{sig}$ is taken as the mass of the dark particle $\psi_{D}$. The signal events of the decay $B^{0}\to\psi_{D}\Lambda$ are selected on the missing-mass distribution in the range of 0.5 to 4.2 GeV/c$^2$ for 197 different $\psi_{D}$ mass hypotheses, and stringent upper limits on the decay branching fraction are derived.
A search for central exclusive production (CEP) of top quark pairs is presented using collision data collected by CMS and CT-PPS in 2017. A data-driven method to estimate the background from pileup protons is described, as well as the development of a BDT classifier to separate the exclusive top signal from the inclusive ttbar background. The first-ever upper limits on the cross-section of this process are shown.
A search for pair production of doubly charged Higgs ($H^{\pm \pm}$) bosons, each decaying into a pair of prompt, isolated, and highly energetic leptons with the same electric charge, is presented. The search uses a proton--proton collision data sample at a centre-of-mass energy of 13 TeV corresponding to 139 fb$^{-1}$ of integrated luminosity recorded during the Run 2 of the Large Hadron Collider by the ATLAS detector. This analysis focuses on same-charge leptonic decays, $H^{\pm \pm} \rightarrow \ell^{\pm} \ell^{\prime \pm}$, where $\ell, \ell^\prime=e, \mu, \tau$ in two-, three-, and four-lepton channels, but only considers final states which include electrons or muons. No evidence of a signal is observed. Corresponding limits on the production cross-section and consequently a lower limit on $m(H^{\pm \pm})$ are derived at 95% confidence level. Under the assumption that the branching ratios to each of the possible leptonic final states are equal, $\mathcal{B}(H^{\pm \pm} \rightarrow e^\pm e^\pm) = \mathcal{B}(H^{\pm \pm} \rightarrow e^\pm \mu^\pm) = \mathcal{B}(H^{\pm \pm} \rightarrow \mu^\pm \mu^\pm) = \mathcal{B}(H^{\pm \pm} \rightarrow e^\pm \tau^\pm) = \mathcal{B}(H^{\pm \pm} \rightarrow \mu^\pm \tau^\pm) = \mathcal{B}(H^{\pm \pm} \rightarrow \tau^\pm \tau^\pm) = 1/6$, the observed lower limit on the mass of a doubly charged Higgs boson is 1080 GeV, which represents an improvement over previous limits.
Providing a possible connection between neutrino emission and gravitational-wave (GW) bursts is important to our understanding of the physical processes that occur when black holes or neutron stars merge. In the Daya Bay experiment, using the data collected from December 2011 to August 2017, a search has been performed for electron-antineutrino signals coinciding with detected GW events, including GW150914, GW151012, GW151226, GW170104, GW170608, GW170814, and GW170817. We used three time windows of $\pm10$ s, $\pm500$ s, and $\pm1000$ s relative to the occurrence of the GW events, and a neutrino energy range of 1.8 to 100 MeV, to search for correlated neutrino candidates. The detected electron-antineutrino candidates are consistent with the expected background rates for all three time windows. Assuming monochromatic spectra, we found upper limits (90% confidence level) on the electron-antineutrino fluence of $(1.13-2.44)\times10^{11}$ cm$^{-2}$ at 5 MeV to $8.0\times10^{7}$ cm$^{-2}$ at 100 MeV for the three time windows. Under the assumption of a Fermi-Dirac spectrum, the upper limits were found to be $(5.4-7.0)\times10^{9}$ cm$^{-2}$ for the three time windows.
In a neutrino system, decoherence refers to the loss of coherence between the three neutrino mass eigenstates. The neutrino system, like any other quantum system, is open to the environment and should be treated as such. Neutrino oscillation is caused by the coherent superposition of the neutrino mass eigenstates, but owing to the open nature of the system, dissipative interactions between the neutrino sub-system and the environment lead to a loss of coherence with the propagation distance. As a result, the presence of decoherence alters the neutrino oscillation probabilities. We use the Lindblad master equation to examine the temporal evolution of the neutrinos, with decoherence entering as an additional term that accounts for the dissipative interaction with the environment. Within this general framework we compute the modified neutrino oscillation probabilities, analyze the changes, and investigate how different values of the decoherence parameter affect the oscillation probability. We will present our understanding of the effect of decoherence on the neutrino oscillation probabilities in long-baseline experiments.
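As an illustration of the damping described above, here is a minimal two-flavor sketch assuming a single constant decoherence parameter $\gamma$; the full three-flavor Lindblad treatment used in the study is more involved.

```python
import numpy as np

def p_mumu(L_km, E_GeV, gamma_per_km, sin2_2theta=0.95, dm2_eV2=2.5e-3):
    """nu_mu survival probability with Lindblad-type decoherence:
    the interference term is damped by exp(-gamma * L), and the
    standard oscillation formula is recovered for gamma = 0."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV      # = Delta m^2 L / 4E
    damping = np.exp(-gamma_per_km * L_km)
    return 1.0 - 0.5 * sin2_2theta * (1.0 - damping * np.cos(2.0 * phase))

# Long-baseline example: compare the standard and decoherent cases.
for gamma in (0.0, 1e-4):                        # units of 1/km
    print(gamma, round(p_mumu(L_km=1300.0, E_GeV=2.5, gamma_per_km=gamma), 4))
```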
CMS searches for exotic resonances are presented, based on the 13 TeV pp collision data.
Four-top-quark production, a rare process in the Standard Model (SM) with a cross-section of around 12 fb, leads to one of the heaviest final states produced at the LHC and is naturally sensitive to physics beyond the Standard Model (BSM). The central value of the cross-section measured by ATLAS is twice as large as the SM prediction (albeit with large uncertainties). A follow-up analysis is the search for a heavy (pseudo)scalar Higgs boson A/H produced in association with a top-antitop quark pair, leading to a final state with four top quarks. The data analyzed correspond to an integrated luminosity of 139 fb$^{-1}$. In this poster, the four-top-quark decay final states containing either a pair of same-sign leptons or multiple leptons (SSML) are considered. To enhance the search sensitivity, a mass-parameterized BDT is introduced to discriminate the BSM signal against the irreducible SM four-top and other dominant SM backgrounds; a minimal illustration of this technique is sketched below. Observed and expected upper bounds on the production cross-section of A/H are derived in the mass range from 400 GeV to 1000 GeV.
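A minimal sketch of the mass-parameterization idea, under common assumptions: the mass hypothesis enters the BDT as an extra input feature, and background events are assigned a random hypothesis from the signal grid. The toy feature below stands in for the actual SSML observables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
mass_grid = np.array([400., 600., 800., 1000.])     # A/H hypotheses in GeV

n = 4000
sig_mass = rng.choice(mass_grid, size=n)
x_sig = np.column_stack([rng.normal(sig_mass / 100., 1.0), sig_mass])
bkg_mass = rng.choice(mass_grid, size=n)            # random hypothesis for background
x_bkg = np.column_stack([rng.normal(4.0, 1.0), bkg_mass])

X = np.vstack([x_sig, x_bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# A single BDT, with the mass hypothesis as a feature, covers the whole grid.
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

# Any event can then be evaluated under a chosen mass hypothesis:
print(bdt.predict_proba([[8.0, 800.]])[0, 1])
```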
Since the discovery of neutrino oscillations due to their nonzero masses, these particles have been in the spotlight in the context of physics beyond the Standard Model. The left-right symmetric extension of the Standard Model can provide answers to many unsolved questions of the universe, including the parity violation of the weak charged current, the mass generation mechanism of neutrinos and the smallness of their masses compared to other fermions, and the matter/anti-matter asymmetry. We present recent results of searches for the left-right symmetric model through right-handed W and Z' production channels from CMS, using the full Run 2 dataset of pp collisions at a center-of-mass energy of 13 TeV. The searches utilize the various kinematic features and final-state objects of the target process, exploiting the full physics potential through the use of boosted objects.
The poster presents the full Run 2 results of the search for a heavy resonance decaying into a Z or W boson and a Standard Model Higgs boson (h), with the Z or W boson decaying into two leptons and the Higgs boson decaying into two b quarks. The search probes the reconstructed invariant or transverse mass distributions of the Zh and Wh candidates in the mass range from 220 GeV to 5 TeV. Upper limits at the 95% CL are set on the gluon-gluon fusion production cross sections of a pseudoscalar Higgs boson (A) in two-Higgs-doublet models and on the Drell-Yan production cross sections of heavy vector bosons (Z' and W') in a heavy-vector-triplet model.
Several extensions of the Standard Model predict a second complex Higgs doublet. The corresponding additional scalars can exhibit flavour-changing neutral currents, while in the alignment limit the SM Higgs properties are unaffected. This poster presents the results of a search for new scalar particles featuring flavour-violating couplings in the quark sector, in the multi-lepton and multi-b-jet final state. Various 2HDM signal production and decay modes are considered, including uncommon three-top and same-sign-top final states. Events are categorised according to the multiplicity of light leptons (electrons and muons), with a DNN-based categorisation used to enhance the purity of each 2HDM signal. The Monte Carlo predictions of the dominant background processes are corrected and validated using dedicated control and validation regions. Finally, dedicated DNNs are trained to discriminate signal from background in each of the signal categories and are used as final discriminants in the fit.
Many extensions of the Standard Model predict the existence of long-lived particles leading to highly unconventional experimental signatures to which standard searches are not sensitive. In this poster we present a search for pairs of neutral long-lived particles decaying hadronically and giving rise to displaced jets. This analysis considers benchmark hidden-sector models of neutral long-lived scalars with masses between 5 GeV and 475 GeV, pair-produced in decays of mediators with masses between 60 GeV and 1000 GeV. A deep neural network is used to predict whether candidate jets originate from a long-lived particle decay, SM processes, or beam-induced background, and an adversarial training is applied to minimize the impact of Monte Carlo mismodeling (a minimal sketch of this technique follows below). The analysis uses the full Run 2 (2015-2018) dataset collected in pp collisions at 13 TeV with the ATLAS detector at the Large Hadron Collider. No significant excess is observed, and upper limits are set for these signal models.
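Adversarial decorrelation can be implemented in several ways; one minimal sketch is a gradient-reversal layer (whether this exact construction is the one used in the ATLAS analysis is not stated above, so treat the code as illustrative).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

features = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
classifier = nn.Linear(32, 3)    # LLP jet / SM jet / beam-induced background
adversary = nn.Linear(32, 2)     # tries to distinguish data-like from MC-like jets

x = torch.randn(64, 10)
h = features(x)
class_logits = classifier(h)
# The adversary sees reversed gradients: any feature that helps it is
# penalized, pushing the classifier toward inputs robust to mismodeling.
adv_logits = adversary(GradReverse.apply(h, 1.0))
```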
A large number of physics models that extend the Standard Model predict the existence of new, massive, long-lived particles. Searches for these processes may target their decay products at a significant distance from the collision point. This signature presents interesting technical challenges due to its special reconstruction requirements as well as its unusual backgrounds. This poster will present recent results of long-lived SUSY searches using a displaced vertex in the Inner Detector with the full ATLAS Run 2 dataset.
The existence of magnetic monopoles is predicted by various theories of physics beyond the Standard Model. The introduction of magnetic monopoles can explain the electric charge quantization and restore the symmetry in Maxwell's equations with respect to magnetic and electric fields. Despite intense experimental searches, they remain unobserved to date.
The Large Hadron Collider (LHC) is achieving energies never reached before, opening possibilities for new physics including the discovery of exotic particles in the TeV mass range. We study the observability of virtual monopoles in the $\gamma \gamma$ channel at the LHC for monopole masses in the range 500-1000 GeV. More specifically, we consider the central exclusive production of photon pairs in both ultra-peripheral Pb-Pb and pp collisions.
A search for resonances in events with at least one isolated charged lepton ($e$ or $\mu$) is performed using 139 fb$^{-1}$ of $\sqrt{s}$ = 13 TeV proton--proton collision data recorded by the ATLAS detector at the LHC. Deviations from Standard Model predictions are tested in three- and four-body invariant mass distributions constructed from jets and leptons. The study reports the first model-independent limits on generic resonances characterized by cascade decays of particles leading to multiple jets and leptons in the final state. The limits are calculated using Gaussian shapes with different widths. The multibody invariant masses are then used to set upper limits at the 95% confidence level on a range of new-physics scenarios implemented in Monte Carlo simulations.
New particles beyond the Standard Model (SM) can affect SM processes by entering quark loops in the corresponding diagrams. In this poster, recent CMS results on rare decays involving heavy quarks and leptons are discussed.
The result of a search for non-resonant di-Higgs production in the $bbbb$ final state, using the full Run 2 dataset of proton-proton collisions at $\sqrt{s}$=13 TeV with the ATLAS detector, is presented. The $bbbb$ final state is one of the most sensitive channels for measuring the Higgs self-coupling and the di-Higgs production cross-section, thanks to its largest branching ratio. The analysis utilizes a novel neural network to estimate the large QCD backgrounds, and employs analysis categorizations to improve the sensitivity to di-Higgs production. This poster will present the analysis strategy and the latest results: the observed (expected) upper limits on the SM HH production cross-section and the constraint on the Higgs self-coupling at the 95% confidence level.
The most recent results on non-resonant Higgs boson pair production in the final state with two bottom quarks and two tau leptons will be presented. This final state has a sizeable branching fraction (7.3%), and the analysis also benefits from the precise tau identification algorithms developed within the CMS collaboration. The analysis targets the gluon-gluon fusion and vector boson fusion production modes. 95% CL limits are set on the SM production cross section, the Higgs boson trilinear self-coupling, and the coupling of two Higgs bosons to two vector bosons. The sensitivity achieved by this search, performed with the full Run 2 dataset, is five times better than the one published using the LHC 2016 dataset only. The improvement is driven by the larger statistics, the improved trigger strategy, and the use of Deep Neural Networks for object selection and signal discrimination.
Top quarks, and heavy quarks in general, are likely messengers to new physics. The scrutiny of these particles' properties must be complemented by measurements of electroweak $q\bar{q}$ production at high energies, in particular for the top quark. The International Linear Collider will offer the favorable low-background environment of e+e- annihilation combined with a high-energy reach.
This talk will review the opportunities for precision measurements of the top and heavy quarks properties at the International Linear Collider, including the search for BSM contributions and CP violation in the top sector.
Based on 10 billion $J/\psi$ events accumulated by the BESIII detector, we present searches for rare $J/\psi$ weak decays. We also search for other rare decay processes, such as the FCNC process $D^0\to\pi^0\nu\bar{\nu}$ and the decay $J/\psi\to 4\ell$. Using $J/\psi$ decays, BESIII also produces millions of hyperons, which can be used to search for the rare decay process $\Xi^-\to\Xi^0 e\nu_e$.
Higgs boson pair (HH) production is a sensitive probe of the Higgs boson trilinear self-coupling and an opportunity to explore theories beyond the Standard Model. This poster presents results of the search for non-resonant and resonant HH production in the final state with two b-jets and two tau-leptons. The analysis is performed using an integrated luminosity of 139 $\rm fb^{-1}$. The observed (expected) upper limit on the cross-section of non-resonant HH production is 4.7 (3.9) times the Standard Model prediction, while for resonant HH production the upper limits lie between 23 and 920 fb (12 and 840 fb).
Studies of Higgs boson pair production (HH) represent the next crucial step in constraining the Higgs sector and offer the chance to refine measurements of the Higgs boson self-coupling. While previous searches have focused on HH production in the gluon-gluon and vector-boson fusion modes, this analysis documents a new search, with 139 $\rm fb^{-1}$ of pp collisions at $\sqrt{s}$ = 13 TeV collected by the ATLAS detector in LHC Run 2, for di-Higgs production in the VHH final state. It searches for both resonant and non-resonant HH production, with only HH $\to$ bbbb considered for simplicity, in association with a leptonically decaying vector boson (W or Z). While this process has a lower cross-section than ggF and VBF HH production, it offers a clean final state with relatively small backgrounds, due to the presence of leptons. The analysis attempts to set limits on VHH production for the first time. Analysis techniques and expected significance will be presented.
A search is made for a vector-like $T$ quark decaying into a Higgs boson and a top quark in 13 TeV proton-proton collisions using the ATLAS detector at the Large Hadron Collider with a data sample corresponding to an integrated luminosity of 139 fb$^{-1}$.
The all-hadronic decay modes $H \rightarrow b\bar{b}$ and $t \rightarrow bW \rightarrow bq\bar{q}'$ are reconstructed as large-radius jets and identified using tagging algorithms.
Improvements in background estimation and signal discrimination, together with a larger data sample, contribute to an improvement in sensitivity over previous all-hadronic searches.
No significant excess is observed above the background, so limits are set on the production cross-section of a singlet $T$ quark at 95\% confidence level, depending on the mass, $m_{T}$, and coupling, $\kappa_{T}$, of the vector-like $T$ quark to Standard Model particles.
This search targets a mass range from 1.0 to 2.3 TeV and coupling values from 0.1 to 1.6, expanding the phase space of previous searches.
In the considered mass range, the upper limit on the allowed coupling values increases with $m_{T}$ from a minimum value of 0.35 for $1.07 < m_{T} < 1.4$ TeV up to 1.6 for $m_{T} = 2.3$ TeV.
The dimuon decay of the Higgs boson is the most promising process for probing the Yukawa couplings to second-generation fermions at the Large Hadron Collider (LHC). In this poster, we present a search for this important process using data corresponding to an integrated luminosity of 139 fb$^{-1}$ collected with the ATLAS detector in $pp$ collisions at $\sqrt{s} = 13~\mathrm{TeV}$ at the LHC. Events are divided into several regions using boosted decision trees to target different production modes of the Higgs boson. The measured signal strength (defined as the ratio of the observed signal yield to the one expected in the Standard Model) is $\mu = 1.2 \pm 0.6$. The observed (expected) significance over the background-only hypothesis for a Higgs boson with a mass of 125.09 GeV is 2.0$\sigma$ (1.7$\sigma$).
In the Standard Model (SM) the mass generation of fermions is implemented through Yukawa couplings to the Higgs boson. Experimental evidence exists for the Higgs boson couplings to second- and third-generation leptons through its decays to muon and tau pairs, but for quarks direct evidence exists only for the third-generation couplings, and direct searches for inclusive decays of the Higgs boson to lighter generations are challenging due to large QCD backgrounds at the LHC. With their distinct experimental signature, radiative decays of the Higgs boson to a meson and a photon, complemented by searches for analogous decays of the Z boson, offer an alternative probe of quark Yukawa couplings. Moreover, these decays provide an opportunity to investigate physics beyond the SM, as many such theories predict branching fractions significantly modified from the SM expectation. The rare decays of the Higgs boson in the charmonium sector, to a J/psi or psi(2S) state and a photon, provide an opportunity to access the as-yet-unobserved charm-quark Yukawa coupling; the rare decays of the Higgs boson in the bottomonium sector, to an Upsilon(1S,2S,3S) state and a photon, can provide information about the real and imaginary parts of the bottom-quark Yukawa coupling, and are particularly sensitive to deviations from the SM. The corresponding Z boson decays to the same final states provide a useful benchmark channel for the Higgs boson decays, but also offer an opportunity to test the QCD factorisation approach. Upper limits on the branching ratios of the rare decays of the Higgs and Z bosons to a vector quarkonium state and a photon were set by the ATLAS experiment using 36.1 $\rm fb^{-1}$ of data at $\sqrt{s}$ = 13 TeV, corresponding to the 2015-2016 dataset. This poster will showcase the results of the latest search, which uses the full 139 $\rm fb^{-1}$ ATLAS dataset from 2015-2018. This search targets the quarkonium decays to dimuons and uses dedicated single-photon-plus-muon triggers. With the increased statistics and a new approach to modelling the resonant background, the limits on the branching ratios of each decay channel are improved by a factor of approximately two compared to the previous result. Combined limits are also set on the Higgs and Z boson decays to either a J/psi or psi(2S) state and a photon, and to any of the Upsilon(1S,2S,3S) states and a photon.
Final states with tau leptons are experimentally challenging but open up exciting opportunities for supersymmetry (SUSY) searches. SUSY models with light sleptons could offer a dark matter candidate consistent with the observed relic dark matter density, owing to accessible co-annihilation processes. Additionally, final states with hadronically decaying taus in Run 2 benefit from the increased available dataset and improved tau identification using machine-learning algorithms. We present analyses using the full Run 2 dataset of $\sqrt{s} = 13$ TeV proton-proton collision events recorded by ATLAS, which significantly extend existing limits on the electroweak production of supersymmetric particles in hadronic tau final states and broaden the simplified models studied in these signatures.
Limited by the detection threshold, traditional dark matter searches are not sensitive to low-mass WIMPs. To reduce the threshold, events with an ionized-electron signal only (S2-only) are selected. This talk will report the latest progress of the S2-only search with PandaX-4T commissioning data. Another strategy is to search for cosmic-ray-boosted low-mass WIMPs. A search result based on the newly proposed diurnal sidereal modulation signature will also be reported.
Two-particle differential correlators of particle numbers ($R_2$) and particle transverse momenta ($P_2$ and $G_2$), recently measured in Pb-Pb collisions, emerged as powerful tools to gain insights into particle production mechanisms and infer transport properties such as the ratio of shear viscosity to entropy density of the medium created in Pb-Pb collisions. In this talk, recent ALICE measurements of these correlators in pp collisions at $\sqrt{s}$ = 7 and 13 TeV and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV are presented to provide baseline references to measurements in Pb-Pb collisions and seek evidence, in particular, for viscous effects expected to arise in fluid-like systems produced in these collisions. Additionally, these measurements in small systems also probe particle correlations associated with jets as well as low-$p_{\rm T}$ processes and their change with system size. The strength and shape of the correlators are studied as a function of produced particle multiplicity to identify evidence for longitudinal broadening that might reveal the presence of viscous effects in these smaller systems. The measured correlators and their evolution from pp and p-Pb to Pb-Pb are additionally compared to predictions from Monte Carlo models, and the potential presence of viscous effects is discussed.
Tokai to Kamioka (T2K) is a long-baseline accelerator experiment that measures the neutrino oscillation parameters by observing $\nu_\mu$ ($\bar{\nu}_\mu$) disappearance and $\nu_e$ ($\bar{\nu}_e$) appearance from a $\nu_\mu$ ($\bar{\nu}_\mu$) beam. The experiment has near and far detectors situated at 280 m and 295 km, respectively, from the beam production target. The far detector, Super-Kamiokande (SK), where the $\nu$ and $\bar{\nu}$ interact, is a water Cherenkov detector. The dominant interactions at $\sim0.6$ GeV, where the T2K flux peaks, are charged-current quasi-elastic (CCQE), which result in single-ring events. The next largest CC interaction at T2K energies is resonant 1$\pi$ production, where the events have a multi-ring topology. The addition of CC $\nu_\mu 1\pi^+$ samples to the T2K analysis is expected to improve the precision on $\sin^2\theta_{23}$ and $|\Delta{m^2}_{32}|$. Studies of the selection of CC $1\pi^+$-like events accumulated in forward horn current (FHC) operation are performed for $\nu_\mu$ samples. The estimation of systematic uncertainty is important in studies of the sensitivity to neutrino oscillation parameters. One source of uncertainty is the impact of shortcomings in the detector model on the event selection. In our study, the far-detector systematic uncertainty is estimated via a fit to atmospheric neutrino events collected in SK, using a Markov Chain Monte Carlo framework. We present the selection of $\nu_\mu$ CC1$\pi^+$ multi-ring samples and the process of estimating the detector systematic uncertainty, including these samples.
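A minimal sketch of a Metropolis-Hastings fit of one detector-systematic parameter to binned event counts; the numbers and the model are toys, not the actual SK atmospheric-neutrino framework.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta, data, expected):
    """Poisson log-likelihood for binned counts scaled by (1 + theta),
    plus a unit-width Gaussian prior on theta (illustrative only)."""
    mu = expected * (1.0 + theta)
    if np.any(mu <= 0):
        return -np.inf
    return np.sum(data * np.log(mu) - mu) - 0.5 * theta**2

expected = np.array([120., 80., 40., 15.])
data = rng.poisson(expected * 1.05)          # toy "observed" sample

theta, logp = 0.0, log_posterior(0.0, data, expected)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05)     # random-walk proposal
    logp_prop = log_posterior(prop, data, expected)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop        # Metropolis accept step
    chain.append(theta)

post = np.array(chain[5000:])                # drop burn-in
print(f"theta = {post.mean():.3f} +/- {post.std():.3f}")
```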
In this paper, we present detailed studies for measuring the production cross sections and setting model-independent limits on the anomalous magnetic and electric dipole moments $\tilde{a}_\tau$ and $\tilde{d}_\tau$ of the $\tau$-lepton, through the tau pair production channels $pp \to p\tau\bar \tau \gamma p$, $e^-p \to e^- \tau\bar \tau \gamma p$ and $e^+e^- \to e^+\tau\bar \tau \gamma e^-$ via the $\gamma^*\gamma^* \to \tau^+\tau^-\gamma$ subprocess. Measurements of the anomalous electromagnetic couplings of the tau-lepton provide an excellent opportunity to probe extensions of the Standard Model. We find that, of the three colliders considered (LHC, FCC-he and CLIC), the future CLIC at high energy and high luminosity should provide the best sensitivity on the dipole moments of the $\tau$-lepton: $\tilde a_\tau=[-0.00128, 0.00105]$ and $|\tilde{d}_\tau|= 6.4394\times 10^{-18}~{\rm e\,cm}$ at the $95\%$ confidence level.
Super-Kamiokande is a 50 kton water Cherenkov detector in Japan that has been operating since April 1996 and has accumulated a 0.37 megaton-year exposure. One of the main physics topics of the Super-Kamiokande experiment is the search for proton decay to test Grand Unified Theories. Among the three-body proton decay modes, the charged lepton plus two pion modes can be considered in a model-independent manner, with an expected decay rate of 25%-140% of that of $p \to e^+ \pi^0$. The Super-Kamiokande detector can detect all final-state particles above the Cherenkov threshold; therefore the proton mass and momentum can be reconstructed, and most of the atmospheric neutrino backgrounds can be rejected in the fiducial volume, defined as 2 m inside the wall of the inner detector. This analysis is the first search in Super-Kamiokande for nucleons decaying directly to a lepton and multiple neutral pions. In this poster, sensitivity studies of the three-body proton decay using Monte Carlo, especially via the $p \to e^+ \pi^0 \pi^0$ and $p \to \mu^+ \pi^0 \pi^0$ decay modes, will be presented.
Heavy Neutral Leptons (HNLs) have been an interesting topic for experimental particle physics in the past few years.
A study has been performed within the framework of the multi-instrument DUNE near detector complex, specifically regarding the SAND muon tracker on-axis detector, to assess the sensitivity to HNL within six years of exposure.
The meson flux has been generated using Pythia8, focusing on charmed heavy mesons to explore HNL masses between 0.3 and 1.7 GeV/c$^2$. A MadGraph/MadDump model based on the $\nu$MSM Lagrangian has been implemented and used to obtain accurate kinematics for the decays of the mesons and the HNL.
The simulated final-state particles were then propagated through the detector simulation, and a track-reconstruction algorithm based on the Kalman filter technique (sketched below), along with a simple two-body decay selection, was implemented to estimate efficiency and background rejection.
The HNL sensitivity is estimated from both the purely phenomenological and the experimental point of view, reaching $\mathcal{O}(10^{-9})$ for higher HNL masses, with about a factor 3 deterioration between the phenomenological and the experimental case. In this poster, I will present the configuration and results of these studies, and discuss potential further improvements.
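As an illustration of the Kalman-filter fit mentioned above, here is a minimal 1D sketch of a straight-line track propagated layer by layer with unit spacing; the actual SAND reconstruction is considerably more detailed.

```python
import numpy as np

def kalman_track(hits, sigma_meas=0.1, sigma_proc=0.01):
    """Fit state = (position, slope) to one hit per detector layer."""
    x = np.zeros(2)                          # initial state estimate
    P = np.eye(2) * 1e3                      # large initial covariance
    F = np.array([[1., 1.], [0., 1.]])       # propagate by one layer
    H = np.array([[1., 0.]])                 # only position is measured
    Q = np.eye(2) * sigma_proc**2            # process noise (scattering)
    R = np.array([[sigma_meas**2]])

    for z in hits:
        x, P = F @ x, F @ P @ F.T + Q        # predict to the next layer
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update with the measured hit
        P = (np.eye(2) - K @ H) @ P
    return x, P

hits = [0.12, 0.31, 0.52, 0.69, 0.91]        # toy hit positions per layer
state, cov = kalman_track(hits)
print("fitted (position, slope):", state)
```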
In this paper, we present a QCD analysis to extract the fragmentation functions (FFs) of unidentified light charged hadrons, entitled SHK22.h, from high-energy lepton-lepton annihilation and lepton-hadron scattering data sets. This analysis includes data from all available single-inclusive electron-positron annihilation (SIA) processes and semi-inclusive deep-inelastic scattering (SIDIS) measurements for unidentified light charged hadron production. The SIDIS data measured by the COMPASS experiment allow the flavor dependence of the FFs to be well constrained. We exploit the analytic derivative of a Neural Network (NN) for the parametrization of the FFs at next-to-leading-order (NLO) accuracy in perturbative QCD (pQCD). The Monte Carlo method is employed for all sources of experimental uncertainties, as well as for the parton distribution functions (PDFs). Very good agreement is achieved between the SHK22.h FF set and the most recent QCD fits available in the literature, namely JAM20 and NNFF1.1h. In addition, we discuss the impact of the inclusion of the SIDIS data on the extracted light charged hadron FFs. The resulting NLO global QCD fit of charged-hadron FFs provides valuable input for present and future high-energy measurements of charged-hadron final states.
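To illustrate the analytic-derivative idea, here is a one-hidden-layer network whose derivative with respect to the input is available in closed form; the weights are random and this is not the SHK22.h parametrization itself.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=(16, 1))
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=(1, 1))

def nn(x):
    """f(x) = W2 tanh(W1 x + b1) + b2, a one-hidden-layer network."""
    return (W2 @ np.tanh(W1 * x + b1) + b2).item()

def nn_dx(x):
    """Analytic derivative: f'(x) = W2 [ (1 - tanh^2(W1 x + b1)) * W1 ]."""
    t = np.tanh(W1 * x + b1)
    return (W2 @ ((1.0 - t**2) * W1)).item()

# Cross-check against a finite difference at one momentum fraction z.
z, eps = 0.3, 1e-6
print(nn_dx(z), (nn(z + eps) - nn(z - eps)) / (2 * eps))
```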
The future of high energy physics relies on the capability of exploring a broader energy range than current colliders, with higher statistics. The Muon Collider provides a unique possibility for combining these two aspects: as a leptonic machine it allows one to exploit the full nominal center-of-mass energy in the interaction. Moreover, losses due to synchrotron radiation are negligible with respect to electron machines, thanks to the muon mass being about 200 times larger than the electron's. For these reasons, studies aimed at designing a muon collider able to reach 10 TeV or higher center-of-mass energies with luminosity above 10$^{34}$ cm$^{-2}$s$^{-1}$ are currently ongoing. These operational conditions open up an unprecedented physics program, which ranges from high-precision Higgs boson studies to Beyond Standard Model (BSM) searches. To mention only a few examples, theoretical studies demonstrate that the direct reach of muon colliders generically exceeds the sensitivity of the High-Luminosity LHC (HL-LHC) for several BSM states, such as Composite Higgs fermionic top-partners T and supersymmetric particles such as stops, charginos, staus and squarks. Moreover, the muon collider reach exceeds that of the FCC-hh for several BSM candidates, especially for purely electroweak charged states. In addition, Dark Matter can also be studied at muon colliders in several channels, exploiting for example the disappearing tracks produced by charged particles involved in the process.
This interesting physics reach comes, however, with non-negligible technological challenges, first of all the ability to produce collimated beams of unstable particles, the muons, for a period long enough to allow high-luminosity collisions. From the detector point of view, the main challenge is related to the so-called Beam Induced Background (BIB): the muons decay and, together with their decay products, interact with the beam pipe and the surrounding material, producing a huge flux of secondary particles in which the detector must operate. FLUKA simulations show that, at a center-of-mass energy of $\sqrt{s}$=1.5 TeV, the BIB is mainly composed of low-energy neutrons, photons, and electrons/positrons. They deposit energy diffusely throughout the detector volume, with a sizeable spread in their arrival time with respect to the bunch crossing, owing to their different velocities. All these characteristics must be taken into account for a proper detector design.
The existing simulation framework is based on the iLCSoft framework, previously adopted by the CLIC Collaboration and updated for the developments of the Muon Collider. The current configuration foresees a tracking system based on multiple layers of silicon detectors, followed by the electromagnetic and hadronic calorimeters. These three components are contained within a solenoidal magnet, which provides a field of 3.57 T. Outside the solenoid extends the muon system, based on multiple layers of gaseous detectors in both the barrel and the endcap regions.
The purpose of this contribution is to describe the expected performance of a multi-purpose muon collider detector designed to reconstruct the products of collisions at $\sqrt{s}$=1.5 TeV with extreme accuracy. The results presented include the contribution from the BIB particles, in order to reflect the actual operating conditions of the detector.
The main focus of this contribution is the reconstruction performance for the different physics objects in the various regions of the detector. Starting from track reconstruction, we give an overview of the different approaches studied for the muon collider detector, which include a Conformal Tracking (CT) algorithm and a Combinatorial Kalman Filter (CKF) algorithm. The results presented show that a robust track reconstruction for charged particles above 1 GeV throughout the detector acceptance can be achieved.
Results of the jet reconstruction, based on a particle-flow approach and a kT-based clustering (sketched below), will be discussed: the reconstruction efficiency, evaluated on samples of light-, b- and c-jets, ranges from 82% at p$_T$ $\approx$ 20 GeV to 95% at higher p$_T$. The jet energy resolution ranges from about 50% to about 15%, depending on p$_T$, with significant improvement expected from the use of more advanced algorithms. Reconstruction algorithms dedicated to electrons and photons, able to cope with the BIB conditions, have been developed as well, resulting in successful reconstruction of high-p$_T$ electrons and photons with relatively small loss of efficiency and energy resolution. Finally, the muon reconstruction algorithm, which combines the information from hits in the muon system with reconstructed hits in the tracker, will be discussed: it achieves a reconstruction efficiency in the presence of BIB greater than 90% over an extended energy range.
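As an illustration of the kT-based clustering mentioned above, here is a naive sketch of the generalized-kT algorithm; production code uses FastJet, and the pt-weighted recombination below is a simplification.

```python
import numpy as np

def kt_cluster(particles, R=0.5, p=1):
    """Naive generalized-kT clustering of (pt, y, phi) tuples.
    p = 1: kT, p = 0: Cambridge/Aachen, p = -1: anti-kT."""
    jets, parts = [], [list(q) for q in particles]
    while parts:
        diB = [pt**(2 * p) for pt, _, _ in parts]         # beam distances
        best = min(range(len(parts)), key=lambda i: diB[i])
        dmin, pair = diB[best], None
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                pti, yi, phii = parts[i]
                ptj, yj, phij = parts[j]
                dphi = np.pi - abs(abs(phii - phij) - np.pi)
                dij = min(pti**(2*p), ptj**(2*p)) * ((yi - yj)**2 + dphi**2) / R**2
                if dij < dmin:
                    dmin, pair = dij, (i, j)
        if pair is None:
            jets.append(parts.pop(best))                  # promote to a jet
        else:
            i, j = pair                                   # recombine the pair
            pti, yi, phii = parts[i]
            ptj, yj, phij = parts[j]
            pt = pti + ptj
            parts[i] = [pt, (pti*yi + ptj*yj)/pt, (pti*phii + ptj*phij)/pt]
            parts.pop(j)
    return jets

print(kt_cluster([(30., 0.00, 0.10), (20., 0.05, 0.15), (10., 1.50, 2.00)]))
```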
Whenever possible, requirements in terms of detector performance arising from the simulation results presented will also be introduced, together with the proposed technological solutions.
The Standard Model theoretical prediction of the muon anomalous magnetic moment, $a_\mu = (g-2)_\mu /2$, presents a discrepancy of $4.2\sigma$ with respect to the combined Fermilab and BNL measurements.
The MUonE project is a recently proposed experiment at CERN that will help to shed light on this situation by providing an independent determination of the leading-order hadronic vacuum polarisation (HLO) contribution, which dominates the theoretical uncertainty on $a_\mu$, through the study of elastic muon-electron scattering at small momentum transfer. In order to achieve an accuracy similar to that of existing determinations of $a_\mu^{\rm HLO}$, the projected experimental precision at MUonE is of the order of $10\,$ppm. This precision level must also be reached by the theoretical calculations, by considering all possible radiative corrections as well as all processes that can constitute a background to the experimental signal.
In this talk, the analysis of a potential source of reducible background at MUonE, coming from the $\pi^0$ production in muon-electron scattering, i.e. $\mu^\pm e \rightarrow \mu^\pm e \pi^0$, is presented. This kind of study is motivated by the fact that the $\pi^0$ production is dynamically enhanced in the region of small electron and muon scattering angles, which is particularly interesting for MUonE. Moreover, the effects of this same process as a background to possible New Physics searches at MUonE are analysed, in phase-space regions complementary to the elastic-scattering ones, where one can study processes such as the production of a light new gauge boson $Z'$ via the process $\mu^\pm e \rightarrow \mu^\pm e Z'$ or of a dark photon through the process $\mu^\pm e \rightarrow \mu^\pm e A'$.
Single-differential cross section predictions for top quark pair production are presented at NLO, using running top quark mass renormalization schemes. The evolution of the mass of the top quark is performed in the MSR scheme as $m_t^{\textrm{MSR}}(\mu)$ at renormalization scales $\mu$ below the $\overline{\textrm{MS}}$ top quark mass $\overline{m}_t(\overline{m}_t)$, and in the $\overline{\textrm{MS}}$ scheme as $\overline{m}_t(\mu)$ at scales above. In particular, the implementation of a mass renormalization scale independent of the QCD renormalization and factorization scales allows the investigation of independent dynamical scale variations.
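For orientation, here is the textbook leading-order form of the $\overline{\textrm{MS}}$ running that underlies these schemes; the abstract's implementation is at NLO and also involves the MSR scheme below $\overline{m}_t(\overline{m}_t)$.

```latex
% LO \overline{MS} mass running; n_f = 5 shown for illustration
\mu \frac{\mathrm{d}\,\overline{m}_t(\mu)}{\mathrm{d}\mu}
  = -\gamma_m(\alpha_s)\, \overline{m}_t(\mu)
\quad\Longrightarrow\quad
\overline{m}_t(\mu) = \overline{m}_t(\mu_0)
  \left[\frac{\alpha_s(\mu)}{\alpha_s(\mu_0)}\right]^{\gamma_m^{(0)}/(2\beta_0)},
\qquad
\frac{\gamma_m^{(0)}}{2\beta_0} = \frac{12}{23}\ \ (n_f = 5).
```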
In the Standard Model (SM), the $b \to s$ and $b \to d$ flavor-changing neutral currents (FCNC), being loop-induced, are standard experimental channels for testing the SM precisely and searching for possible physics beyond the SM. Purely annihilation decays of $B$-mesons are of significant interest, as in the SM they are extremely suppressed and New Physics effects can substantially increase their decay widths. Radiative and semileptonic decays with $\phi$-meson production, being a subject of experimental searches at the LHC and KEKB, are typical examples of annihilation-type processes. One well-known experimental result on these decays is the upper limit on the radiative decay, ${\cal B} (B^0 \to \phi \gamma) < 10^{-7}$, obtained by the Belle collaboration in 2016 [Z. King et al. (Belle Collaboration), Phys. Rev. D 93 (2016)]. Earlier this year, the LHCb collaboration obtained the upper limit on its semileptonic counterpart, ${\cal B} (B^0 \to \phi \mu^+ \mu^-) < 3.2 \times 10^{-9}$ [R. Aaij et al. (LHCb Collaboration), arXiv:2201.10167]. Here, we consider the annihilation-type semileptonic $B^0 \to \phi \ell^+ \ell^-$ decay, where $\ell$ is a charged lepton, and present SM theoretical predictions for the branching fraction based on the effective electroweak Hamiltonian approach for the $b \to d \ell^+ \ell^-$ transitions.
Collider experiments allow us to probe the spin state of fundamental particles in addition to their kinematics. Top quarks are unique candidates for spin polarization and spin correlation measurements and can be used for precision tests of the Standard Model.
Quantum information observables, like measures of entanglement, provide an additional handle to probe spin correlations. Entanglement can be heavily influenced by new physics, with $\mathcal O$(20%) deviations from the SM in scenarios not yet excluded by other measurements.
A quantum system with large enough entanglement can violate Bell inequalities. Top quark pairs can be used for this purpose, allowing a test of quantum mechanics at the TeV scale (one standard criterion is recalled below). Additionally, in the phase-space region used for the detection of a violation of Bell inequalities, higher-dimensional effective operators are expected to become more relevant, enhancing the sensitivity of this measurement to BSM physics.
In this poster I will first present prospects for observing these quantum effects at the LHC within the Standard Model. Then I will show searches for new physics in the context of the Standard Model Effective Field Theory, focusing on both measures of entanglement and spin correlations. I will present NLO-accurate numerical simulations obtained for the first time, and demonstrate how their inclusion in global SMEFT fits in the top sector can improve existing bounds on higher-dimension operators.
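One commonly used entanglement witness for the $t\bar{t}$ spin state, built from the spin-correlation matrix $C_{ij}$, is sketched below; whether this exact observable is the one adopted in the poster is not specified above.

```latex
% Entanglement criterion for t\bar{t} from the spin-correlation matrix C:
D = \frac{\operatorname{tr} C}{3} = -3\,\langle \cos\varphi \rangle ,
\qquad
D < -\tfrac{1}{3} \;\Rightarrow\; \text{the } t\bar{t} \text{ spins are entangled},
```

where $\varphi$ is the angle between the two charged-lepton directions, each measured in the rest frame of its parent top quark.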
A long-standing discrepancy in soft photon bremsstrahlung has attracted renewed attention in view of the proposed measurements with a future upgrade of the ALICE detector in the upcoming runs of the LHC. In this talk I will discuss the possibility of applying techniques recently developed for soft gluon resummation at Next-to-Leading Power (NLP) to the soft photon spectrum.
Dark matter captured through interactions with electrons inside the Sun may annihilate via long-lived mediators to produce observable gamma-ray signals. We utilize solar gamma-ray flux measurements from the Fermi Large Area Telescope and the High Altitude Water Cherenkov observatory to put bounds on the dark matter-electron scattering cross-section. We find that our limits are four to six orders of magnitude stronger than existing limits for dark matter masses ranging from the GeV to the PeV scale.
Neutrino Elastic scattering Observation with NaI (NEON) is an experiment to detect coherent elastic neutrino-nucleus scattering (CEvNS) using reactor electron antineutrinos. NEON is based on an array of six NaI(Tl) crystals with a total mass of 15 kg, located in the tendon gallery of the Hanbit nuclear reactor, 24 m from the reactor core. The installation of the NEON detector was completed in December 2020, and the detector has been taking data at full reactor power since May 2021. The current status of the NEON experiment will be presented in this poster.
The increased radiation environment and data rate of the High-Luminosity Large Hadron Collider (HL-LHC) require upgrades to the readout electronics of the Muon Spectrometer (MS). In this talk, I will present the status of the irradiation studies for the chamber service module (CSM). The CSM is a custom-built front-end electronics board responsible for multiplexing data read out from the on-detector electronics as well as passing configuration information to them. An important component of the CSM is a field-programmable gate array (FPGA), specifically the Artix-7 XC7A35T, which is responsible for the fanout of configuration and control information to 18 mezzanine cards. The Artix-7 is a commercial component with a history of meeting our radiation specifications. The specific model used in the CSM was first tested for Single Event Effects (SEE) at LANSCE. The model was tested in a radiation-hard environment with an average flux about $10^3$ times higher than in ATLAS (6.02E+3 n/cm2/s vs 1.3E+6 n/cm2/s). The results show that the SEE test corresponds to approximately 3 years of ATLAS operation, relative to the ~1.9E+11 n/cm2/y fluence of the MDT CSM requirement. One CSM board on average has approximately 9 errors in 3 years of ATLAS running. The CSM was next tested for Total Ionization Dose (TID) at BNL. The model was tested at a dose rate of 7.675 kRad/hr. The results show that the total dose received by all four irradiated test boards exceeded more than 3x the ATLAS RTC requirement of 10 kRad.
The origin of neutrino masses remains shrouded in mystery. One possible scenario is that neutrinos have Majorana masses, which leads to neutrinoless double-beta decay ($0\nu\beta\beta$). CANDLES is a project to search for $0\nu\beta\beta$ events from ${}^{48}\mathrm{Ca}$, which has a relatively high Q$_{\beta\beta}$-value of $4.27\,$MeV among the known double-beta-decay nuclei. We developed the CANDLES-III system with 96 $\mathrm{CaF}_2$ scintillation crystals of natural Ca isotopic composition, corresponding to $350\,$g of ${}^{48}\mathrm{Ca}$, and have taken data over about $652$ days of observation since $2016$. We are preparing a method to reduce $\beta$-decay background events to increase the sensitivity to the signal.
In this talk, the development of this background-reduction method and the latest status of the search for $0\nu\beta\beta$ will be reported.
This talk discusses recent developments concerning the MiniBooNE anomaly, an excess of low-energy electron-like events in Fermilab's Booster Neutrino Beam. The latest results from the MicroBooNE collaboration disfavor an enhancement of low-energy electron neutrino interactions as the entire source of the MiniBooNE excess. However, a joint fit by the MiniBooNE collaboration, presented here, suggests that there are still regions of sterile-neutrino parameter space consistent with both experiments. Similar conclusions have been reached by other studies. That being said, the vanilla 3+1 sterile neutrino model is unable to explain the MiniBooNE excess at the lowest energies and scattering angles. This motivates the consideration of more exotic models that can explain the entirety of the excess. In this talk, we explore a model introducing an MeV-scale dipole-coupled neutral lepton alongside the typical eV-scale mixing-coupled sterile neutrino. The preferred regions of dipole parameter space with respect to the MiniBooNE excess are discussed, as well as constraints from existing MINERvA results.
One of the very interesting aspects of high-energy heavy-ion collision experiments is the detailed study of the thermodynamic properties of strongly interacting nuclear matter away from the nuclear ground state, and many efforts have focused on searching for possible phase transitions in such collisions. In this investigation, we explore the presence of thermodynamic instabilities and the realization of a pure hadronic phase transition in nuclear matter at finite temperature and baryon density. The analysis is performed by means of an effective relativistic mean-field model with the inclusion of hyperons, $\Delta$-isobars, and the lightest pseudoscalar and vector meson degrees of freedom. The Gibbs conditions on the global conservation of baryon number and zero net strangeness in symmetric nuclear matter are required. In this context, a phase transition characterized by both mechanical instability (fluctuations of the baryon density) and chemical-diffusive instability (fluctuations of the strangeness concentration) in asymmetric nuclear matter can take place. In analogy with the liquid-gas nuclear phase transition, hadronic phases with different values of the antibaryon-baryon ratio and strangeness content may coexist during the mixed phase. Such a physical regime could in principle be investigated in high-energy compressed nuclear matter experiments, where it is possible to create compressed baryonic matter with a high net baryon density.
The current era of Exascale computing brings ever-growing demands on the amount of available computing performance, storage capacity, and network throughput. This also affects the massive computing infrastructure for the management of data produced by the experiments at the LHC, the Worldwide LHC Computing Grid (WLCG). The standard funding model used for many years, enabling a resource growth of 10-20% per year, is no longer sufficient, and different methods are pursued to close the resource gap. The sites involved in the WLCG are encouraged to find non-grid external resources to be used for WLCG tasks. Probably the most important among them are High Performance Computing (HPC) centers.
In this contribution, we present an overview of one of the WLCG sites, the distributed Tier-2 center in Prague, Czech Republic. It is a standard example of a medium-size WLCG Tier-2 center in terms of hardware resources, site management, and network connections within the WLCG, so a general picture of a WLCG Tier-2 site is provided. In addition, our site complies with the current trends supported by the WLCG: first, the use of resources of the external national HPC center in Ostrava, and second, the provision of resources not only to the LHC experiments but also to other particle and astro-particle experiments. In this way we follow the recently adopted strategy towards a sustainable and shared infrastructure adapted to the needs of large Exascale science projects. In addition, we make use of BOINC, which enables additional external contributions to our resources.
We calculate for the first time the total decay widths of the charmed baryons, including all possible open-flavor decay channels, using the $^3P_0$ model. Our calculations consider the final states of charmed baryon-(vector/pseudoscalar) meson pairs and (octet/decuplet) baryon-(pseudoscalar/vector) charmed meson pairs, within a constituent quark model. Furthermore, we calculate the masses of the charmed-baryon ground states and their excitations up to the D-wave states. The charmed baryon masses are likewise calculated in a constituent quark model, in both the three-quark and quark-diquark schemes, utilizing a Hamiltonian model based on a harmonic oscillator potential plus a mass-splitting term that encodes the spin, spin-orbit, isospin, and flavor interactions. The parameters of the Hamiltonian model are fitted to experimental data on charmed baryon masses and decay widths. The experimental uncertainties of the data affect the fitted model parameters; hence we thoroughly propagate these uncertainties into our predicted charmed baryon masses and decay widths via a Monte Carlo bootstrap approach (sketched below), which is often absent in other theoretical studies on this subject. Our quantum-number assignments and our predictions for masses and strong partial decay widths are in reasonable agreement with the available data, and thus our results can guide future measurements at the LHCb and Belle (II) experiments.
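The bootstrap propagation amounts to resampling the data within their errors and refitting each replica; here is a minimal sketch with toy numbers standing in for the quark-model fit.

```python
import numpy as np

rng = np.random.default_rng(3)

masses = np.array([2286.5, 2453.0, 2518.0])   # MeV, toy "measured" masses
errors = np.array([0.14, 0.14, 0.80])         # toy experimental errors

def fit_model(data):
    """Stand-in for the Hamiltonian-model fit: a weighted mean plays
    the role of one fitted model parameter."""
    w = 1.0 / errors**2
    return np.sum(w * data) / np.sum(w)

# Bootstrap: fluctuate the inputs within errors and refit each replica;
# the spread of the refitted parameter is its propagated uncertainty.
replicas = np.array([fit_model(rng.normal(masses, errors))
                     for _ in range(5000)])
print(f"parameter = {replicas.mean():.2f} +/- {replicas.std():.2f} MeV")
```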
Hadronic atoms allow the investigation of the strong hadron-nucleon interaction at low energy in nuclear physics. High-precision X-ray spectroscopy of light kaonic atoms represents a unique tool for performing experiments equivalent to scattering at vanishing relative energies. It aims to determine the antikaon-nucleus interaction at threshold without the need for extrapolation to zero energy.
The SIDDHARTA-2 collaboration is going to perform the first measurement of kaonic deuterium transitions to the fundamental level. This measurement is essential for extracting the isospin-dependent antikaon-nucleon scattering lengths. The SIDDHARTA-2 experiment is presently installed at the DA$\Phi$NE collider of INFN-LNF. The preliminary results obtained during the machine commissioning phase, in preparation for the kaonic deuterium data-taking campaign, and the future perspectives for extreme-precision kaonic atom studies at DA$\Phi$NE are presented.
Antideuterons have never been observed in space. This presentation reviews studies of antideuterons using the Alpha Magnetic Spectrometer on the International Space Station in the rigidity range from 1 to 10 GV.
The standard gas mixture for Resistive Plate Chambers (RPC), composed of $C_{2}H_{2}F_{4}$/i-$C_{4}H_{10}$/$SF_{6}$, has a high Global Warming Potential (GWP $\sim$1430), mainly due to the presence of $C_{2}H_{2}F_{4}$. This gas is no longer recommended for industrial use, and it will therefore be problematic to use it in the near future. We report the performance of RPCs operated with new environment-friendly gases which could replace the standard mixture. The new gaseous components have very low GWP. In this work the main component of the standard mixture, $C_{2}H_{2}F_{4}$ (GWP $\sim$1300), is replaced by a suitable mixture of $CO_{2}$ (GWP = 1) and Tetrafluoropropene ($C_{3}H_{2}F_{4}$, GWP $\sim$6). The other high-GWP component, $SF_{6}$ (GWP $\sim$23900), is replaced by a new molecule, Chloro-Trifluoropropene ($C_{3}H_{2}ClF_{3}$, GWP $\sim$5), never before tested in RPC detectors. The mixtures studied have a total GWP $\sim$10 (see the sketch below). We report, for several eco-gas mixtures, the detection efficiency, streamer probability, and electronic and ionic charge as a function of the high voltage. Moreover, the timing properties are studied and the detector time resolution is measured. We also focus on a new category of signals having properties intermediate between avalanche and streamer, called "transition events". This category is negligible for the standard gas mixture but relevant for HFO-based gas mixtures. We show a direct comparison between $SF_{6}$ and $C_{3}H_{2}ClF_{3}$ to study in depth the possibility of replacing an industrially very important molecule like $SF_{6}$.
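Assuming the quoted mixture GWP is the concentration-weighted average of the component GWPs, a quick check; the volume fractions below are illustrative, not the exact mixtures studied.

```python
# Concentration-weighted GWP of a candidate eco-friendly RPC mixture.
components = {                     # gas: (volume fraction, GWP)
    "C3H2F4 (HFO-1234ze)": (0.35, 6),
    "CO2":                 (0.60, 1),
    "i-C4H10":             (0.04, 3),
    "C3H2ClF3":            (0.01, 5),
}

gwp_mix = sum(f * g for f, g in components.values())
print(f"mixture GWP ~ {gwp_mix:.1f}")   # far below the ~1430 of the standard mixture
```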
The High-Luminosity Large Hadron Collider (HL-LHC) is expected to deliver an integrated luminosity of up to 3000 fb$^{-1}$ at $\sqrt{s}$ = 14 TeV. The very high instantaneous luminosity will lead to about 200 proton-proton collisions per bunch crossing ("pileup") superimposed on each event of interest, providing extremely challenging experimental conditions. CMS prospects for the study of HH production at the HL-LHC are presented.
Liquid scintillator is widely used as a medium for the detection of charged particles in numerous applications in science, medicine, and other areas. The composition of the scintillator affects not only its performance but also the cost of the components. The spectrum of the output scintillation light also determines which photodetectors can be used in conjunction with a given scintillator formula. Optimization of this composition provides the ability to design particle detectors with a specific light yield and emission spectrum of the detection medium, or to maximize the light yield while optimizing the expenses. This work presents the component optimization of a toluene-based liquid scintillator that uses PPO as a fluor and POPOP as a secondary shifter. The light yield versus concentration and the changes in the output spectra will be presented. Future plans include light attenuation measurements.
The Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2) is an approved NASA balloon mission planned to fly in 2023 from Wanaka, NZ, with a target duration of up to 100 days. It is a pathfinder for the Probe of Extreme Multi-Messenger Astrophysics (POEMMA), a candidate for an astrophysics probe-class mission. EUSO-SPB2 will consist of a Cherenkov telescope and a fluorescence telescope. The first is optimized for fast signals and is devoted to estimating the background sources for astrophysical neutrino observations; the second looks at the nadir to measure the fluorescence emission of Ultra High Energy Cosmic Rays (UHECRs). The long-duration flight will provide a large number of VHECR Cherenkov signals and UHECR fluorescence tracks. In this paper, we discuss the calibration of the photodetection module with dedicated signals and the comparison of the camera response to simulation studies.
The bottom heavy baryons are studied in the framework of a nonrelativistic quark model. We use the hypercentral approach to solve the six-dimensional Schrödinger equation of the baryons. Introducing a potential model, the ground-state masses and magnetic moments of the $\Sigma_b$, $\Lambda_b$, $\Xi_{bc}$ and $\Xi_{bb}$ heavy baryons are calculated. We also investigate the $b \rightarrow c$ semileptonic decay widths of the bottom baryons. Finally, the branching fractions are calculated. Our results are in agreement with the available experimental data and with those of other works.
Hadronization is a non-perturbative process whose theoretical description cannot be deduced from first principles. Modeling hadron formation requires several assumptions and various phenomenological approaches. Utilizing state-of-the-art Computer Vision and Deep Learning algorithms, it is now possible to train neural networks to learn non-linear and non-perturbative features of the physical processes.
Here, I present the latest results of two deep neural networks, investigating global and kinematical quantities, namely jet- and event-shape variables. The widely used Lund string fragmentation model is applied as a baseline in $\sqrt{s}=7$ TeV proton-proton collisions to predict the most relevant observables at further LHC energies. Non-linear QCD scaling properties were also identified and validated with experimental data.
[1] G. Bíró, B. Tankó-Bartalis, G.G. Barnaföldi; arXiv:2111.15655
The superweak (SW) force is a minimal, anomaly-free U(1) extension of the standard model (SM), designed to explain the origin of (i) neutrino masses and mixing matrix elements, (ii) dark matter, (iii) cosmic inflation, (iv) stabilisation of the electroweak vacuum and (v) leptogenesis. In this talk we discuss how the parameter space of the model is constrained by providing viable scenarios for the first four items on this list. The talk is intended to give a summary of the findings published in the following research articles on the arXiv: 1812.11189, 1911.07082, 2104.11248, 2104.14571, 2105.13360, 2204.07100.
The Super-Kamiokande experiment (SK) is the water Cherenkov detector which discovered the oscillation of atmospheric neutrinos. The dominant effect of the oscillation of muon neutrinos is the appearance of tau neutrinos. Direct detection of $\nu_\tau$ in the atmospheric neutrino flux provides an unambiguous confirmation of neutrino oscillations. $\nu_\mu$ changing to $\nu_e$ is the sub-dominant $\nu_\mu$ oscillation mode, which is studied at SK to determine the mass hierarchy. Currently, $\nu_\tau$ interactions form the biggest background to the mass hierarchy signal in the SK analysis. SK uses machine-learning techniques based on neural networks to separate $\nu_\tau$ charged-current interactions from the interactions of atmospheric muon and electron neutrinos. This poster will discuss improvements in the $\nu_\tau$ identification algorithm and the corresponding improvements in the search for tau neutrinos and the suppression of mass hierarchy backgrounds.
In order to cope with the occupancy and radiation doses expected at the High-Luminosity LHC, the ATLAS experiment will replace its Inner Detector with an all-silicon Inner Tracker (ITk), containing pixel and strip subsystems. The strip subsystem will be built from modules, consisting of one n+-in-p silicon sensor, one or two PCB hybrids containing the front-end electronics, and one powerboard with high-voltage, low-voltage, and monitoring electronics. The sensors in the central region of the detector will use a simple rectangular geometry, while those in the forward region will use a radial geometry with a built-in stereo angle. To validate the expected performance of the ITk strip detector, a series of testbeam campaigns has been performed over several years at the DESY-II testbeam facility. Tracking was provided by EUDET telescopes, consisting of six Mimosa26 pixel planes. An additional pixel or strip plane was used to improve the timing resolution of the telescope. Tracks are reconstructed using the General Broken Lines algorithm, resulting in a spatial resolution of several microns. In 2021 the focus of the testbeam campaigns was on assessing the module performance post-irradiation, using the final production versions of the sensors and front-end electronics. Three modules were built from irradiated components, including the first "split" R5 module containing two sensors to be tested at a testbeam. Measurements were performed of the charge collection, signal efficiency, and noise occupancy of the modules, as well as of the tracking performance in various sensor regions. The results give confidence in the operability of the detector across its lifetime.
The BREAD Experiment [1] aims to use novel ultralow-noise photosensors for detecting axions. The earliest stage of this experiment, expected to take first data in 2023, will involve a superconducting nanowire single-photon detector (SNSPD) to run pilot axion and dark photon searches using an existing cryostat previously used by ADMX. In preparation for this, we are working with the Berggren group at MIT on testing infrared-optimized SNSPD sensors at 500 mK at Fermilab to characterize the response over broadband frequency ranges, the angle dependence, and the polarization response of the sensors, to support the BREAD conceptual design. We propose SNSPDs for this experiment given their very low dark count rates of better than $10^{-4}$ Hz [2], a measurement which we will also attempt to replicate. This talk presents the status and future plans of this work, including upgrading the experimental setup to test other state-of-the-art quantum photosensor technologies that are optimized for a wide range of frequencies in order to maximize the sensitivity to new physics signals.
[1] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.128.131801
[2] https://arxiv.org/abs/1903.05101
Researchers at IHEP have developed two types of MCP-PMTs for photon detection in particle physics. One is the 20-inch large MCP-PMT (LPMT), which embeds small MCP units in a large-area PMT for neutrino detection. More than 13k pieces of this LPMT have already been mass produced for JUNO, and it has also been evaluated by the PMT groups of LHAASO and Hyper-K. The other is the 2-inch fast MCP-PMT (FPMT), with fast timing for particle identification in collider detectors. FPMT prototypes have been produced with a 50 ps time resolution, and also with an 8x8 readout anode for position resolution. This talk will introduce these MCP-PMT variants and their performance as tested in the lab.
After successfully completing the Phase-I upgrades during Long Shutdown 2 of the LHC, the ATLAS detector is now ready to take Run-3 collision data with several upgrades implemented. The most important and challenging of these are in the Muon Spectrometer, where the two forward inner muon stations have been replaced with the New Small Wheels (NSW), equipped with two completely new detector technologies: the small-strip Thin Gap Chambers (sTGC) and the Micromegas (MM).
Following the enormous effort for the construction, commissioning and installation of the NSW, the muon software required extensive revisions and new implementations, as well as migration to a new multi-threaded approach. The new detectors have been fully integrated into the software. The detector response is simulated and compared with real data from cosmic-ray test benches and test beams. Nominal geometries, misalignments, and deformations, as well as other possible deviations from nominal operating conditions resulting from the detector validation studies, have been implemented for a realistic study of the final performance.
The simulation of both the sTGC and MM triggers has been implemented, and the performance evaluated in different configurations, with and without background, serving as a crucial input for the optimization and hardware implementation of the trigger logic.
Full muon-reconstruction performance studies have been performed, and all the software tools, including a dedicated data format, are now ready for detector commissioning with early data and for physics analyses. After an overview of the software implementation and the adopted strategies for simulation and reconstruction, a summary of the studies carried out will be presented.
Many analyses in the ATLAS physics program depend on the identification of jets containing b-hadrons (b-tagging). The corresponding algorithms are referred to as b-taggers. The baseline b-taggers are optimized for jets containing one b-hadron. A new double b-tagging algorithm, the X->bb tagger, provides better identification efficiency for reconstructing boosted resonant particles decaying into a pair of b-quarks. This is challenging in the boosted regime because of the high collimation of the two b-hadrons. This neural-network-based X->bb tagger uses the kinematic information of the large-radius (R=1.0) jet and the flavour information of associated track-jets. The performance of this tagger was evaluated using Monte Carlo simulation and could therefore differ in collision data. This poster thus presents the in situ tagging-efficiency calibration of the boosted X->bb tagger using Z->bb events recoiling against a photon or jet. The efficiency data-to-simulation scale factor is derived using the Run 2 pp collision data collected by the ATLAS experiment at sqrt{s} = 13 TeV, corresponding to an integrated luminosity of 139 fb^-1.
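For reference, a calibration of this kind extracts a scale factor of the standard form (the notation here is generic, not specific to this poster):

$$\mathrm{SF}(p_T) \;=\; \frac{\varepsilon^{\mathrm{data}}_{X\to bb}(p_T)}{\varepsilon^{\mathrm{MC}}_{X\to bb}(p_T)},$$

so that simulated tagging efficiencies are corrected bin by bin in jet $p_T$ before being used in physics analyses.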
The muon system of the CMS detector at CERN plays an important role in many searches for physics phenomena within and beyond the Standard Model, notably the Higgs boson discovery and the observation of the Bs0 and B0 muon decays. The next phase of the LHC, the high-luminosity LHC (HL-LHC), foresees an increase of the instantaneous luminosity in order to extend the discovery potential of the detector. To meet the increased particle rates and to ensure a robust and redundant system, CMS is adding new detector layers in the forward region of the muon system. The endcap regions will be equipped with Gas Electron Multiplier (GEM) detectors and improved Resistive Plate Chambers (iRPC). The first of three GEM detector systems (called GE1/1) has already been installed and will operate during Run 3 of the LHC starting this year. The alignment of the new detector is mandatory for correct muon transverse-momentum assignment, and thus for muon triggering and reconstruction. We report the status of a newly developed back-propagation method for GEM alignment, which reduces the muon-momentum dependence due to multiple scattering compared to the standard alignment technique using muon tracks in the CMS tracker system. This new method significantly improves the relative GEM-CSC system alignment.
The High Luminosity Large Hadron Collider (HL-LHC) at CERN is expected to collide protons at a centre-of-mass energy of 14 TeV and to reach the unprecedented peak instantaneous luminosity of 7 x 10^34 cm^-2 s^-1 with an average number of pileup events of 200. This will allow the ATLAS and CMS experiments to collect integrated luminosities up to 4000 fb^-1 during the project lifetime. To cope with this extreme scenario the CMS detector will be substantially upgraded before the start of the HL-LHC, a plan known as the CMS Phase-2 upgrade. The entire CMS silicon pixel detector will be replaced, and the new Inner Tracker (IT) will feature increased radiation hardness, higher granularity, and the capability to handle higher data rates and a longer trigger latency. The upgraded IT will be composed of a barrel part, TBPX, and small and large forward disks, TFPX and TEPX. The TEPX detector has four large disks on each side, extending the coverage up to |eta|=4.0. In TEPX the modules are arranged in five concentric rings. In this contribution the new TEPX detector will be presented, with particular focus on the mechanics and thermal performance. A thorough overview of the TEPX design will be given, including the validation of the serial-powering implementation for the pixel modules. Lightweight materials, including prototype titanium cooling loops, ensure a low material budget. The mechanical design, together with prototypes, will be discussed. The effect of the material choice for the cooling pipes and disk support structures, which connect the modules to the CO2 coolant, is also studied using finite-element methods.
After successful operation of the Precision Proton Spectrometer (PPS) since 2016, the CMS Collaboration has published an Expression of Interest to pursue the study of central exclusive production (CEP) events, pp --> pXp, at the High-Luminosity LHC (HL-LHC) with detection of the very forward protons. This talk will present the desired performance and the physics perspectives of a CMS near-beam proton spectrometer at HL-LHC.
As there are no known astrophysical sources of cosmic-ray (CR) antiprotons, they represent a good channel for indirect dark matter searches. The secondary antiproton background is produced in collisions between primary CRs and the interstellar medium (spallation). In the last decade, thanks to high-precision measurements by AMS-02 and PAMELA, a possible tension between the observed antiproton flux and different predictive models has been highlighted between 1 and 500 GeV in antiproton kinetic energy.
The large uncertainties which afflict antiproton flux predictions do not allow us to confirm the presence of an exotic signal, and this deserves further investigation. In the 10-100 GeV range, the dominant uncertainties are those on the production cross sections: the pp, p-He and He-p channels are responsible for almost all the cosmic antiprotons. In 2017 the NA61/SHINE experiment at the SPS collected new pp-collision data useful to study this discrepancy, and in 2018 the SMOG experiment at LHCb made the very first p-He measurements. Additional p-He collision data, with center-of-mass energies lower than those of the LHC, are still needed to reduce the cross-section uncertainties for astroparticle physics. For this purpose, the COMPASS++/AMBER experiment will provide new data on pp and p-He collisions. The state of the art of the cosmic antiproton puzzle is presented, along with antiproton flux predictions using GALPROP and future perspectives.
Innovative experimental techniques are needed to further the search for dark matter weakly interacting massive particles. The ultimate limit is set by the ability to efficiently reconstruct and identify nuclear- and electronic-recoil events at the experimental energy threshold. Gaseous Time Projection Chambers (TPCs) with optical readout are very promising candidates thanks to the 3D event-reconstruction capability of the TPC technique and the high sensitivity and granularity of the latest generation of scientific light sensors. The CYGNO experiment is pursuing this technique by developing a TPC operated with a He-CF4 gas mixture at atmospheric pressure, equipped with a Gas Electron Multiplier (GEM) amplification stage where visible light is produced. The combined use of high-granularity sCMOS cameras and fast light sensors allows the reconstruction of the 3D direction of the tracks, offering good energy resolution and very high sensitivity in the few-keV energy range, together with very good particle identification useful for distinguishing nuclear recoils from electronic recoils. We present the design and the sensitivity of a demonstrator which is currently being installed underground at LNGS and will be operated already in 2022. The performance of the demonstrator is evaluated with advanced Monte Carlo simulations of the radioactivity of the materials and of the LNGS cavern background, together with calibrations against radioactive sources. We show that good energy and spatial resolution, as well as discriminating power between nuclear and electronic recoils, is achieved in the keV energy range. The CYGNO collaboration plans to demonstrate the scalability of this detector concept to a target mass large enough to significantly extend our knowledge of the nature of DM and of solar neutrinos.
The search for lepton creation and Majorana neutrinos with double-beta decays is about to enter a new era. Several ton-scale experiments are in preparation to explore the full parameter space allowed by theories predicting inverted-ordered neutrino masses. In this paper, we evaluate the discovery probability of a combined analysis of such a multi-experiment endeavor assuming the complementary scenario, in which neutrino masses are normally ordered. The discovery probability strongly depends on the mass of the lightest neutrino, ranging from zero probability in the case of vanishing lightest-neutrino masses up to 87% for mass values just beyond the current constraints. We study the discovery probability for a selection of priors on the lightest neutrino mass, including exciting possible future scenarios in which cosmological surveys measure the sum of neutrino masses. Uncertainties of the nuclear calculations which influence all experiments are also evaluated, and are found to partially compensate each other when data from different isotopes are available. Although discovery is far from being guaranteed, the theoretical motivations for these searches and the presence of scenarios with high discovery probability strongly motivate the proposed international, multi-isotope experimental enterprise.
The polarized structure functions of the 3He and 3H nuclei are calculated in NLO approximation, both considering and disregarding light sea-quark symmetry breaking. We employ the polarized structure functions of the nucleons within the nucleus extracted from our two recent analyses, one of polarized DIS data and one of polarized DIS+SIDIS data. Since the data of the second analysis cover a wider range of the Bjorken variable, both SU(2) and SU(3) symmetry breaking are considered within that analysis. We then calculate and compare the polarized structure functions of the nuclei extracted from both scenarios. The Bjorken and ELT sum rules are also calculated using the moments of the structure functions. Finally, it is observed that most of the results of the phenomenological model with symmetry breaking are more compatible with the experimental results and predictions.
The rates at which b- and c-quarks hadronize into different hadron species (i.e. the heavy-flavour (HF) production fractions) may vary among MC shower simulations such as Pythia, Sherpa, and Herwig. Furthermore, the flavor-tagging efficiencies in ATLAS have been found to depend on the hadron species inside a jet: for example, the flavor-tagging efficiency for c-jets is largest for D+ mesons and lowest for charm baryons. Because of this, the flavor-tagging efficiency in MC depends on the MC shower software and needs to be corrected on an individual basis. The ATLAS Collaboration has developed a method of reweighting the HF production fractions to a common world average, which largely eliminates the difference in flavor-tagging efficiency between different MC samples. Moreover, the experimental uncertainties in the HF production fractions (typically 2-3% relative uncertainty) can also be propagated with the same reweighting procedure, which provides a common way of estimating these systematic uncertainties in ATLAS.
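A minimal sketch of such a production-fraction reweighting, with placeholder fractions and names (not ATLAS code), could look as follows:

    # Sketch only: per-event weight from the ratio of world-average to
    # generator-level heavy-flavour production fractions. All numbers are
    # placeholders, not measured values.
    world_avg = {"D+": 0.24, "D0": 0.61, "D_s": 0.08, "c-baryon": 0.07}
    generator = {"D+": 0.27, "D0": 0.58, "D_s": 0.08, "c-baryon": 0.07}

    def event_weight(hadrons):
        """Multiply fraction ratios for each weakly decaying HF hadron in the event."""
        w = 1.0
        for h in hadrons:
            w *= world_avg[h] / generator[h]
        return w

    print(event_weight(["D+", "D0"]))  # weight applied to this MC event

Varying the target fractions within their quoted 2-3% uncertainties and recomputing the weights would then reproduce the common systematic-uncertainty scheme described above.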
We employ machine learning techniques to identify important features that distinguish jets produced in heavy-ion collisions from jets produced in proton-proton collisions [1]. We formulate the problem as binary classification and focus on leveraging machine learning in ways that inform theoretical calculations of jet modification: (i) we quantify the information content in terms of infrared-collinear (IRC) safety and in terms of hard vs. soft emissions, (ii) we identify optimally discriminating observables that are analytically tractable, and (iii) we assess the information loss due to the heavy-ion underlying event and background-subtraction algorithms. We illustrate our methodology using Monte Carlo event generators, where we find that important information about jet quenching is contained not only in hard splittings but also in soft emissions and IRC-unsafe physics inside the jet. This information appears to be significantly reduced by the presence of the underlying event. We discuss the implications of this for the prospect of using jet quenching to extract properties of the QGP. Since the training labels are exactly known, this methodology can be used directly on experimental data without reliance on modeling. We outline a proposal for how such an experimental analysis can be carried out, and how it can guide future measurements.
[1] Y. Lai, J. Mulligan, M. Płoskoń, F. Ringer, arXiv:2111.14589 [hep-ph], submitted to JHEP.
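As an illustration of the binary-classification setup described above (a toy sketch with synthetic features, not the analysis code of Ref. [1]):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Toy stand-ins for per-jet observables (e.g. IRC-safe substructure
    # variables); "quenched" jets are emulated by shifted feature means.
    rng = np.random.default_rng(0)
    X_pp = rng.normal(0.0, 1.0, size=(5000, 3))   # label 0: pp jets
    X_hi = rng.normal(0.3, 1.0, size=(5000, 3))   # label 1: heavy-ion jets
    X = np.vstack([X_pp, X_hi])
    y = np.concatenate([np.zeros(5000), np.ones(5000)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

In the study itself the labels come from the generator (pp vs. heavy-ion sample), which is what allows the same training to be run directly on experimental data.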
The 'Laser-hybrid Accelerator for Radiobiological Applications', LhARA, is conceived as a uniquely flexible international facility dedicated to the study of a completely new regime of radiobiology. The ambition of the multidisciplinary collaboration is that the technologies demonstrated in LhARA will be transformative in the delivery of ion-beam therapy.
The laser-hybrid approach offers enormous potential by providing a more flexible, compact, and cost-effective high-energy particle source while evading the space-charge limitations of current sources. LhARA uses a high-power laser to generate an ultrashort burst of protons or light ions from a target. These are captured using strong-focusing electron-plasma (Gabor) lenses at energies up to 15 MeV, enabling ultra-high instantaneous dose rates of up to $10^9$ Gy/s in pulses as short as 10-40 ns. Further acceleration up to 127 MeV is provided by a fixed-field alternating-gradient accelerator designed to accommodate the source flexibility. Measuring the extremely high-flux, low-energy proton and ion beams at LhARA presents significant challenges. Novel techniques such as beam-gas-curtain profile monitors and ion-acoustic dose-profile monitors are being developed for use in proof-of-principle systems. The status of the LhARA project in the context of the Ion Therapy Research Facility recently proposed to UKRI will be described, along with the LhARA collaboration's vision for the development of a transformative proton- and ion-beam system.
The Mu2e experiment at Fermilab will search for the charged-lepton-flavor-violating neutrinoless conversion of a muon into an electron in the field of an aluminum nucleus, with a sensitivity improvement by a factor of 10,000 over existing limits.
The Mu2e Trigger and Data Acquisition System (TDAQ) uses the \emph{otsdaq} framework as its online Data Acquisition System (DAQ) solution.
Developed at Fermilab, \emph{otsdaq} integrates several framework components - an \emph{artdaq}-based DAQ, an \emph{art}-based event processing, and an EPICS-based detector control system (DCS), and provides a uniform multi-user interface to its components through a web browser.
Data streams from the Mu2e tracker and calorimeter are handled by the \emph{artdaq}-based DAQ and processed by a one-level software trigger implemented within the \emph{art} framework.
Events accepted by the trigger have their data combined, post-trigger, with the separately read out data from the Mu2e Cosmic Ray Veto system.
Mu2e's DCS is based on EPICS (Experimental Physics and Industrial Control System), an open-source platform for monitoring, control, alarms, and archiving.
A prototype of the TDAQ and DCS systems has been built and tested over the last three years at Fermilab's Feynman Computing Center, and the production-system installation is now underway. The talk will present their status, focusing on the installation plans and procedures for racks, workstations, network switches, gateway computers, DAQ hardware, the slow-controls implementation, and testing. It will also discuss the network design and cabling, quality-assurance plans and procedures for the trigger-farm computers, and the system and software maintenance plans.
The long-standing tension between the Standard Model prediction and the measured value of the muon anomalous magnetic moment can be addressed by new physics in the TeV range. Simplified models provide a way of understanding concretely how the discrepancy is resolved, and make it possible to predict other observables correlated with the muon g-2. In this talk I will explore the predictions which are testable at a future high-energy muon collider, identifying some crucial processes which are unique signatures of these models.
The region of high baryonic density ($\mu_{B}$) of the QCD phase diagram is the object of several studies, focused on the investigation of the order of the phase transition and the search for the critical point. Rare probes, which include electromagnetic observables and heavy-quark production and which are experimentally challenging to access as they require large integrated luminosities, could be studied with a fixed-target experiment. A future experiment, NA60+ at CERN, is being proposed to access this region and perform accurate measurements of the di-muon spectrum from threshold up to the charmonium region, as well as a study of charm and strange hadrons. The CERN SPS can cover, with large beam intensity, the collision-energy region 5 < $\sqrt{s}$ < 17 GeV, which until now has scarcely been studied with rare observables. The proposed experiment includes a muon spectrometer based on tracking gas detectors (GEM, MWPC), coupled to a vertex spectrometer based on Si detectors (MAPS). The first data taking, with Pb and proton beams, is foreseen for the time slot after Long Shutdown 3 of the LHC (>2027).
In this contribution we will review the project and the recent R&D, including the technical aspects as well as studies of the physics performance for the key observables.
The Large Hadron Collider (LHC) is the world's highest energy particle accelerator, providing unique opportunities for directly searching for new physics Beyond the Standard Model (BSM). Massive long-lived particles (LLPs), which are absent in the Standard Model, occur in many well-motivated BSM theories. These new massive LLPs can decay into other particles away from the LHC collision point, resulting in unusual experimental signatures and hence requiring customized and complex experimental techniques to identify them. Previously, the ATLAS experiment did not have dedicated triggers to explicitly identify massive LLPs decaying in the inner tracking detectors using tracking information. To enhance the sensitivity of searches, a series of new triggers customized for various unconventional tracking signatures, such as "displaced" tracks and short tracks which "disappear" within the tracking detector, have been developed and will be utilized in the upcoming Run-3 data taking starting in 2022. The development of these triggers and their expected performance will be presented.
The ATLAS trigger system includes a Level-1 (L1) trigger based on custom electronics and firmware, and a high-level trigger based on off-the-shelf hardware and processing software. The L1 trigger system uses information from the calorimeters and from the muon trigger detectors, consisting of Resistive Plate Chambers in the barrel, and of Thin-Gap Chambers, small-strip Thin-Gap Chambers and MicroMegas in the endcaps. Once information from all muon trigger sectors has been received, trigger-candidate multiplicities are calculated by the Muon-to-Central-Trigger-Processor Interface (MUCTPI). In the next stage, muon multiplicity information is sent to the Central Trigger Processor (CTP) and trigger objects are sent to the topological trigger. The CTP combines the information received from the MUCTPI with the trigger information from the calorimeters and the topological trigger, and makes the L1 trigger decision. As part of the upgrade of the ATLAS L1 trigger system for Run 3 of the Large Hadron Collider (LHC), a new MUCTPI has been designed and commissioned. The upgrade replaces 18 VME boards with a single ATCA board based on three high-end FPGAs and one System-on-Chip (SoC), with the sector-logic input data received on 208 optical links. Two FPGAs are used as Muon Sector Processors (MSPs), one for each side of the ATLAS detector, and one FPGA is used as the Trigger and Readout Processor (TRP). The MSPs receive trigger information from the 208 muon trigger sectors, perform overlap handling to flag/remove duplicate muon candidates, calculate the transverse-momentum-threshold multiplicities and send trigger objects to the topological trigger system. The TRP combines the trigger information, and sends trigger multiplicities to the CTP and trigger data to the Data Acquisition (DAQ) system. The DAQ run-control software for configuration, control and monitoring of the MUCTPI runs directly on the SoC. We discuss the commissioning and integration of the new MUCTPI used in ATLAS from the beginning of Run 3. In particular, we describe the monitoring tools which have been developed for the commissioning and operation of the new MUCTPI, and the challenges which had to be overcome to integrate the system in the experiment. Furthermore, we report the performance of the MUCTPI at the beginning of Run 3 of the LHC.
The proton-proton collision rate at the High-Luminosity LHC will impose significant challenges on the data-acquisition system used to read out the CMS Muon Cathode-Strip Chambers (CSCs). These chambers are located in the endcap regions of the CMS detector, and those closest to the beam line encounter a particularly high particle flux. To address these issues, a major upgrade of the electronics used in the CSC system has been undertaken. A key part of this upgrade is the development of new Optical Data-Acquisition MotherBoards (ODMBs), which collect both the anode-wire and cathode-strip data. The ODMBs feature powerful Xilinx Field-Programmable Gate Arrays (FPGAs) and include interfaces with high-speed optical transceivers operating at up to 12.5 Gb/s. The requirements, design, implementation, and testing of the ODMBs will be discussed, and the performance of prototype boards will be presented.
The CMS experiment has greatly benefited from the utilization of the particle-flow (PF) algorithm for the offline reconstruction of the data. The Phase-2 upgrade of the CMS detector for the High-Luminosity upgrade of the LHC (HL-LHC) includes the introduction of tracking in the Level-1 trigger, thus offering the possibility of developing a simplified PF algorithm in the Level-1 trigger. We present the logic of the algorithm, along with its inputs and its firmware implementation. We show that this implementation is capable of operating under the limited timing and processing resources available in the Level-1 trigger environment. The expected performance and physics implications of such an algorithm are shown using Monte Carlo samples with high pile-up, simulating the harsh conditions of the HL-LHC. New calorimeter features allow better performance under high pile-up (PU) to be achieved, provided that careful tuning and selection of the prompt clusters is made. Additionally, advanced pile-up mitigation techniques are needed to preserve the physics performance in the high-intensity environment. We present a method that combines all information yielding PF candidates and performs Pile-Up Per Particle Identification (PUPPI), capable of running in the low-latency Level-1 trigger environment. A demonstration of the algorithm on dedicated hardware based on the ATCA platform is presented.
The Pixel Luminosity Telescope is a silicon pixel detector dedicated to luminosity measurement at the CMS experiment. It consists of 48 silicon sensor planes arranged into 16 "telescopes" of three planes each, with eight telescopes arranged around the beam pipe at either end of the CMS detector, outside the pixel endcap at a distance of approximately 1.75 m from the interaction point. The planes in a telescope are positioned such that a particle coming from the interaction point passing through a telescope will produce a hit in each of the three planes of the telescope. The instantaneous luminosity is measured from this rate of triple coincidences, using a special "fast-or" readout at the full bunch-crossing rate of 40 MHz, allowing for real-time, high-precision luminosity information to be provided to CMS and the LHC. The full pixel information, including hit position and charge, is read out at a lower rate and can be used for studies of systematic effects in the measurement. We present the commissioning, calibration, operational history, and performance of the detector during Run 2 (2015-2018) of the LHC, together with lessons learned for future projects. The detector has been rebuilt for LHC Run 3, with one of the telescopes using prototype CMS Phase-2 n-in-p sensors of 150 μm thickness. First performance results of the new detector will also be shown.
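Schematically, rate-based luminometry of this kind converts the measured mean number of triple coincidences per bunch crossing, $\mu$, into an instantaneous luminosity via the standard relation

$$\mathcal{L} \;=\; \frac{\mu\, n_b\, f_{\mathrm{rev}}}{\sigma_{\mathrm{vis}}},$$

where $n_b$ is the number of colliding bunch pairs, $f_{\mathrm{rev}}$ is the LHC revolution frequency, and $\sigma_{\mathrm{vis}}$ is the visible cross section of the triple-coincidence signature, typically calibrated with van der Meer scans.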
Euclid is a European Space Agency satellite mission whose aim is to investigate the so-called "dark universe" (dark matter and dark energy) and strongly constrain the main cosmological parameters. In order to satisfy the scientific mission requirements, an extensive calibration procedure must be performed both on the ground and in flight.
The same source in the sky can be recorded with a different count rate depending on its position within the Focal Plane (FP), because of a non-uniform transmission of the light introduced by the telescope optics. Since this distortion can vary over the mission lifetime, due to changes in the space environment and to outgassing, a monthly in-flight self-calibration will be required. This procedure allows the illumination variation to be reconstructed through multiple observations of the same sources in different positions of the FP.
We present a method for the selection of the optimal telescope pointing pattern for the Euclid self-calibration, the key point being the proper sampling of the spatial scales of interest (above a hundred pixels), which can be quantified from the distributions of the source records on the FP.
Lithium chloride water solution is a good option for solar neutrino detection. The $\nu_e$ charged-current (CC) interaction cross-section on $\rm{{}^{7}Li}$ is evaluated with new B(GT) experimental measurements. The total CC interaction cross-section weighted by the solar $^8$B electron-neutrino spectrum is $3.759\times10^{-42}~\rm{cm}^2$, which is about 60 times that of the neutrino-electron elastic-scattering process. The final-state effective kinetic energy after the CC interaction on $\rm{{}^{7}Li}$ directly reflects the neutrino energy, in sharp contrast to the plateau structure of recoil electrons from elastic scattering. With the high solubility of LiCl of 74.5 g/100 g water at 10$^\circ$C and the high natural abundance of 92.41%, the molarity of $\rm{{}^{7}Li}$ in water can reach 11 mol/L for safe operation at room temperature. The CC event rate of $\nu_e$ on $\rm{{}^{7}Li}$ in the LiCl water solution is comparable to that of neutrino-electron elastic scattering. In addition, the $\nu_e$ CC interaction with the contained $\rm{{}^{37}Cl}$ also contributes a few percent of the total CC event rate. The contained $\rm{{}^{35}Cl}$ and $\rm{{}^{6}Li}$ also make delayed-coincidence detection of electron antineutrinos possible. The recrystallization method is found to be applicable for LiCl sample purification. The measured attenuation length of $11\pm1$ m at 430 nm shows that the LiCl solution is practicable for a 10-m diameter detector for solar neutrino detection. Clear advantages are found in studying the upturn effect of solar neutrino oscillation, light sterile neutrinos, and the Earth matter effect. The sensitivities for discovering the solar neutrino upturn and light sterile neutrinos are shown. The full research article can be found at arXiv:2203.01860.
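A back-of-the-envelope check of the quoted molarity is straightforward (the solution density used below is an assumed round number, not taken from the paper):

    # Rough check of the ~11 mol/L figure quoted above.
    M_LiCl = 6.94 + 35.45                # g/mol (Li + Cl)
    solubility = 74.5                    # g LiCl per 100 g water at 10 C
    rho_solution = 1.26                  # g/cm^3, assumed solution density
    volume_L = (100.0 + solubility) / rho_solution / 1000.0
    mol_Li7 = (solubility / M_LiCl) * 0.9241   # 7Li natural abundance
    print(mol_Li7 / volume_L)            # ~11.7 mol/L, in the ballpark of ~11 mol/L

The operating concentration quoted in the abstract sits somewhat below saturation, consistent with the stated margin for safe operation at room temperature.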
The SABRE (Sodium iodide with Active Background REjection) experiments aim to detect an annual rate modulation from dark matter interactions in ultra-high purity NaI(Tl) crystals in order to provide a model independent test of the signal observed by DAMA/LIBRA. The SABRE South experiment is located at the Stawell Underground Physics Laboratory (SUPL), Australia, and is partnered with SABRE North at the Laboratori Nazionali del Gran Sasso (LNGS). SUPL is the first deep underground laboratory in the Southern Hemisphere and is due to be ready for use by mid-2022.
SABRE South is designed to disentangle seasonal or site-related effects from the dark-matter-like modulated signal by using an active veto and muon detection system. Ultra-high-purity NaI(Tl) crystals are immersed in a linear alkylbenzene (LAB) based liquid scintillator veto, further surrounded by passive steel and polyethylene shielding and a plastic scintillator muon veto. Work has been undertaken to understand and mitigate the leading background processes, and to understand the performance of both the crystal and veto systems. In this talk we will present the final experiment design and results of our full GEANT4-based background simulation model. SABRE South has an expected background of < 0.7 cpd/kg/keV, allowing us to probe a DAMA/LIBRA-like signature with 3σ sensitivity within around 2 years of data taking. We will also discuss the overall experiment construction status and the characterisation of the key detector components, including our photomultiplier tubes, NaI(Tl) crystals, LAB liquid veto, and muon system.
Finally, we will report on the design of SUPL and its science program for the near future. SABRE South is the first major experiment in SUPL, with assembly to commence soon after the handover.
DUNE is a next-generation long-baseline neutrino-oscillation experiment. The near detector (ND) complex aims at constraining the systematic uncertainties to ensure high-precision measurements of the neutrino oscillation parameters. The SAND apparatus is one of the three components of the ND, permanently located on-axis to monitor the neutrino-beam stability and measure its flux. SAND exploits a 0.6 T superconducting magnet coupled with an electromagnetic calorimeter made of lead and scintillating fibers. The inner magnetized volume houses a novel LAr detector and a low-density Straw Tube Tracker (STT). In this poster the major components of SAND and their role in the measurement of (anti)neutrino interactions are presented.
The REINFORCE EU project (Research Infrastructures FOR Citizens in Europe) engages and supports citizens to cooperate with researchers and actively contribute to the development of new knowledge for the needs of science and society.
REINFORCE offers four "discovery demonstrators" in different areas of physics. The infrastructure of all demonstrators is based on Zooniverse, the most popular citizen-science platform.
We will present the demonstrator titled "Search for new particles at CERN", which engages citizen scientists in searches for new long-lived particles produced in the high-energy proton-proton collisions at the LHC at CERN and registered by the ATLAS experiment. To make this possible, the demonstrator adopts a three-stage architecture. The first two stages use simulated data to train citizens, but also to allow for a quantitative assessment of their performance and a comparison with machine-learning algorithms. The third stage, on the other hand, uses real data from the ATLAS Open Data subset, providing two research paths: (a) the study of Higgs boson decays to two photons, one of which could convert to an electron-positron pair by interaction with detector material, and (b) the search for yet undiscovered long-lived particles, predicted by certain theories Beyond the Standard Model. At the end of the project the collected citizens' results will be assessed and analyzed.
Since the launch of the demonstrator on Zooniverse, it has reached a large number of volunteers, and has motivated them to play a part in frontier scientific research. Up to now more than 150,000 classifications have been registered in all three stages together.
During the last 15 years the "Radio MonteCarLow" ("Radiative Corrections and Monte Carlo Generators for Low Energies") Working Group, see www.lnf.infn.it/wg/sighad/, has been providing valuable support to the development of radiative corrections and Monte Carlo generators for low-energy e+e- data and tau-lepton decays. Its operation, which started in 2006, proceeded until the last few years, bringing together at 20 meetings both theorists and experimentalists, experts working in the field of e+e- physics and partly also the tau community. The group produced the report "Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data", S. Actis et al., Eur. Phys. J. C 66, 585-686 (2010) (https://arxiv.org/abs/0912.0749), which has more than 300 citations.
While the working group has been operating for more than 15 years without a formal basis for funding, parts of our program have recently been included as a Joint Research Initiative in the group application of the European hadron-physics community, STRONG2020, to the European Union, with the more specific goal of creating an annotated database for low-energy hadronic cross sections in e+e- collisions. The database will contain information about the reliability of the data sets, their systematic errors, and the treatment of radiative corrections (RC).
All these efforts have been recently revitalized by the new high-precision measurement of the anomalous magnetic moment of the muon at Fermilab, which, when combined with the final result from the Brookhaven experiment, shows a 4.2σ discrepancy with respect to the state-of-the-art theoretical prediction from the Standard Model, including an evaluation of the leading-order hadronic-vacuum-polarization contribution from e+e- → hadrons cross-section data.
We will report on these Radio MonteCarLow and STRONG2020 activities.
The Jiangmen Underground Neutrino Observatory (JUNO) is a new-generation reactor-based experiment located in the Guangdong province of China. The experiment offers a rich physics program and will bring significant contributions to many areas of neutrino physics, in particular the determination of the neutrino mass ordering and the measurement of the oscillation parameters at the percent level.
The central detector consists of a sphere filled with 20 kilotons of liquid scintillator, surrounded by about 17612 large photomultipliers (20-inch) and 25600 small photomultipliers (3-inch) to read out the light produced by event interactions. Even though the detector is located at 700 m depth in an underground laboratory, the remaining background imposes the use of a veto system for its characterization and to ensure an efficient event selection. In particular, the cosmogenic background induced by muons passing through the central detector represents the most dangerous contribution and needs to be precisely characterized. The veto system is assigned to this task and consists of two subsystems, the Outer Veto (OV) and the Top Tracker (TT). The OV is a water-Cherenkov detector surrounding the central detector and is equipped with 2400 large photomultipliers (20-inch) fixed on the support structure looking outward. The JUNO TT reuses the modules from the decommissioned OPERA experiment, which are based on the well-known plastic-scintillator technology equipped with wavelength-shifting fibers. It will be placed on top of the central detector for efficient muon-track reconstruction. In this poster, the status of the veto system will be presented, with some elements on the trigger strategy.
Sexaquarks are hypothetical low-mass, small-radius uuddss dibaryons, proposed recently in particular as a candidate for Dark Matter [1,2]. The low-mass region below 2 GeV escapes the upper limits set by experiments which searched for the unstable, higher-mass H-dibaryon and did not find it [1].
Depending on its mass, such a state may be absolutely stable, or almost stable with a lifetime of the order of the age of the Universe, therefore making it a possible Dark Matter candidate [2].
Even though not everyone agrees [3], its possible cosmological implications as a DM candidate cannot be excluded, and it has recently been searched for by the BaBar experiment [4].
The assumption of a light Sexaquark has been shown to be consistent with observations of neutron stars [5], and a Bose-Einstein condensate of light Sexaquarks has been discussed as a mechanism that could induce quark deconfinement in the core of neutron stars [6].
S production in heavy ion collisions is expected to be much more favorable than in the only experimental search to date, $\Upsilon \rightarrow S \overline{\Lambda} \overline{\Lambda} $ [4], which is severely suppressed by requiring a low multiplicity exclusive final state [1]. By contrast, parton coalescence and/or thermal production give much larger rates in heavy ion collisions [1,8].
We use a model which has very successfully described hadron and nuclei production in nucleus-nucleus collisions at the LHC [7] in order to estimate the thermal production rate of Sexaquarks with the characteristics discussed above, which render them DM candidates.
We show new results on the variation of the Sexaquark production rates with the assumed mass, radius, temperature and chemical potentials, together with their ratios to hadrons and nuclei, and discuss the consequences.
These estimates are important for future experimental searches and enrich theoretical estimates in the multiquark sector.
[1] G. R. Farrar, arXiv:1708.08951 [hep-ph] (2017); G. R. Farrar, arXiv:2201.01334 [hep-ph] (2022).
[2] G. R. Farrar, arXiv:1805.03723 [hep-ph] (2018); G. R. Farrar, X. Xu and Z. Wang, arXiv:2007.10378 [hep-ph] (2020).
[3] E. Kolb and M. Turner, Phys. Rev. D 99 (2019) no.6, 063519.
[4] BaBar Collaboration, J. P. Lees et al., Phys. Rev. Lett. 122 (2019) no.7, 072002.
[5] D. Blaschke et al., arXiv:2202.00652 [nucl-th].
[6] D. Blaschke et al., arXiv:2202.05061 [nucl-th].
[7] K. A. Bugaev et al., Nucl. Phys. A 970 (2018) 133-155, and references therein.
[8] D. Blaschke et al., Int. J. Mod. Phys. A 36 (2021) 25, 2141005, arXiv:2111.03770.
Precision results on cosmic-ray electrons are presented in the energy range from 0.5 GeV to 2.0 TeV, based on 50 million electrons collected by the Alpha Magnetic Spectrometer on the International Space Station. In the entire energy range the electron and positron spectra have distinctly different magnitudes and energy dependences. At medium energies, the electron flux exhibits a significant excess starting from 49.5 GeV compared to the lower-energy trends, but the nature of this excess is different from that of the positron-flux excess above 24.2 GeV. At high energies, our data show that the electron spectrum is best described by the sum of two power-law components and a positron source term. This is the first indication of the existence of an identical charge-symmetric source term in both the positron and electron spectra and, as a consequence, of the existence of new physics.
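Schematically, the fitted description has the form (notation generic, not the collaboration's exact parametrization):

$$\Phi_{e^-}(E) \;=\; C_a\!\left(\frac{E}{E_0}\right)^{\gamma_a} + C_b\!\left(\frac{E}{E_0}\right)^{\gamma_b} + \Phi_{s}(E),$$

where the two power laws describe the bulk of the spectrum and $\Phi_s$ is the charge-symmetric source term shared with the positron flux.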
Transverse momentum ($p_{T}$) spectra of charged hadrons at mid-pseudorapidity in deformed Xe-Xe collisions at 5.44 TeV center-of-mass energy, obtained within the Monte Carlo HYDJET++ (HYDrodynamics plus JETs) model framework, are reported.
Event-shape observables such as transverse spherocity ($S_{0}$) have evolved into a powerful tool to separate soft and hard contributions to an event in small collision systems. To understand this phenomenon, we use two-particle differential-number correlation functions, $R_{2}$, and transverse-momentum correlation functions, $P_{2}$, of charged particles produced in pp collisions at the LHC center-of-mass energy $\sqrt{\textit{s}}$ = 7 TeV with the PYTHIA model. The $\Delta\varphi$ dependence of these correlation functions in different multiplicity and $S_{0}$ classes is discussed. We find that these correlation functions exhibit different shapes and sizes on both the near side (NS) and the away side (AS) across multiplicity and $S_{0}$ classes. We see a strong correlation on the NS and AS of these correlation functions for low $S_{0}$ (jetty-like events), which becomes weaker for high $S_{0}$ (isotropic events). In addition, the mean $\textit{p}_{\rm T}$ of charged particles for low-$S_{0}$, high-$S_{0}$ and $S_{0}$-integrated classes is discussed. Finally, we observe that $S_{0}$ is a better observable than multiplicity for disentangling jetty and isotropic events in a small collision system.
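For reference, transverse spherocity is commonly defined as

$$S_0 \;=\; \frac{\pi^2}{4}\,\Big(\min_{\hat{n}}\,\frac{\sum_i |\vec{p}_{T,i}\times\hat{n}|}{\sum_i p_{T,i}}\Big)^{2},$$

where $\hat{n}$ is a unit vector in the transverse plane and the sums run over the charged particles of the event; $S_0 \to 0$ selects jetty (back-to-back) events and $S_0 \to 1$ isotropic ones.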
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the Standard Model as well as searches for new physics beyond it. The Compact Muon Solenoid (CMS) experiment plans to entirely replace its trigger and data-acquisition system to achieve this ambitious physics program. Efficiently collecting these datasets will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. The new Level-1 trigger architecture for the HL-LHC will improve performance with respect to Phase I through the addition of tracking information and subdetector upgrades leading to higher granularity and precision-timing information. In this poster, we present a large panel of trigger algorithms for the upgraded Phase II trigger system, which benefit from the finer information to optimally reconstruct the physics objects. Dedicated pile-up mitigation techniques are implemented for lepton isolation, particle jets and missing transverse energy to keep the rate under control. The expected performance of the new trigger algorithms will be presented, based on simulated collision data of the HL-LHC. The selection techniques used to trigger efficiently on benchmark analyses will be presented, along with the strategies employed to guarantee efficient triggering for new resonances and other new-physics signals.
The identification of jets containing b-hadrons, b-tagging, is critical for many ATLAS physics analyses. Its performance is measured in data, and the simulation is corrected through simulation-to-data scale factors. However, such measurements only cover a certain jet pT range, so the b-tagging performance at higher pT must be evaluated via a simulation-based extrapolation method. This work considers a widely used scheme, the "pseudo-continuous" working points, which constitutes a flexible way to apply a set of different b-tagging requirements within the same ATLAS physics analysis. A brief introduction is given to this scheme, and the corresponding simulation-based extrapolation to high-pT jets is presented for the first time. In addition, a new statistical tool, denoted "eigenvector recomposition", has been developed to allow for the correct combination of analyses relying on different b-tagging setups. It correlates common systematic uncertainties related to b-tagging in a mathematically sound way. Its application in the combination of the "boosted" and "resolved" VH(H->bb) analyses is shown as an example.
My research shows that the production, collisions, and decays of matter in space result in the formation of the high-energy (HE) particle spectra measured in cosmic-ray physics and astrophysics. If we understand how a HE proton produces protons in a collision with another proton (or antiproton), it becomes possible to predict the form of various particle spectra in astrophysics. LHC experiments can provide us with the proton spectra at very high collision energies (VHE). Phenomenological studies of previous years gave us the Quark-Gluon String Model for modeling baryon- and meson-production spectra over the full kinematical range, from centrally produced hadrons up to very forward ones. The method is simply to convert the primary proton spectrum into the laboratory system and to compare it with the spectra of various CR particles. I have shown that the spectra of neutrinos and cosmic protons reproduce the form of the proton-production spectrum from a single collision of an initial proton of ultra-high energy (UHE). The gamma spectrum from SN1987 lacks these features because it is influenced by the spectrum of low-energy $\pi^0$ mesons. Nevertheless, a bump in the gamma spectrum is seen in measurements at PeV energies. In this way, the enhancement in the fluxes of gammas, neutrinos, and cosmic-ray protons at the end of their spectra is the signature of UHE proton-proton collisions.
Axions, hypothetical particles associated with the spontaneous breaking of a postulated U(1) symmetry, offer a dynamic solution to the strong CP problem, an important puzzle in the standard model (SM). Axions in the mass range of 1 μeV - 10 meV are considered as favored candidates for dark matter. They have extremely weak interactions with the SM fields, making relevant searches exceptionally difficult. To date, the cavity haloscope has remained the most sensitive approach in this mass range. Relying on the two-photon coupling, it utilizes a frequency-tunable microwave cavity immersed in a strong magnetic field. The Center for Axion and Precision Physics Research recently implemented a flux-driven Josephson parametric amplifier in the receiver chain of an axion haloscope, reducing the system's noise down to 200 mK. We present the results of the axion dark matter search for the mass range of 9.39 - 9.51 μeV. We also discuss a newly developed scanning method that can improve the scan speed by approximately 30%.
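Schematically, the gain from such noise reduction follows from the Dicke radiometer equation,

$$\mathrm{SNR} \;=\; \frac{P_{\mathrm{sig}}}{k_B T_{\mathrm{sys}}}\sqrt{\frac{t}{\Delta\nu_a}},$$

where $P_{\mathrm{sig}}$ is the expected axion conversion power, $t$ the integration time and $\Delta\nu_a$ the signal bandwidth; at fixed target SNR the scan rate therefore scales as $df/dt \propto T_{\mathrm{sys}}^{-2}$, so halving the system noise temperature roughly quadruples the frequency range covered per unit time.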
We perform a sensitivity study of an unbinned angular analysis of the $B\to D^{*}\ell\nu_{\ell}$ decay, including the contributions from the right-handed (R.H.) vector current. We show that the Wilson coefficient of the R.H. vector current can be strongly constrained by measuring the normalized angular observables $\langle g_i\rangle$ ($i=1,2,...,11$) without the intervention of the $V_{cb}$ puzzle.
We update our analysis of D meson mixing including the latest experimental results. We also derive constraints on absorptive and dispersive CP violation by combining all available data, and discuss future projections. We also provide posterior distributions for observable parameters appearing in D physics.
The DANSS experiment is located on a movable platform below the 3.1 GW industrial reactor of the Kalininskaya NPP. The detector is a solid-state scintillator spectrometer collecting up to 5000 neutrino events per day with only 2% background. The experiment has already been running for 6 years, and more than 6 million inverse-beta-decay events have been collected. DANSS has explored a large portion of the possible parameter space of sterile-neutrino oscillations. No statistically significant signal has been found so far. The strongest limit, sin$^2 2\theta$ < 0.008, is set around $\Delta m^2$ ~ 1 eV$^2$.
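In the 3+1 short-baseline approximation used for such searches, the survival probability is

$$P_{\bar{\nu}_e\to\bar{\nu}_e} \;=\; 1-\sin^2 2\theta\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{m}]}{E[\mathrm{MeV}]}\right),$$

which DANSS probes by comparing positron energy spectra measured at the different detector positions allowed by the movable platform.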
The main drawback of the detector is a moderate energy resolution of 34% at 1 MeV. This limits its sensitivity, especially in the region of larger $\Delta m^2$. The aim of the planned upgrade is to reach an energy resolution of 12% at 1 MeV. We also plan to use a SiPM-only readout and to increase the sensitive volume by 70%, keeping the same passive shielding and platform. The main idea of the upgrade is a new design of the scintillator strips, providing larger light output with much better uniformity. The strips will be read out from both edges, which will allow the reconstruction of all three coordinates even if only a single strip is hit.
The talk covers the detector design and expected sensitivity, as well as the beam test of the new strip prototypes with the pion beam of the PNPI synchrocyclotron. The new strips demonstrated more than twice higher light output together with a fairly flat detector-response uniformity. For a better time response of the new strips we are going to use the newer wavelength-shifting (WLS) fiber YS-2 by Kuraray. A dedicated study of this fiber with 360 nm picosecond laser pulses demonstrated a nearly twice shorter decay time (4.0 ns) compared to the mature Y-11 fiber. The light output of the YS-2 and Y-11 fibers was also compared using a $^{90}$Sr source and cosmic rays, and the new fiber turned out to be at least as good as the mature one. The upgrade is planned on a two-year timescale. Mass production of the new strips is already nearly finished, and we are doing the last tests before the start of their assembly with fibers and SiPMs.
The High-Luminosity Large Hadron Collider (HL-LHC) project aims to boost the performance of the LHC, augmenting the potential for discoveries and the accuracy of SM measurements. From LHC Run 4 onwards, the upgrade aims at increasing the instantaneous luminosity of the machine, to target an overall ten-fold increase of the collected dataset compared to the initial LHC design. In order to withstand the expected increase of both integrated doses and event rates, the on-board electronics hosting the first level of readout and trigger of the CMS DT chambers will be replaced with the new On-Board electronics for DT (OBDT). Time-digitization (TDC) data from the OBDTs will be streamed directly to an upgraded backend system that, relying on the latest commercial FPGAs, will perform event building and generate trigger primitives (TP) exploiting the ultimate DT cell resolution. Additionally, the Detector Safety System (DSS) will also undergo a redesign, which entails the development of new hardware, called the MONitor for SAfety (Monsa) system. To demonstrate the Phase-2 architecture, a readout based on early OBDT prototypes was deployed in parallel to the legacy electronics, through front-end splitting, on a full DT sector. Data from the OBDTs are streamed into proxy backend boards, where the trigger-primitive generation algorithm designed for the upgrade is run. In parallel, close-to-final prototypes of the OBDTs were assembled and tested under radiation. In this report, the motivation for such an upgrade will be highlighted and the status of the development and testing of the DT Phase-2 electronics at large will be discussed. Moreover, the most up-to-date performance results from the DT slice-test operation will be presented, as well as the plans to augment the demonstrator using final OBDT prototypes.
We study the tensor mesons (J=2,3) within a low-energy effective model of QCD, based on the approximate symmetries of the QCD Lagrangian. Results for the tree-level hadronic and radiative decay rates of the tensor mesons are presented. Since the tensor mesons are experimentally well established, we can compare our theoretical results with those in the Particle Data Group (PDG) listings. We furthermore present a comparison with Lattice QCD calculations.
We study the allowed parameter space of the scalar sector in the superweak extension of the standard model (SWSM). The allowed region is defined by the conditions of (i) stability of the vacuum, (ii) perturbativity up to the Planck scale, and (iii) the requirement that the pole mass of the Higgs boson fall into its experimentally measured range. The analysis uses two-loop renormalization-group equations and quantum corrections to the parameters at two-loop accuracy. A well-defined region is found whether the mass of the new scalar $M_s$ is larger or smaller than the Higgs boson mass. We study the dependence of the allowed parameter space on the size of the sterile-neutrino Yukawa coupling $y_x$. Finally, we discuss the SWSM quantum corrections to the W boson mass and check their effect on constraining the parameter space. The talk is based on the work https://arxiv.org/abs/2204.07100
The primary goal of JUNO is to resolve the neutrino mass hierarchy using precision spectral measurements of reactor antineutrino oscillations. To achieve this goal, a precise knowledge of the unoscillated reactor spectrum is required in order to constrain its fine structure. To this end, the Taishan Antineutrino Observatory (TAO), a ton-level, high-energy-resolution liquid-scintillator detector with a baseline of about 30 m, is set up as a reference detector for JUNO. The 20% increase in photosensor coverage, the replacement of Photomultiplier Tubes (PMTs) with Silicon Photomultiplier (SiPM) tiles, the smaller dimensions and the operating temperature of -50 °C enable TAO to achieve a yield of 4,500 p.e./MeV. Consequently, TAO will achieve an energy resolution better than 2%/$\sqrt{E(\mathrm{MeV})}$.
The ability to accurately reconstruct reactor antineutrino events in TAO is of great importance for providing a model-independent reference spectrum for JUNO. Previous studies have shown that deep learning yields competitive reconstruction results. This work aims to demonstrate the general applicability of Graph Neural Networks (GNNs) to the vertex reconstruction. Owing to the spherical geometry of the detector, a GNN architecture is preferred, since it eliminates the need for coordinate transformations. The dataset for model training and validation is generated by the Monte Carlo method with the official TAO offline software. The network is trained on aggregated features obtained from the information collected by the SiPMs. Preliminary reconstruction results are presented in this poster.
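A minimal sketch of such a vertex-regression GNN, using PyTorch Geometric, could look as follows; the node features and the graph construction are assumptions for illustration, not the official TAO implementation:

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv, global_mean_pool

    class VertexGNN(torch.nn.Module):
        """Regress the (x, y, z) interaction vertex from a graph of SiPM tiles."""
        def __init__(self, n_features=4, hidden=64):
            super().__init__()
            # assumed per-tile features: e.g. collected charge, first-hit time,
            # and the tile position on the sphere
            self.conv1 = GCNConv(n_features, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.out = torch.nn.Linear(hidden, 3)

        def forward(self, data):
            # data.edge_index could connect neighbouring tiles (assumption)
            x = F.relu(self.conv1(data.x, data.edge_index))
            x = F.relu(self.conv2(x, data.edge_index))
            return self.out(global_mean_pool(x, data.batch))

    # Training would minimize e.g. F.mse_loss(model(batch), batch.y)
    # on Monte Carlo events with known true vertices.

Because the message passing operates on the graph of tiles rather than on a fixed image grid, no projection or coordinate transformation of the spherical detector surface is needed, which is the design point made above.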
The associated production of a Higgs and a Z boson at the LHC receives an important contribution from the gluon-initiated channel $gg \rightarrow ZH$. Currently, exact analytic results for the NLO QCD corrections to this partonic process are not known, due to the presence of top-quark-mediated two-loop box diagrams in the virtual contribution. The inclusion of the gluon-initiated component at NLO would reduce the theoretical uncertainties of the hadronic process $pp \rightarrow ZH$, which also affect the determination of the $H \rightarrow b \bar{b}$ decay.
In this poster I will present the calculation of the virtual QCD corrections to $gg \rightarrow ZH$ using an analytic approximation, based on the expansion of the amplitude in terms of a small transverse momentum of the final-state particles. This method provides an approximation of the virtual corrections with an accuracy below the percent level for center-of-mass energies up to ~750 GeV, which contribute to ~98% of the hadronic cross section at the LHC. I will also report on the recent combination of these results with the ones obtained from a complementary approach, which is based on the expansion of the amplitude in the high-energy limit. When the results of both expansions are improved using Padé approximants, their combination provides accurate results over the whole phase space.
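For reference, the $[m/n]$ Padé approximant used in such improvements is the rational function

$$[m/n]_f(z) \;=\; \frac{\sum_{k=0}^{m} a_k z^k}{1+\sum_{k=1}^{n} b_k z^k}, \qquad [m/n]_f(z)-f(z)=\mathcal{O}\!\left(z^{m+n+1}\right),$$

whose coefficients are fixed by matching the known truncated series of the amplitude; it typically extends the region of validity of the expansion beyond the radius of convergence of the series itself.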
Large research infrastructures have opened new observational windows, allowing us to study the structure of matter up to the scale of the entire Universe. However, society hardly observes these developments through education and outreach activities. This induces a gap between frontier science and society that may create misconceptions about the content, context, and mission of publicly funded science. In this context, the main goal of the European Union's Horizon 2020 "Science with and for Society" REINFORCE project (REsearch INfrastructure FOR Citizens in Europe) is to minimize the knowledge gap between large research infrastructures and society through Citizen Science. A series of activities is being developed on the Zooniverse platform, in four main fields of frontier physics involving large research infrastructures: gravitational waves with the VIRGO interferometer, particle physics with the ATLAS detector at the LHC, neutrinos with the KM3NeT telescope, and cosmic rays at the interface of geoscience and archeology. Using real and simulated data, Citizen Scientists will help build a better understanding of the impact of the environment on these very-high-precision detectors as well as create new knowledge. This poster focuses on the Deep Sea Explorers demonstrator involving the KM3NeT neutrino telescope, in order to show practical examples of the Citizen Science activities proposed through the project. Preliminary results of the work carried out with the help of the citizen scientists will also be presented.
To progress in their research, scientific communities generally rely on shared computing resources aggregated into clusters.
To provide fair use of the computing resources to every user, administrators of these clusters set up Local Resource Management Systems.
They orchestrate the scientific workloads and the resources by allowing a given workload to be executed for a certain time on a definite number of CPUs or machines.
To maximize the use of the computing resources and avoid running out of time, users may assess their environments by executing fast CPU benchmarking solutions such as DIRAC Benchmark.
Developed in Python 2 in 2012, DIRAC Benchmark has been employed successfully, mainly in the context of the LHCb experiment.
Now that Python 2 is deprecated, DIRAC Benchmark has to be ported to Python 3.
This paper describes this transition, the impact brought by the changes, and the considered solutions.
The main contributions of this paper are: (i) an extensive description of the CPU benchmarking tools in High-Energy Physics; (ii) various methods to improve and maintain the program, such as unit tests and a Continuous Integration and Delivery pipeline; (iii) a comprehensive analysis of the discrepancies that could have caused 22% of the workloads to fail; (iv) a review of the advantages and drawbacks of the considered solutions.
The problems were addressed by applying to the scores a constant factor that depends on the underlying CPU model and the Python version used.
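A minimal sketch of the correction described above; the factor values and CPU model names are hypothetical placeholders, whereas the real calibration table would come from the paper's measurement campaign.

```python
# Hedged sketch: rescale a DIRAC Benchmark score so that Python 3
# results stay comparable with the historical Python 2 ones.
# The (CPU model, Python version) -> factor entries below are invented
# for illustration only.
CORRECTION_FACTORS = {
    ("Intel Xeon E5-2650", "3.9"): 1.12,
    ("AMD EPYC 7551", "3.9"): 0.95,
}

def corrected_score(raw_score: float, cpu_model: str, py_version: str) -> float:
    """Apply the per-(CPU model, Python version) constant factor;
    fall back to 1.0 for unknown combinations."""
    factor = CORRECTION_FACTORS.get((cpu_model, py_version), 1.0)
    return raw_score * factor

print(corrected_score(17.3, "Intel Xeon E5-2650", "3.9"))
```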
The spectroscopy of higher-lying charmonium states, together with exotic mesons with masses above the $2m_D$ open-charm threshold, has been full of surprises and remains poorly understood [1]. It is a good testing ground for theories of the strong interaction, including QCD in both the perturbative and non-perturbative regimes, lattice QCD, potential models, and phenomenological models. Experiments with antiproton-proton annihilation, proton-proton and proton-nucleus collisions are well suited to a comprehensive spectroscopy program, in particular the spectroscopy of charmonium and exotic states.
The currently most compelling theoretical descriptions of the mysterious XYZ mesons attribute to them a hybrid structure with a tightly bound diquark [2] or tetraquark core [3-5] that couples strongly to S-wave molecule-like configurations. In this picture, the production of XYZ states in high-energy hadron collisions and their decays into light hadrons plus charmonium final states proceed via the core component of the meson, while decays to pairs of open-charm mesons proceed via the molecular component.
These ideas have been applied with some success to the XYZ states [2], where a detailed calculation finds a core component that is present only about 5% of the time, with the molecular component accounting for the rest. In this picture these states are composed of three rather disparate components: a small charmonium-like core with $r_{\rm rms} < 1$ fm, a larger component with $r_{\rm rms} \approx 1.5$ fm, and a dominant component with a huge spatial extent, $r_{\rm rms} \approx 9$ fm.
In the hybrid scheme, XYZ mesons are produced in high-energy proton-nucleus collisions via their compact ($r_{\rm rms} < 1$ fm) charmonium-like structure, which rapidly mixes, in a time $t \sim \hbar/\delta M$, into a huge and fragile, mostly molecular, structure. Here $\delta M$ is the difference between the XYZ mass and that of the nearest-mass pure charmonium core state, which we take to be the $\chi_{c1}(2P)$, expected to lie about 20-30 MeV above $M_{X(3872)}$ [6, 7]. In this case the mixing time, $c\tau_{\rm mix} \approx 5$-$10$ fm, is much shorter than the lifetime of the X(3872), which is $c\tau_{X(3872)} > 150$ fm [8].
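A one-line check of the quoted mixing time, using $\hbar c \approx 197$ MeV fm:

$$ c\tau_{\rm mix} \sim \frac{\hbar c}{\delta M} \approx \frac{197\ \text{MeV fm}}{20\text{-}30\ \text{MeV}} \approx 7\text{-}10\ \text{fm}, $$

consistent with the 5-10 fm range stated above.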
Near-threshold production experiments in the $\sqrt{s_{pN}} \sim 8$ GeV energy range, with proton-proton and proton-nucleus collisions at $\sqrt{s_{pN}}$ up to 26 GeV and luminosity up to $10^{32}$ cm$^{-2}$s$^{-1}$, planned at NICA, may be well suited to test this picture for the X(3872) and other exotic XYZ mesons [9]. Their current experimental status, together with hidden-charm tetraquark candidates and present simulations of what might be expected from the A-dependence of XYZ mesons in proton-proton and proton-nucleus collisions, is summarized.
References
[1] S. Olsen, Front. Phys. 10, 101401 (2015)
[2] S. Takeuchi, K. Shimizu, M. Takizawa, Prog. Theor. Exp. Phys. 2015, 079203 (2015)
[3] A. Esposito, A. Pilloni, A.D. Polosa, arXiv:1603.07667 [hep-ph]
[4] M.Yu. Barabanov, A.S. Vodopyanov, S.L. Olsen, Phys. Atom. Nucl. 79, 126 (2016)
[5] M. Barabanov, A. Vodopyanov, Phys. Atom. Nucl. 84, 373 (2021)
[6] S. Godfrey, N. Isgur, Phys. Rev. D 32, 189 (1985)
[7] K. Olive et al. (Particle Data Group), Chin. Phys. C 38, 090001 (2014)
[8] The width of the X(3872) is experimentally constrained to $\Gamma_{X(3872)} < 1.2$ MeV (90% CL); S.-K. Choi et al. (Belle Collaboration), Phys. Rev. D 84, 052004 (2011)
[9] M. Barabanov, J. Segovia, C.D. Roberts, E. Santopinto et al., Prog. Part. Nucl. Phys. 116, 103835 (2021)
A search for the direct production of pairs of charginos, each decaying into the lightest neutralino (LSP) and a W boson which in turn decays leptonically, is presented. Previous LHC Run 2 analyses have already excluded at 95% CL the existence of charginos and neutralinos in regions where their mass difference is much greater than the W boson mass. The aim of the current search is to explore the so-called compressed region, where the chargino-neutralino mass difference is of the order of the W boson mass. The analysis strategy uses machine-learning techniques to improve the rejection of the Standard Model background. The analysis targets events with two leptons, missing transverse energy and no hadronic activity in the final state, and uses pp collision data at 13 TeV collected by the ATLAS experiment during Run 2 of the LHC, corresponding to an integrated luminosity of 139 fb$^{-1}$.
We review the status of anomalous triple gauge couplings in the light of the recent $(g-2)_\mu$ measurement at FNAL, the new lattice theory result of $(g-2)_\mu$ and the updated measurements of several $B$-decay modes. In the framework of SMEFT, three bosonic dimension-6 operators are invoked to parametrize physics beyond the Standard Model and their contributions to such low-energy observables computed. Constraints on the corresponding Wilson coefficients are then derived from fits to the current experimental bounds on the observables and compared with the most stringent ones available from the 13 TeV LHC data in the $W^+ W^-$ and $W^\pm Z$ production channels.
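For orientation, a minimal sketch of such an operator set, assuming the commonly used CP-even bosonic basis (the talk's exact conventions may differ):

$$ \mathcal{O}_{WWW} = \mathrm{Tr}\!\left[ W_{\mu\nu} W^{\nu\rho} W_{\rho}^{\ \mu} \right], \qquad \mathcal{O}_{W} = (D_\mu H)^\dagger W^{\mu\nu} (D_\nu H), \qquad \mathcal{O}_{B} = (D_\mu H)^\dagger B^{\mu\nu} (D_\nu H), $$

each entering the Lagrangian as $(C_i/\Lambda^2)\,\mathcal{O}_i$, so that the fits described above constrain the Wilson coefficients $C_i$ at a given new-physics scale $\Lambda$.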
The ATLAS Open Data project aims to deliver open-access resources for education and outreach in High Energy Physics using real data recorded by the ATLAS detector. The project has so far released a substantial amount of data from 8 TeV and 13 TeV collisions in an easily accessible format, supported by dedicated software and documentation that allow fruitful use by users at a range of experience levels. To maximise the value of the data, software, and documentation resources provided, ATLAS has developed initiatives that promote stakeholder engagement in the creation of these materials through on-site and remote training schemes such as high-school work experience and summer school programs, university projects and PhD qualification tasks. We present examples of how multiple training programs inside and outside CERN have helped, and continue to help, develop the ATLAS Open Data project, together with lessons learnt, impacts, and future goals.
The study of event-by-event mean transverse momentum ($p_{\rm T}$) fluctuations is a useful tool for understanding the dynamics of the system produced in ultrarelativistic heavy-ion collisions. The measurement of higher-order fluctuations of the mean $p_{\rm T}$ can help probe the hydrodynamic behavior of the system and is considered a direct way of observing initial-state fluctuations. It can also be sensitive to the early-time evolution of the produced quark-gluon plasma. We present the first measurement of three- and four-particle $p_{\rm T}$ correlators and their intensive ratios, related to the skewness and kurtosis of the event-by-event mean-$p_{\rm T}$ distribution, as a function of the average charged-particle density in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV and Xe-Xe collisions at $\sqrt{s_{\rm NN}}$ = 5.44 TeV, using data recorded by the ALICE detector. As a baseline study, the analysis is also performed in pp collisions at $\sqrt{s}$ = 5.02 TeV. The measurements are compared to corresponding results from the STAR experiment at lower collision energies and to different theoretical model predictions.
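As a simple illustration of the observable (not the multi-particle correlator estimators actually used in the measurement, which correct for finite-multiplicity effects), one can build the event-by-event mean-$p_{\rm T}$ distribution from toy events and compute its skewness and kurtosis:

```python
# Illustrative only: event-wise mean pT and the skewness/kurtosis of its
# distribution, on toy data. Event sizes and the pT spectrum are invented.
import numpy as np

rng = np.random.default_rng(seed=1)
events = [rng.exponential(scale=0.5, size=rng.integers(50, 500))
          for _ in range(10_000)]          # per-event track pT lists (GeV)

mean_pt = np.array([ev.mean() for ev in events])
delta = mean_pt - mean_pt.mean()
skewness = (delta**3).mean() / (delta**2).mean()**1.5
kurtosis = (delta**4).mean() / (delta**2).mean()**2
print(f"skewness={skewness:.3f}  kurtosis={kurtosis:.3f}")
```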
A search for charged scalars can provide clean, rare, and direct indications of New Physics (NP) beyond the Standard Model. In view of this, we investigate one of the most important channels in the Type-I 2HDM, assuming h(H) mimics the observed resonance at $\sim 125$ GeV; we consider the practicality of associated charged Higgs production through the $pp \to H^\pm W^\mp$ channel, which could pose further substantial challenges for the LHC experiments. We perform an extensive parameter scan in the lower part of the scalar mass spectrum, taking into account the latest theoretical and experimental constraints. Our study shows that the signal can reach the level of several fb in the viable parameter space, notably in the $2W + \tau\tau$ channel, which can be a clean signal for NP. Finally, we perform a detailed analysis of the significance as a function of luminosity.
In recent years, outreach activities aimed at involving the non-expert community have acquired great importance among the three university missions.
In this context, the "Physics4Teenagers" outreach group of the University of Pavia Physics Department, in northern Italy, designed the "PER me si va ne la fisica recente" experience.
In physics promotion, our major target is usually high school students, with a particular focus on the choice of their future studies. As a matter of fact, for more than ten years we have been organizing the "TenDaysPhysics4Teenagers" summer school for an audience of about thirty teenagers from different cities. About 30% of those former attendees are now enrolled as students at our Department, confirming the effectiveness of our method.
With this in mind, we decided to exploit a new format: the educational escape room. Building on the success of recreational escape rooms, this format has acquired great visibility in recent years, combining entertainment with learning goals. Moreover, it allows for the development of soft skills such as collaboration and critical thinking through hands-on activities.
This experience was designed for the "Festival della Scienza" in Genova at the end of October 2021. The keyword of that edition was "maps"; we therefore created a journey through the history of particle physics, from the atomic theory of Democritus to the discovery of the Higgs boson, which completes the Standard Model. Furthermore, we pushed the boundaries of our map towards the questions that remain unsolved in this theory, such as the problem of dark matter, neutrino masses and oscillations, and the unification of forces. The choice of topic was driven by the fact that nuclear and particle physics has recently been introduced in the ministerial guidelines for high school teaching, and we strongly believe that such an activity can lead to interesting insights.
Moreover, since the 700th anniversary of Dante's death was celebrated in 2021, we shaped this journey like the one in the "Divina Commedia". Democritus, performed by us, plays the role of Virgilio, guiding the audience through the most important steps of particle physics history in the first room. Then, unable to answer the questions left unsolved by the Standard Model, he gives way to a modern and scientific version of Beatrice, curiosity, who in a second room tries to shed light on the open problems.
All the puzzles proposed were crafted by hand, exploiting, when possible, recycled and inexpensive materials.
Among these, the following are worth mentioning: the reproduction of Rutherford's experiment, the analysis of the behavior of cosmic-ray particles in a magnetic field, and the theoretical hypothesis of the neutrino's existence.
In the first, users were asked to shoot little rubber bullets at gold-coloured tissue paper, reproducing the experimental results obtained by Rutherford and deducing the planetary model of the atom.
In the second one, the participants could play with a control panel changing parameters, such as mass and charge, to interactively simulate the trajectories of particles in a magnetic field. This allowed the explanation of muon, pion and positron discovery.
Finally, the users were guided through the theoretical prediction of the neutrinos by applying some simple conservation laws (charge, baryonic and leptonic number), just as in a typical escape room puzzle.
During the festival, we hosted around one thousand participants, both high school classes and groups of non-students of different ages.
The experience was then installed in the "Liceo Respighi" high school in Piacenza for one week in March 2022, reaching around 400 students. In the coming months, it will be proposed at the "Liceo Copernico" in Pavia and within the European Researchers' Night program.
At the end of the escape room, we asked the participants for feedback and suggestions through a satisfaction questionnaire. The good results on both occasions confirm the suitability of the format and the effectiveness of the friendly, informal approach.
With this contribution we will discuss details of the activities and results, mentioning future installations and possible improvements.
Non-standard neutrino properties can modify the picture of neutrino decoupling from the cosmic plasma. We have calculated the impact on the contribution of neutrinos to the cosmological radiation density, parameterized via the effective number of neutrinos ($N_{\rm eff}$), for some particular cases, including the presence of neutrino non-standard interactions (NSI) with electrons or mixing with a fourth, sterile neutrino state. We show the corresponding bounds on these scenarios from present analyses and future measurements of $N_{\rm eff}$. For instance, we find that future cosmological data would provide competitive and complementary constraints on some of the NSI parameters and their combinations (https://doi.org/10.1016/j.physletb.2021.136508).
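For context, $N_{\rm eff}$ is defined through the total radiation density after electron-positron annihilation,

$$ \rho_{\rm rad} = \rho_\gamma \left[ 1 + \frac{7}{8} \left( \frac{4}{11} \right)^{4/3} N_{\rm eff} \right], $$

so that three standard neutrinos with instantaneous decoupling would give $N_{\rm eff} = 3$; the precise Standard Model value is $N_{\rm eff} \simeq 3.044$, and the scenarios above shift it away from this reference.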
There is an interesting connection between early universe cosmology and searches for long-lived particles (LLPs) at the LHC. Light particles can be produced via freeze-in and act as dark radiation, contributing to the effective number of relativistic species $N_\text{eff}$. The parameter space of interest for future CMB missions points to LLP decay lengths in the mm to cm range. These decay lengths lie at the boundary between prompt and displaced signatures at the LHC and can be comprehensively explored only by combining searches for both. We consider a model where the LLP decays into a charged lepton and a (nearly) massless invisible particle. By reinterpreting searches for promptly decaying sleptons and for displaced leptons at both ATLAS and CMS we can then directly compare LHC exclusions with cosmological observables. Our results show how in this model the target value of CMB-S4 is already excluded by current LHC searches.
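The link between the two regimes is simply the lab-frame decay length: a particle of proper lifetime $\tau$ and boost $\beta\gamma$ travels on average

$$ \langle L \rangle = \beta\gamma \, c\tau, $$

so $c\tau$ in the mm-to-cm range yields decays inside the inner tracker, straddling the prompt/displaced boundary mentioned above.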
An unexpected explanation for neutrino mass, Dark Matter (DM) and Dark Energy (DE) from genuine Quantum Chromodynamics (QCD) of the Standard Model (SM) is proposed here, while the strong CP problem is resolved without any need for fundamental axions. We suggest that the neutrino sector can exist in two phases in the Universe: i) relativistic neutrinos belonging to the SM; ii) a non-relativistic condensate of Majorana neutrinos. The condensate of neutrinos can provide an attractive alternative candidate for the DM, being in a cold coherent state. We will explain how neutrinos, combining into Cooper pairs, can form collective low-energy degrees of freedom, hence providing a strongly motivated candidate for the QCD (composite) axion. Besides, we propose a novel mechanism for the production of gravitational waves in the early Universe that originates from the relaxation processes induced by the QCD phase transition. While the energy density of the quark-gluon mean field decays monotonically in real time, its pressure undergoes a series of violent oscillations at the characteristic QCD time scales that generate a primordial multi-peaked gravitational-wave signal in the radio-frequency domain. The signal is an echo of the QCD phase transition that is accessible by planned measurements at the FAST and SKA telescopes.
The ANTARES high-energy neutrino telescope operated in its full configuration from May 2008 to February 2022, with its detector lines anchored 2500 m below the surface of the Mediterranean Sea. The location of ANTARES allowed for an advantageous view of the Southern Sky through neutrino-induced upgoing muons, with a geometrical configuration optimized for neutrinos of Galactic origin with energies below 100 TeV. ANTARES searched for cosmic neutrinos using different methods, from looking for a directional excess from a pre-selected list of 121 astrophysical candidates, to a scan over its visible sky without any assumption about the source position, and to a hunt for an excess of high-energy events over the atmospheric background. Moreover, ANTARES has been involved in a rich multi-messenger program to search for neutrinos in coincidence with promising transient astrophysical events, as well as triggering electromagnetic follow-up observations of interesting candidates by sending alerts to the astronomical community. Finally, ANTARES has studied atmospheric muon-neutrino disappearance due to neutrino oscillations, and has set constraints on 3+1 neutrino models. In this talk, results using almost the full ANTARES data sample will be presented, ranging from searches for cosmic neutrinos to multi-messenger analyses and the study of neutrino oscillations.
The SN1987A core-collapse supernova was the first extragalactic transient source observed through neutrinos. The detection of the 25 associated neutrinos by the Super-Kamiokande, IMB and Baksan experiments marked the beginning of neutrino astronomy. Since then, neutrino telescopes have not been able to make another such observation due to the remoteness of the sources. It is therefore essential to optimize the detection channels of sensitive detectors in advance of an upcoming galactic core-collapse supernova. Neutrino observations would, in particular, provide first-hand information about the core-collapse mechanism as well as the behavior of particles in dense environments. In this contribution, we discuss how the uniquely complex structure of the optical modules in the KM3NeT neutrino experiment would make it possible to observe supernova neutrinos. We present KM3NeT's sensitivity to galactic supernovae and describe its associated online alert system for multi-messenger studies. Finally, we discuss KM3NeT's ability to infer the supernova evolution from the time profile of the associated neutrino emission.
Baikal-GVD is a large underwater neutrino detector currently under construction in Lake Baikal, Russia. With an instrumented volume already approaching 0.4 km$^3$ and a sub-degree angular resolution, Baikal-GVD is becoming one of the key players in neutrino astronomy. We review the current status of Baikal-GVD and recent results obtained with the partially complete instrument.
The Pierre Auger Observatory is the world's largest ultra-high-energy cosmic ray observatory. Its hybrid detection technique combines the observation of the longitudinal development of extensive air showers and the lateral distribution of particles arriving at the ground. In this contribution, a review of the latest results on hadronic interactions using measurements from the Pierre Auger Observatory is given. In particular, we report on the self-consistency tests of the post-LHC hadronic models using measurements of the depth of the shower maximum and the main features of the muon component at the ground. The tensions between the model predictions and the data considering different shower observables are reviewed.
In scenarios beyond the Standard Model (BSM) characterised by charged (W') or neutral (Z') massive gauge bosons with large width, resonant mass searches are not very effective, so that one has to exploit the tails of the mass distributions measured at the Large Hadron Collider (LHC). In this case, the LHC sensitivity to new physics signals is influenced significantly by systematic uncertainties associated with the Parton Distribution Functions (PDF), particularly in the valence quark sector relevant for the multi-TeV mass region. As a BSM framework featuring such conditions, we consider the 4-Dimensional Composite Higgs Model (4DCHM), in which multiple W' and Z' broad resonances are present, with strongly correlated properties. By using the QCD tool xFitter, we study the implications on W' and Z' searches in Drell-Yan (DY) lepton decay channels that follow from the reduction of PDF uncertainties obtained through combining high-statistics precision measurements of DY lepton-charge and forward-backward asymmetries. We find that the sensitivity to the BSM states is greatly increased with respect to the case of base PDF sets, thereby enabling one to set more stringent limits on (or indeed discover) such new particles, both independently and in correlated searches.
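For reference, the asymmetries used in such fits are cross-section ratios in which many systematic uncertainties cancel; the forward-backward asymmetry, for instance, has the standard form

$$ A_{FB} = \frac{\sigma_F - \sigma_B}{\sigma_F + \sigma_B}, $$

where $\sigma_F$ ($\sigma_B$) is the cross section with the lepton emitted forward (backward) relative to a chosen axis in the dilepton rest frame.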
Many theories beyond the Standard Model predict new phenomena, such as Z' or W' bosons, KK gravitons, or heavy leptons, in final states with isolated, high-$p_{\rm T}$ leptons (e/mu/tau) or photons. Searches for new physics with such signatures, produced either resonantly or non-resonantly, are performed using the ATLAS experiment at the LHC. This includes a novel search that exploits the lepton-charge asymmetry in events with an electron and muon pair. The most recent 13 TeV pp results will be reported.
Many new physics models, such as the Sequential Standard Model, Grand Unified Theories, models of extra dimensions, or models with e.g. leptoquarks or vector-like leptons, predict heavy mediators at the TeV energy scale. We present recent results of such searches in leptonic final states obtained using data recorded by the CMS experiment at Run-II of the LHC.
Many theories beyond the Standard Model predict new phenomena giving rise to multijet final states. These jets could originate from the decay of a heavy resonance into SM quarks or gluons, or from more complicated decay chains involving additional resonances that decay e.g. into leptons. Also of interest are resonant and non-resonant hadronic final states with jets originating from a dark sector, giving rise to a diverse phenomenology depending on the interactions between the dark sector and SM particles. This talk presents the latest 13 TeV ATLAS results.
Many new physics models, e.g., compositeness, extra dimensions, extended Higgs sectors, supersymmetric theories, and dark sector extensions, are expected to manifest themselves in the final states with hadronic jets. This talk presents searches in CMS for new phenomena in the final states that include jets, focusing on the recent results obtained using the full Run-II data-set collected at the LHC.
Leptoquarks are predicted by many new physics theories to describe the similarities between the lepton and quark sectors of the Standard Model and offer an attractive potential explanation for the lepton flavour anomalies observed at LHCb and flavour factories. The ATLAS experiment has a broad program of direct searches for leptoquarks, coupling to the first-, second- or third-generation particles. This talk will present the most recent 13 TeV results on the searches for leptoquarks with the ATLAS detector, covering flavour-diagonal and cross-generational final states.
Many new physics models predict low mass resonances. However, the kinematic thresholds used in the nominal data taking program of CMS pose a difficulty in kinematically accessing these resonances. To overcome this problem, CMS has implemented Data Scouting Techniques that allow trigger thresholds to be lowered by saving a very limited amount of trigger-level event information offline. In this talk, we present the searches that used this data scouting technique in the LHC Run-II data to set some of the strongest constraints to date for low mass resonances in prompt and long-lived signatures.
Analysis workflows commonly used at the LHC experiments do not scale to the requirements of the HL-LHC. To address this challenge, a rich research and development program is ongoing, proposing new tools, techniques, and approaches. The IRIS-HEP software institute and its partners are bringing together many of these developments and putting them to the test in a project called the "Analysis Grand Challenge" (AGC).
The AGC aims to demonstrate how novel workflows can scale to analysis needs at the HL-LHC. It is based around a physics analysis using publicly available Open Data and includes the relevant technical requirements and features that analysers at the LHC need. The analysis demonstration developed in this context is heavily based on tools from the HEP Python ecosystem and makes use of modern analysis facilities.
This talk will review the state of the ecosystem, describe the status of the AGC, and showcase how the envisioned workflows look in practice.
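To give a flavour of what such Python-ecosystem workflows look like, here is a minimal columnar-analysis sketch using uproot, awkward-array and hist; the file name and branch name are hypothetical placeholders, not AGC specifics.

```python
# Hedged sketch of a columnar analysis step with HEP Python tools.
# "events.root" and the "Jet_pt" branch are illustrative placeholders.
import uproot
import awkward as ak
import hist

tree = uproot.open("events.root")["Events"]            # open one TTree
arrays = tree.arrays(["Jet_pt"], entry_stop=100_000)   # read one branch

# Declarative histogram of all jet pT values.
h = hist.Hist.new.Reg(50, 0, 500, name="pt", label="Jet pT [GeV]").Double()
h.fill(pt=ak.to_numpy(ak.flatten(arrays["Jet_pt"])))
print(h)
```

The same columnar idiom scales from a laptop to distributed analysis facilities, which is precisely the dimension the AGC exercises.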
A shared, common event data model, EDM4hep, is an integral part of the Key4hep project. EDM4hep aims to be usable by all future collider projects, despite their different collision environments and the different detector technologies under discussion. This constitutes a major challenge that EDM4hep addresses by using podio, a C++ toolkit for the creation and handling of event data models, developed in the context of the AIDA R&D program. This approach allows quick prototyping of new data types and provides a streamlined framework for updates. After presenting an overview of the basic features of EDM4hep and podio, we will discuss the current experience with an initial version of EDM4hep in different physics studies. Additionally, we will present the planned developments that are necessary for a first stable version of EDM4hep, addressing in particular backward-compatibility aspects and schema evolution. We will conclude with an outlook on future development directions beyond this first stable version.
Detector studies for future experiments rely on advanced software tools to estimate performance and optimize their design and technology choices. The Key4hep project provides a turnkey solution for the full experiment life-cycle based on established community tools such as ROOT, Geant4, DD4hep, Gaudi, podio and spack. Members of the CEPC, CLIC, EIC, FCC, and ILC communities have joined to develop this framework, and have merged, or are in the process of merging, their respective software environments into the Key4hep stack. The software stack contains the necessary ingredients for event generation, detector simulation with Geant4, reconstruction algorithms, and analysis. Ongoing developments include the integration of the ACTS toolkit for track reconstruction, the PandoraPFA toolkit for clustering and particle flow, and the CLUE package for calorimeter clustering in high-density environments. This presentation will give an overview of the Key4hep project and highlight use cases from the involved communities, showcasing the synergy obtained through the adoption of this common venture.
Modern HEP experiments invest heavily in software. The success of physics discoveries hinges on software quality for data collection, processing and analysis, and on the ability of users to learn and utilize it quickly. While each experiment has its own flavor of software, it is mostly derived from tools in the common domain. However, most users learn software skills only after joining a research program; individual universities do not uniformly provide software training to students prior to beginning HEP research, and embarking on a HEP-specific path presents its own experiment-specific software environment challenges. Given the international nature of HEP experiments, users arrive with varied levels of preparation. The HEP Software Foundation and its partners have together developed a software training program to respond to these challenges and to ensure the long-term sustainability of the HEP research software ecosystem. The open-source, introductory HEP software curriculum and several software modules on techniques and methods for computing and data science have enabled users to jump-start research and contribute to the field. The common effort on training across HEP has helped build a strong sense of community that also includes HEP theory, nuclear physics and computer science. The training program has established a platform that can scale and be sustained over time and that facilitates inclusiveness. It provides intellectual capital and transferable skills that are becoming increasingly important for careers in software and computing, both inside and outside HEP. This contribution reports on the structure and work of these efforts via the highly visible medium of the ICHEP conference.
In recent years, digital object management practices to support findability, accessibility, interoperability, and reusability (FAIR) have begun to be adopted across a number of data-intensive scientific disciplines. These digital objects include datasets, AI models, software, notebooks, workflows, documentation, etc. With the collective dataset at the Large Hadron Collider scheduled to reach the zettabyte scale by the end of 2032, the experimental particle physics community is looking at unprecedented data management challenges. It is expected that these grand challenges may be addressed by creating end-to-end AI frameworks that combine FAIR and AI-ready datasets, advances in AI, modern computing environments, and scientific data infrastructure. In this work, the FAIR4HEP collaboration explores the interpretation of FAIR principles in the context of data and AI models for experimental high energy physics research. We investigate metrics to quantify the FAIRness of experimental datasets and AI models, and provide open source notebooks to guide new users on the use of FAIR principles in practice.
The LHCb experiment is resuming operation in Run 3 after a major upgrade. The upgrade includes new software exploiting modern technologies throughout data processing and in the underlying LHCb core software framework. The LHCb simulation framework, Gauss, had to be adapted accordingly, with the additional constraint that it also relies on external simulation libraries. At the same time, a decision was taken to consolidate the simulation software and extract all generic components into a new, experiment-independent core framework called Gaussino. This new core simulation framework allows easier prototyping and testing of new technologies where only the core elements are affected. It relies on Gaudi for general functionality and on the Geant4 toolkit for particle transport, combining their specific multi-threaded approaches. A fast-simulation interface replacing the Geant4 physics processes with a palette of fast-simulation models for a given sub-detector is the most recent addition. Geometry layouts can be provided through DD4hep or experiment-specific software. A built-in mechanism to define simple volumes at configuration time and ease the development cycle is also available. A plug-and-play mechanism for modeling collisions and interfacing generators such as Pythia and EvtGen is provided. We will describe the structure and functionality of Gaussino and how the new version of Gauss exploits the Gaussino infrastructure to provide what is required for the simulation(s) of the LHCb experiment.
The CRESST (Cryogenic Rare Event Search with Superconducting Thermometers) experiment explores with high sensitivity the parameter space of low-mass DM candidates, being the pathfinder in the sub-GeV/c$^2$ mass range. CRESST employs different high-purity crystals and operates them at mK temperatures as cryogenic calorimeters. The flexibility of employing detectors made of different materials, together with the advanced performance of the thermal sensors, allows CRESST-III to set the most stringent limits on spin-dependent and spin-independent low-mass DM interactions. In this contribution, the current status of the CRESST-III experiment, together with the most recent dark matter results and findings, will be presented. Perspectives for the next phase of the experiment will also be discussed.
SENSEI (Sub-Electron Noise Skipper Experimental Instrument) is pioneering the development of silicon CCDs with sub-electron charge resolution for low-threshold direct detection of dark matter.
These "skipper CCDs" are the first detectors capable of resolving single electrons in each of millions of pixels, and the low thresholds possible with this technology give SENSEI world-leading sensitivity to sub-GeV dark matter.
The SENSEI Skipper-CCDs have measured the lowest rates in silicon detectors of events containing one, two, three, or four electrons.
This results in world-leading sensitivity for a large range of dark matter masses, and significant improvement is expected with the full-scale SENSEI experiment at SNOLAB.
The DAMIC-M (DArk Matter In CCDs at Modane) experiment will use fully depleted n-type Si skipper CCDs with a total target mass of about one kilogram. Four individual silicon plates of 6k x 1.5k pixels will be placed in each holder, making a total of around 200 CCDs. The skipper amplifier readout allows several non-destructive measurements of the individual pixel charge, reducing the readout noise to the single-electron level and thus pushing the energy threshold to the eV scale. With a significantly larger exposure and lower energy threshold, DAMIC-M will advance by several orders of magnitude the exploration of the dark matter particle hypothesis, in particular of candidates belonging to the so-called "hidden sector".
A prototype, the Low Background Chamber (LBC), with 25 g of low-background skipper CCDs, has recently been installed at the LSM and is currently taking data. We will report on the status of the DAMIC-M experiment, on CCD performance and calibration, and on first results from LBC commissioning data.
The ANDROMeDa (Aligned Nanotube Detector for Research On MeV Darkmatter) project aims to develop a novel Dark Matter (DM) detector based on carbon nanotubes: the Dark-PMT. The detector is designed to be sensitive to DM particles with masses between 1 MeV and 1 GeV. The detection scheme is based on DM-electron scattering inside a target made of vertically aligned carbon nanotubes. Carbon nanotubes are made of wrapped sheets of graphene, a 2-dimensional material: therefore, if enough energy is transferred to overcome the carbon work function, the electrons are emitted directly into the intra-tube vacuum. Vertically aligned carbon nanotubes have reduced density in the direction of the tube axes, so the scattered electrons are expected to leave the target without being reabsorbed only if their momentum makes a small enough angle with that direction, which is the case when the tubes are parallel to the DM wind. This grants the detector directional sensitivity, a unique feature in this DM mass range. We will report on the construction of the first Dark-PMT prototype, on the establishment of a state-of-the-art carbon-nanotube growth facility in Rome, and on the characterization of the nanotubes with XPS and angle-resolved UPS spectroscopy performed at Sapienza University, Roma Tre University, and synchrotron facilities. ANDROMeDa was recently awarded a 1 M€ PRIN2020 grant with which we aim, over the course of the next three years, to construct the first large-area-cathode Dark-PMT prototype with a target of 10 mg of carbon. The main focus of the R&D will be the development of a superior nanotube synthesis capable of producing optimal nanotubes for use as a DM target. In particular, the nanotubes will have to exhibit a high degree of parallelism at the nanoscale, in order to minimize electron re-absorption.
The NEWS-G collaboration is searching for light dark matter using spherical proportional counters. Access to the mass range from 50 MeV to 10 GeV is enabled by the combination of low energy threshold, light gaseous targets (H, He, Ne), and highly radio-pure detector construction. Initial NEWS-G results obtained with SEDINE, a 60 cm in diameter spherical proportional counter operating at the Laboratoire Souterrain de Modane (France), excluded for the first time WIMP-like dark matter candidates down to masses of 0.5 GeV.
The construction of a new spherical proportional counter, 140 cm in diameter, built at LSM from 4N copper with a 500 µm electroplated inner layer, will be presented, along with its installation and commissioning at SNOLAB (Canada), where it is scheduled to collect data with improved shielding later this year.
Before the detector was shipped to Canada, a short data-taking campaign was undertaken at LSM using methane. New physics results from this run, providing world-leading spin-dependent sensitivity, will be presented.
Furthermore, the design and construction of ECUME, a 140 cm in diameter spherical proportional counter fully electroformed underground will be discussed. The potential to achieve sensitivity reaching the neutrino floor in light Dark Matter searches with a next generation detector is also summarised.
Diamond sensors (DS) are widely used as solid-state particle detectors, beam-loss monitors, and dosimeters in high-radiation environments, e.g., particle colliders. We have calibrated our DS with steady $\beta$- and X-radiation, spanning dose rates in the range 0.1-100 mGy/s. Here, we report the first systematic characterization of the transient response of DS to collimated, sub-picosecond, 1 GeV electron bunches. These bunches, with charges ranging from tens to hundreds of pC and sizes from tens of microns to millimeters, are provided by the FERMI electron linac in Trieste, Italy. The high density of charge carriers generated by ionization in the diamond bulk causes a transient modification of the electrical properties of the DS (e.g., its resistance), which in turn affects the signal shape. We have developed a two-step numerical approach, simulating the effects on the signal of both the evolution of the charge-carrier density in the diamond bulk and the changes in the circuit parameters. This approach accounts for the features observed in our experimental results to a great extent.
BULLKID is an R&D project on a new cryogenic particle detector to search for low-energy processes such as low-mass dark matter and coherent neutrino scattering off nuclei. The detector unit we are building consists of an array of 60 silicon absorbers sensed by phonon-mediated, microwave-multiplexed Kinetic Inductance Detectors (KIDs), with an energy resolution on nuclear recoils of around 100 eV and a total mass of 20 g. The single detector unit is engineered to ensure straightforward scalability to a future kg-scale experiment. In this talk we will describe this innovative detector concept and the recent encouraging achievements of the project following the operation of the first prototypes.
The proposed high-luminosity, high-energy Electron-Ion Collider (EIC) will provide a clean environment to precisely study several fundamental questions in high-energy and nuclear physics. A low-material-budget silicon vertex/tracking detector with fine spatial resolution (hit spatial resolution < 10 $\mu$m) is critical to carry out heavy-flavor hadron and jet measurements at the future EIC. Fast timing capability (< 10 ns) helps suppress backgrounds from neighboring collisions. We will present the design of a proposed Forward Silicon Tracking (FST) detector with pseudorapidity coverage from 1.2 to 3.5, which can provide both fine spatial and temporal resolution for the EIC. This detector geometry has been implemented in the GEANT4 simulation in integration with the selected EIC detector: ECCE. The integrated ECCE tracking performance meets the EIC physics requirements and enables a series of high-precision heavy-flavor measurements, especially in the forward pseudorapidity region. Simulation studies for both the FST detector and heavy-flavor physics developments will be presented. The Low Gain Avalanche Diode (LGAD), the AC-coupled LGAD (AC-LGAD) and the Depleted Monolithic Active Pixel Sensor (MALTA), which are the EIC silicon detector technology candidates, are under detector R&D. A series of bench tests has been performed for both single prototype sensors and a telescope setup. Progress and results from the ongoing detector R&D for LGAD, AC-LGAD and MALTA will be presented as well.
Future vertex detectors operating in colliders at very high instantaneous luminosity will face great challenges in event reconstruction due to the increase in track density. In particular the high-luminosity LHC phase, with the collider operating at $1.5\times10^{34}$ cm$^{-2}$s$^{-1}$, will pose strict requirements on subdetector capabilities. Concerning the LHCb Upgrade 2, 2000 tracks from 40 pp interactions will cross the vertex detector (VELO) at each bunch crossing. To guarantee good detector performance, the additional information of hit time stamping with an accuracy of at least 50 ps is needed. Several studies are looking for the best technology to achieve this level of timing precision, and a very promising option today is the 3D trench silicon pixel developed by the INFN TimeSPOT collaboration. This kind of sensor would allow the construction of a 4D tracker, capable of excellent resolution in both space and time. Two sensor batches were produced by Fondazione Bruno Kessler (FBK) in 2019 and 2021. The 3D trench silicon pixels have dimensions of 55 $\mu$m x 55 $\mu$m and are built on 150 $\mu$m-thick silicon: a 40 $\mu$m planar junction is delimited by two continuous bias junctions, with the readout electrode in between. This configuration allows shorter charge-carrier drift paths compared to planar sensors, hence inducing fast signals.
The latest beam test with the 3D trench sensors was performed at SPS/H8 in 2021. By means of low-noise custom electronics boards featuring a two-stage transimpedance amplifier, it was possible to test silicon pixels and strips made with the 3D trench technology. To extract the sensor time resolution, the crossing time of a particle was estimated using two 5.5 mm-thick quartz-window MCP-PMTs as time tags, with an accuracy of approximately 7 ps. The device under test, an additional 3D trench sensor, and the two MCP-PMTs were aligned with respect to the expected beam trajectory. In this way it was possible to acquire coincidences between the MCP-PMTs and one of the silicon sensors, allowing the study of the second sensor's properties in a trigger-less condition. The output waveforms were recorded and analyzed offline by means of dedicated software algorithms: the amplitude, time of arrival and efficiency of the signals were estimated, and the sensor response makes these devices suitable candidates for building a full tracking detector. Preliminary results show that the standard deviation of the core of the pixel time distribution is about 10 ps and that tilted sensors show an efficiency close to 100%.
In the search for excellent time resolution technologies, the 3D trench silicon pixels have proved to be a promising option for future vertex detectors operating at very high instantaneous luminosity.
Incom Inc. is producing a standard version of the Large Area Picosecond Photo-Detector (LAPPD), the world's largest commercially available planar-geometry photodetector based on microchannel plates (ALD-GCA-MCPs). It features a stacked chevron pair of "next generation" large-area MCPs produced by applying resistive and emissive Atomic Layer Deposition (ALD) coatings to glass capillary array (GCA) substrates, encapsulated in a borosilicate-glass hermetic package. These are available with 10 or 20 µm pores.
The fused-silica entry window of the detector is coated with a high-sensitivity semitransparent bi-alkali photocathode with a 373 cm$^2$ detection area.
Signals are read out either on microstrip anodes applied to the inside of the bottom anode plate, or via a capacitively coupled resistive anode. The "baseline" devices have demonstrated electron gains of 10$^7$, low dark-noise rates (15-30 Hz/cm$^2$), single-photoelectron (PE) timing resolution below 50 picoseconds RMS (electronics-limited), single-photoelectron spatial resolution under 1 mm RMS (also electronics-limited), high-QE (up to 31%) uniform bi-alkali photocathodes, and low sensitivity to magnetic fields up to 0.8 T (no tests at higher field have been performed at this time).
Production throughput of baseline tiles has increased from one per month in 2018, to four per month in 2020, and to six-to-eight per month in 2022. The tiles are made from either Borofloat glass or high-purity, high-density ceramic materials. Development is ongoing of a smaller-format, 10 cm x 10 cm High Rate Picosecond Photo-Detector (HRPPD) that, in addition to all of the attractive LAPPD features, would have a fully active area with no window support spacers (structural supports), even lower sensitivity to magnetic fields thanks to new 10 µm-pore MCPs, and sub-mm position resolution with a new anode design.
LAPPDs can be employed in particle-collider experiments (e.g. SoLID, the future EIC), neutrinoless double-beta decay experiments (e.g. THEIA), neutrino experiments (e.g. ANNIE, WATCHMAN, DUNE), medical imaging (PET), and nuclear non-proliferation applications. LAPPDs have recently been or will soon be tested at Fermilab, ANNIE, BNL, INFN, DESY and CERN, and have been sold to several countries in the EU as well as domestically in the USA.
We report on the recent progress in the production of the "baseline" LAPPD and discuss new developments.
Large Area Picosecond Photodetectors (LAPPDs) are micro-channel based photosensors featuring hundreds of square centimeters of sensitive area in a single package and timing resolution on the order of 50 ps for a single photon detection. However, LAPPDs currently do not exist in finely pixelated 2D readout configurations that in addition to the high-resolution timing would also provide the high spatial resolution required for Ring Imaging CHerenkov (RICH) detectors. One of the recent LAPPD models, the so-called Gen II LAPPD, provides the opportunity to overcome the lack of pixellation in a relatively straightforward way. The readout plane of Gen II LAPPD is external to the sealed detector itself. It is a conventional inexpensive capacitively coupled printed circuit board (PCB) that can be laid out in a custom application-specific way for 1D or 2D sensitive area pixellation. This allows for a much shorter readout-plane prototyping cycle and provides unprecedented flexibility in choosing an appropriate segmentation that then could be optimized for any detector needs in terms of pad size, orientation, and shape. We fully exploit this feature by designing and testing a variety of readout PCBs with conventional square pixels and interleaved anode designs.
Data acquired in the lab with LAPPD tile 97, provided by Incom, will be shown, using a laser system to probe the response of several interleaved and standard pixelated patterns. Results from a beam test at the Fermilab Test Beam Facility will be presented as well, including the world's first Cherenkov-ring measurement with this type of photosensor. 2D spatial resolutions well below 1 mm will be demonstrated for several pad configurations. Future plans, including a direct demonstration of e/$\pi$/K/p separation by a proximity-focusing RICH detector prototype with an LAPPD as the photosensor in a forthcoming beam test at Fermilab in summer 2022, will be discussed.
The onset of the COVID pandemic in 2020 stopped all outreach and educational activities with in-person participation. The ALICE collaboration soon adapted to the new situation imposed by lockdowns and other restrictions. The multitude of online tools and platforms available allowed us to continue reaching out to the public. In-person visits and talks were replaced by virtual visits and virtual talks, done with dedicated equipment and allowing remote audiences to see the experiment and interact with scientists. Masterclasses for high-school students were also adapted and were held online; web-based versions for the analysis programs were developed, making it easy for students at home to take part in this exciting hands-on activity and become scientists for a day. This new format made it possible to reach out to new audiences, both students and general public, who normally would not have the opportunity to travel and participate; it also motivated more colleagues to be involved in outreach. We will discuss how these online activities were implemented and the experience gained.
International Masterclasses (IMC) is a program to engage high school students in authentic one-day particle physics analysis experiences at universities and laboratories worldwide. The program is run under the aegis of the International Particle Physics Outreach Group (IPPOG). The COVID pandemic created challenges for IMC in trying to reach the participants and excite them about cutting-edge science. Meeting those challenges not only addressed immediate issues but also expanded the capacity of IMC to deliver its program in the future. The authors will share what they have learned and prospects for the future.
Physics, and especially Astroparticle and Nuclear Physics, is a complex subject to present to kids, because it is very far from their everyday life and from traditional school subjects. Nevertheless, to answer the demand for materials to support distance learning during the pandemic, the INFN Communications Office widened its activities with interactive live-streaming events and online workshops for students: an opportunity to involve kids between 8 and 13 years old in the discovery of Physics.
We fielded our first experience in spring 2020, with the project "Art&Science Kids", a digital version for children of the European project "Art&Science Across Italy", organized by INFN and CERN. Furthermore, for the digital edition of the "National Geographic Festival delle Scienze 2020" in Rome, the project "FisicaxKids" was developed as a series of live-streamed events where researchers met students. The success of these first digital activities for children was the driving force for exploring other formats to engage the young public in Particle Physics. In the context of a continuous toggle between in-person school and distance learning, during 2021 and 2022 the INFN Communications Office explored three different modalities to communicate science (interactive online workshops, Q&A live streams, and videos with animated illustrations), developing three different educational formats to provide teachers with new instruments to engage kids in Physics.
With the purpose of preserving the interaction and the hands-on experience of each participant, interactive online workshops dedicated to students aged 8 to 12 were designed for the online edition of science festivals in Genoa, Rome, and Bergamo.
Inspired by the experience of "FisicaxKids", two new online editions of dialogues with researchers were proposed to students aged 8-12, the first of which was followed by more than 230 classes per event (approximately 4000 students each). The chosen format was a live stream in which a scientist talked with students about his or her research activity, supported by a preliminary video, made with the chroma-key technique, introducing the topic through cartoons, animated illustrations, and engaging infographics. The topics spanned from neutrino and high-energy Physics to the Physics of the Universe, such as dark matter, gravitational waves, and antimatter.
The third format explored is represented by two series of short videos titled "La Fisica tra le Onde". The series aimed to bring Physics and the experimental method closer to children, by focusing on everyday experiences and by building a story told by its protagonists, three siblings (4, 9, and 11 years old) living on a boat with their family. The first series focuses on everyday Physics related to the concept of energy, while the second explores experimental cosmic-ray Physics, supported by a purpose-built detector assembled and sent to the boat by INFN. The second series added five live meetings between school classes and scientists working on experiments related to the videos. It engaged more than 60 classes per event, approximately 1200 students aged 10 to 13.
The evolution of these three modalities of science communication and of the online educational formats for kids will be presented, focusing on the different narrative structures and modes of interaction. Aspects of communication strategy, kids' and teachers' engagement, and evaluation methods will also be discussed. The results of the evaluation questionnaires will be shown, focusing on critical aspects of the digital formats presented and their communication procedures, in order to keep the discussion open on how they can be evaluated and aligned with school needs and expectations.
The Pierre Auger Observatory in Argentina, built to study the physics of the highest-energy cosmic rays, has a tremendous emotional appeal, given the Pampa Amarilla environment at 1400 m a.s.l. in Mendoza, coupled with the aura of a pioneering experiment that explores the Universe. Here, we present some of the Outreach, Education, and Communication programmes carried out within the international collaboration. Since early times the Observatory has been communicating its existence and purpose through the Visitor Center, where guided tours with supporting presentations are frequently offered to make the experiment, astroparticle physics, and scientific research in general accessible to the public. The limitation of on-site access during the ongoing pandemic fuelled the development of virtual tours and enlarged the international reach of communication with the public. Specific programmes for high-school students have been one element of the Observatory's upgrade: part of the detectors used in the upgrade has been built in cooperation with students, setting up the basis for a proficient citizen-science program. A fraction of the data processed by the collaboration is made available to the public in a usable format, and the collaboration has developed special masterclasses, providing the resources to analyse the public data set together with dedicated training sessions for teachers and students. Finally, a glance at the worldwide inspirational activities, based on the mentioned emotional appeal of the Observatory and conducted by members of the International Collaboration, will be given.
The Bruno Touschek Visitor Centre is a permanent exhibition dedicated to the evolution of particle physics, playing a central role in the popularization of science initiatives conducted at INFN Frascati National Laboratory (LNF). The centre was conceived as a public engagement hub to involve people with the main discoveries and the latest developments in technology and research fields investigated by INFN. Since its opening in 2018, it has been the scene where researchers, citizens, students, teachers, policy makers and different stakeholders share the importance of the evolution of science and its applications.
The itinerary presents the history of accelerator machines, from AdA, the first storage ring for matter and antimatter, designed and built in Frascati, to future perspectives in particle acceleration, together with the development of particle detectors and the applications of research- and technology-based outcomes in everyday life. The exhibition features instruments, interactive exhibits and immersive elements that will be described in this contribution. The centre is part of the LNF diffused museum, which includes a Cockcroft-Walton accelerator, a section of the Adone storage ring, and the KLOE experiment, lately enriched by a digital installation based on video mapping, as well as NAUTILUS, a gravitational-wave detector.
Close to the exhibition hall, a laboratory named EduLab has recently been created, devoted to hands-on activities and science demonstrations addressed to pupils of primary and middle schools.
The Bruno Touschek Visitor Centre and EduLab are hosting both formal and informal science education events, held either in person or virtually, providing, in this latter case, online resources that extend the accessibility of LNF outreach projects and enhance lifelong learning.
The Visitor Centre itinerary path, the related public engagement activities and their societal impact are here presented.
High Energy Accelerator Research Organization (KEK) was founded in 1971 and celebrated its 50th anniversary in 2021. Due to the novel coronavirus pandemic, it has been very difficult to conduct in-person outreach activities as we had envisioned, but we have responded by moving some of our programs online.
In this talk, I would like to report on some of the innovations and difficulties we faced in bringing our public outreach online, as well as the results of our hybrid 50th anniversary symposium.
CERN is currently preparing Science Gateway, a new facility for scientific education and outreach that will open in summer 2023. Besides inspirational exhibition spaces, science shows and online education activities, Science Gateway will feature educational labs for hands-on scientific experiments for audiences aged 5 to 105. The design principles of Science Gateway's education activities aim to empower people of all backgrounds to engage with the discoveries, the science and the technologies at CERN.
Activities are designed to bring learners in contact with topics that are linked to CERN using authentic research equipment under guidance from volunteers from CERNโs scientific community. The Hands, Head & Heart approach enables participants to actively take part in hands-on manipulations (Hands), cognitively through surprising observations and educational explanations (Head), and affectively through their positive experiences with volunteers (Heart).
The educational goals of Science Gateway are (1) creating memorable impressions related to STEM (science, technology, engineering and math), (2) fostering positive attitudes towards STEM professionals and STEM careers, (3) raising the awareness and understanding of nature of science and scientific methods, and (4) promoting the value of fundamental science.
This talk will provide an overview of the educational offer foreseen at Science Gateway, including the hands-on labs, science shows and online education content. In the context of the new educational labs, a "Power of Air" hands-on activity involving 3D-printed hovercraft and toy balloons will be presented. This activity is designed to raise awareness of how engineers at CERN exploit the power of air to move 1000+ tonne detector slices by means of an air pad system.
We show that a recently discovered non-perturbative field-theoretical mechanism giving mass to elementary fermions is also capable of generating a mass for the electro-weak bosons, and can thus be used as a viable alternative to the Higgs scenario. A detailed analysis of this remarkable feature shows that the non-perturbatively generated fermion and $W$ masses have the parametric form $m_{f}\sim C_f(\alpha)\Lambda_{RGI}$ and $M_W\sim g_w c_w(\alpha)\Lambda_{RGI}$, respectively, where the coefficients $C_f(\alpha)$ and $c_w(\alpha)$ are functions of the gauge couplings, $g_w$ is the weak coupling and $\Lambda_{RGI}$ is the RGI scale of the theory. In view of these expressions, we see that to match the experimental value of the top quark and $W$ masses, we need to conjecture the existence of a yet unobserved sector of massive fermions subjected, besides ordinary Standard Model interactions, to some kind of super-strong gauge interaction, so as to have the RGI scale of the whole theory in the TeV region. Though limited in its scope (in this talk we ignore hypercharge and leptons and discuss only the case of one family, neglecting weak isospin splitting), this approach opens the way to a solution of the mass naturalness problem and an understanding of the fermion mass hierarchy.
A new criterion to extend the Standard Model (SM) of particle physics is proposed: the symmetries of physical microscopic forces originate from the automorphism groups of the main Cayley-Dickson algebras, from complex numbers to octonions and sedenions. This correspondence leads to a natural and minimal enlargement of the color sector, from an SU(3) gauge group to an exceptional Higgs-broken G(2) group. In this picture, an additional ensemble of massive G(2)-gluons emerges, which is separated from the particle dynamics of the SM and might play the role of dark matter (DM). A fully Lagrangian approach is provided, along with the description of the breaking mechanism, the G(2) particle spectrum, the possible composite DM states and an examination of their stability. Moreover, G(2) gauge theory could give rise to peculiar manifestations in astrophysical compact objects, which could be observed in the future by studying gravitational waves.
We present a new grand unification paradigm, where gauge couplings do not need to be equal at any given scale; instead, they run towards the same fixed point in the deep ultraviolet. We provide a concrete example based on SU(5) with a compactified extra space dimension. By construction, fermions are embedded in different SU(5) bulk fields; hence baryon number is conserved and proton decay is forbidden. The lightest Kaluza-Klein tier consists of new states that cannot decay into standard model ones. The lightest massive state can play the role of Dark Matter, produced via baryogenesis, for a Kaluza-Klein mass of about 2.4 TeV. The model also has an interesting and predictive flavour structure.
Lattice simulations suggest that the spectrum of observable particles in BSM-like theories may be different than naively expected using standard methods.
We consider a GUT-like toy theory, which (despite its simplicity) shows qualitative discrepancies arising from non-trivial field theoretical effects, even at weak coupling.
These effects arise as an immediate consequence of the principle of gauge invariance but can typically be ignored in Standard Model calculations. For BSM scenarios a new approach may be needed, which takes such nontrivial effects into account. As a first step, we investigate the spectrum and phase diagram of a SU(3) Yang-Mills theory coupled to a scalar "Higgs" in the fundamental representation.
A supersymmetric extension of the Standard Model is presented that results from the dimensional reduction of the $N=1$, $10D$ $E_8$ gauge theory over a $M_4\times B_0/Z_3$ space, where $B_0$ is the nearly-Kaehler manifold $SU(3)/U(1)\times U(1)$ and $Z_3$ is a freely acting discrete group on $B_0$. The $4D$ theory - after the dimensional reduction and Wilson flux breaking - is an $N=1$ trinification with two $U(1)$s. Below the unification scale the surviving theory is a split-like supersymmetric version of the Standard Model with two global $U(1)$s. At the TeV region we have an NMSSM-like model with promising phenomenology. The talk will be based on our work Phys. Lett. B 813 (2021) 136031 [arXiv:2009.07059 [hep-ph]] and an ongoing 2-loop analysis.
The "flavor problem" represents one of the greatest challenges of particle model building, since the SM provides neither an a priori explanation of the number of fermion generations nor of their mass and mixing patterns, which appear to be very different in the lepton and quark sectors. Discrete non-abelian symmetries have gathered a lot of attention as candidates for the solution of these problems. In this talk, I will review the latest results achieved with Modular Symmetries in the description of fermion masses and mixings, showing that this recently proposed framework is particularly suitable for a unified description of leptons and quarks.
We propose that the electroweak and flavour quantum numbers of the Standard Model (SM) could be unified at high energies in an $SU(4)\times Sp(6)_L\times Sp(6)_R$ anomaly-free gauge model. All the SM fermions are packaged into two fundamental fermion fields, thereby explaining the origin of three families. The SM Higgs, being electroweakly charged, necessarily becomes charged also under flavour when embedded in the UV model. It is therefore natural for its vacuum expectation value to couple only to the third family. The other components of the UV Higgs fields are presumed heavy. Extra scalars are needed to break this symmetry down to the SM, which can proceed via 'flavour-deconstructed' gauge groups. When the heavy Higgs components are integrated out, realistic quark Yukawa couplings with in-built hierarchies are naturally generated without any further ingredients.
Measurements of heavy-flavor hadron production in heavy-ion collisions provide a powerful tool to study both initial-state effects on heavy-quark production and final-state interactions between heavy quarks and the quark-gluon plasma. These measurements are performed with the ATLAS detector at the LHC and capitalize on the large statistics of the Run 2 Pb+Pb dataset. This talk presents new results on the azimuthal anisotropy of muons from heavy-flavor decays in Pb+Pb collisions, as well as new results on the nuclear modification factor for heavy-flavor muons. New results sensitive to the role of parton mass and flavour in jet quenching using b-jets will also be presented. b-jets are identified through the semileptonic decays of $B$-hadrons into muons, and the measured suppression is compared to that for inclusive jets. Furthermore, final measurements of quarkonia suppression probing the QGP medium properties are discussed. The presented measurements are systematically compared to state-of-the-art theoretical models.
We review recent CMS results on heavy flavour hadron production, including quarkonia, in heavy ion collisions.
Charmonium production is a probe sensitive to deconfinement in nucleus-nucleus collisions. The production of J/$\psi$ via regeneration within the QGP or at the phase boundary has been identified as an important ingredient for the description of the observed centrality and $p_{T}$ dependence at the LHC. $\psi$(2S) production relative to J/$\psi$ is one possible discriminator between the two different regeneration scenarios. At RHIC and at the LHC, there is so far no significant observation of the $\psi$(2S) in nucleus-nucleus collisions in central events at low transverse momentum, where regeneration is the dominant process. The combined Run 2 data set of ALICE allows the extraction of a significant $\psi$(2S) signal in such a kinematic region at forward rapidity in the dimuon decay channel. In this contribution, we present for the first time results on the $\psi$(2S)-to-J/$\psi$ double ratio and the $\psi$(2S) nuclear modification factor in Pb-Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV, calculated with respect to a new pp reference with improved precision. Results are compared with model calculations.
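For orientation, the double ratio referred to above is conventionally defined as the $\psi$(2S)-to-J/$\psi$ yield ratio in Pb-Pb collisions normalised to the same ratio in pp collisions at the same energy:

$$\frac{\left[N_{\psi(2S)}/N_{J/\psi}\right]_{\mathrm{Pb-Pb}}}{\left[N_{\psi(2S)}/N_{J/\psi}\right]_{pp}},$$

in which many experimental and production-mechanism uncertainties partially cancel, isolating the relative in-medium modification of the two charmonium states.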
Measurements of quarkonia production in peripheral and ultra-peripheral heavy-ion collisions are sensitive to photon-photon and photon-nucleus interactions, the partonic structure of nuclei, and to the mechanisms of vector-meson production. LHCb has studied both coherent and incoherent production of $J/\psi$ mesons in peripheral and ultra-peripheral collisions using PbPb data at forward rapidity with the highest precision currently accessible. In addition, measurements of $Z$ production in $p$Pb collisions provide new constraints on the partonic structure of nucleons bound inside nuclei. Here we will present these measurements of quarkonia and Z production, along with comparisons with the latest theoretical models.
Polarization and spin-alignment measurements represent an important tool for the understanding of the particle production mechanisms occurring in proton-proton collisions. When considering heavy-ion collisions, quarkonium polarization could also be used to investigate the characteristics of the hot and dense medium (quark-gluon plasma) created at LHC energies. In ALICE, this observable was extracted for the first time in Pb-Pb collisions, and a significant difference with respect to a corresponding pp measurement by LHCb was found. This discrepancy could be related to the modification of the J/$\psi$ feed-down fractions, due to the suppression of the excited states in the QGP, but also to the contribution of regenerated J/$\psi$ in the low-$p_{T}$ region. Moreover, it has been hypothesized that quarkonium states could be polarized by the strong magnetic field generated in the early phase of the evolution of the system, and by the large angular momentum of the medium in non-central heavy-ion collisions. This kind of information can be accessed by defining an ad hoc reference frame where the quantization axis is orthogonal to the event plane of the collision.
In this contribution, the first result on J/$\psi$ polarization with respect to the event plane in Pb-Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV will be presented. The $p_{T}$-differential measurement was performed at forward rapidity (2.5 $<$ y $<$ 4) and the results will be shown for different centrality classes. The preliminary measurement of the $\Upsilon$ polarization in pp collisions at $\sqrt{s} = 13$ TeV as a function of the transverse momentum will also be discussed.
We present the current global analysis of nuclear parton distribution functions (PDFs) with the nCTEQ approach. Recent LHC data on W/Z-boson, single-inclusive hadron (SIH) and heavy quark/quarkonium (HQ) production are shown to constrain not only the gluon density down to $x\geq10^{-5}$, but also to influence the strange quark density. The consistency with neutrino deep-inelastic scattering (DIS) and charm dimuon production experiments, the impact of the underlying proton PDFs, and the role of target mass and other corrections are also discussed.
Since the beginning of 2012, the Borexino collaboration has been reporting precision measurements of the solar neutrino fluxes emitted in the proton-proton chain and in the Carbon-Nitrogen-Oxygen cycle. The solar neutrino interaction rate time series exhibits an annual sinusoidal modulation due to the Earth's elliptical orbit; other modulations could point to neutrino physics beyond the Standard Model. Using Borexino Phase-II and Phase-III data, we search for signals between one cycle/year and one cycle/day. Using a frequency analysis performed with a generalized version of the Lomb-Scargle periodogram (GLS) and an unconstrained sinusoidal fit, we are sensitive to the ellipticity of the Earth's orbit at more than $5\sigma$ significance using solar neutrinos only, and we exclude any other significant sinusoidal signal in the Borexino time series beyond the annual modulation.
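As an illustration of the frequency-scan technique described above, the following is a minimal sketch of a generalized Lomb-Scargle analysis on a toy rate time series (the arrays and the 3% modulation amplitude are synthetic placeholders, not Borexino data):

```python
# Minimal sketch of a generalized Lomb-Scargle (GLS) frequency scan on a
# toy time series; the floating-mean variant with measurement errors is
# what "generalized" refers to. Not the Borexino analysis code.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.arange(0.0, 2000.0, 10.0)                      # time stamps [days]
rate = 100.0 * (1.0 + 0.03 * np.cos(2 * np.pi * t / 365.25))
rate += rng.normal(0.0, 1.0, t.size)                  # toy statistical noise
rate_err = np.ones_like(rate)

# Scan between one cycle/year and one cycle/day, as in the abstract.
ls = LombScargle(t, rate, rate_err, fit_mean=True)
freq = np.linspace(1.0 / 365.25, 1.0, 100_000)        # [1/day]
power = ls.power(freq)
print("peak period [days]:", 1.0 / freq[np.argmax(power)])
```

On this toy input the scan recovers the injected one-year period; the unconstrained sinusoidal fit mentioned above would then be performed at the peak frequency.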
Weak neutrino and antineutrino signals from astrophysical sources can be investigated with high sensitivity with large underground ultrapure liquid scintillators. The largest amount of antineutrinos detected at Earth is emitted in the natural radioactive decays of $^{40}$K and of isotopes in the $^{232}$Th and $^{238}$U chains, while supernova explosions, gamma-ray bursts, GW events and solar flares are among possible extra-terrestrial sources of antineutrinos. Borexino has clearly detected the geo-neutrino flux, measured the mantle signal and constrained the overall production of radiogenic heat to 38.2$^{+13.6}_{-12.7}$ TW. The extreme radiopurity of the detector has also allowed us to set the best upper limits on all-flavor antineutrino fluences in the few-MeV energy range from gamma-ray bursts and from gravitational wave events, and to set limits on the diffuse supernova antineutrino background in the unexplored energy region below 8 MeV.
Recently, Borexino has published the search for possible events in correlation, within a time window of ±1000 s, with several of the most intense fast radio bursts (FRBs). In parallel, specific energy shapes have been searched for in the full exposure spectrum of the Borexino detector. By combining these methods, the strongest upper limits on FRB-associated neutrino fluences of all flavors have been obtained in the 0.5-50 MeV neutrino energy range.
This talk aims to summarise Borexino results on geoneutrinos and on possible signals from astrophysical sources, with a particular focus on the new search for FRB-associated neutrinos.
The next-generation water Cherenkov detector, Hyper-Kamiokande (Hyper-K), is currently under construction in Japan and is expected to be ready for data taking in 2027. Thanks to its huge fiducial volume and high statistics, Hyper-K will contribute to many investigations, such as CP violation, the determination of the neutrino mass ordering and potential observations of neutrinos from astrophysical sources. To increase the sensitivity of the detector, Hyper-K will have a hybrid configuration of photodetectors: thousands of 20-inch photomultiplier tubes (PMTs) will be combined with modules containing smaller PMTs arranged inside a pressure vessel, called multi-PMT modules. Many efforts are ongoing to reduce the expected dark counts for a detector geometry which includes both photodetector modules. We report the details and present performance of multivariate analysis techniques, such as Boosted Decision Trees, that are currently being applied to simulated events to reduce the overall dark rates of the detector, which is of particular importance for Hyper-K's sensitivity to low-energy neutrinos.
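As a flavour of such a multivariate selection, here is a minimal, self-contained sketch of a boosted-decision-tree classifier on toy trigger-level features (the feature names and distributions are hypothetical, not the Hyper-K inputs):

```python
# Toy BDT separating physics-like triggers from dark-noise-like triggers.
# Features (total charge, hit multiplicity, hit-time RMS) and their
# distributions are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000
signal = np.column_stack([rng.normal(50, 10, n),   # total charge [p.e.]
                          rng.normal(30, 5, n),    # hit multiplicity
                          rng.normal(5, 1, n)])    # hit-time RMS [ns]
dark   = np.column_stack([rng.normal(20, 10, n),
                          rng.normal(10, 5, n),
                          rng.normal(15, 3, n)])
X = np.vstack([signal, dark])
y = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = signal, 0 = dark

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_tr, y_tr)
print("test accuracy:", bdt.score(X_te, y_te))
```

In practice the working point would be chosen by the trade-off between dark-rate rejection and low-energy signal efficiency rather than raw accuracy.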
Water Cherenkov neutrino experiments have played a crucial role in neutrino discoveries over the years and provide a well-established and affordable way to instrument large target masses. The largest uncertainties in the most recent T2K oscillation results come from the Super-Kamiokande detector systematic errors in the oscillated event samples. As neutrino experiments move from discovery to precision measurements, a comprehensive understanding of water Cherenkov detectors becomes increasingly important. To help address these uncertainties in experiments such as T2K and Hyper-Kamiokande, a test bed for current and new technologies will be constructed at CERN.
The test bed, the Water Cherenkov Test Experiment (WCTE), is a small-scale water Cherenkov detector which will be located in the T9 experimental area at CERN. WCTE will be used to study the water Cherenkov detector response to hadron, electron, and muon beams, and will use new photosensor technologies. The detector will be instrumented with multi-PMT (mPMT) modules consisting of nineteen 3-inch PMTs each, and will test a newly developed calibration deployment system. Calibration techniques with known particle fluxes will be used to demonstrate a 1%-level calibration for GeV-scale neutrino interactions. Other measurements will include those of Cherenkov light production, pion scattering and secondary neutron production, to provide direct inputs to the T2K, Super-Kamiokande and Hyper-Kamiokande experiments. This talk will discuss the current oscillation results from T2K and the predictions for Hyper-Kamiokande, alongside describing the WCTE detector design, the newly developed mPMT and calibration hardware, and the all-important physics program.
Super-Kamiokande (SK) is the world's largest underground water Cherenkov detector and has been studying atmospheric neutrino oscillations since 1996. Atmospheric neutrinos cover a wide energy range and comprise both neutrinos and antineutrinos of electron and muon flavours, which oscillate to tau neutrinos and are sensitive to matter effects in the Earth.
In this talk we present updated results on atmospheric neutrino oscillations using five SK periods (data collected from SK-I to SK-V, years 1996-2020). The data analysis has been improved by expanding the fiducial volume (FV) of SK, adding neutrino interactions taking place as close as 1 m to the detector walls. This increases the data statistics by up to 20%, and improvements to the reconstruction algorithms keep the systematic uncertainties at a satisfactory level.
The KM3NeT collaboration is currently deploying two neutrino detectors at the bottom of the Mediterranean Sea: KM3NeT/ARCA, optimised for neutrino astronomy in the TeV to PeV range, and KM3NeT/ORCA, designed for GeV neutrino detection. The latter is expected to be completed around 2025, with 115 string-like vertical Detection Units (DUs) arranged in a cylindrical array. It will offer a competitive sensitivity to the Neutrino Mass Ordering (NMO) and to the atmospheric neutrino oscillation parameters.
An early configuration of the detector, consisting of only 6 DUs, was operated for two years during 2020 and 2021. Although the reconstruction performance of this setup is limited compared to the expected performance of the fully instrumented detector, it already allowed for the extraction of a high-purity neutrino sample. In this contribution, we will present the measurement of the neutrino oscillation parameters $\theta_{23}$ and $\Delta m^{2}_{23}$, as well as a first sensitivity to the neutrino mass ordering based on this data sample.
Upcoming neutrino experiments will not only constrain oscillation parameters with unprecedented precision, but will also search for physics beyond the Standard Model. KM3NeT/ORCA is an atmospheric neutrino detector currently under construction, sensitive to energies from a few GeV to around 100 GeV and with great potential to explore new physics. A high-purity neutrino sample has been selected from data taken with the first 6 Detection Units deployed. This sample has been analysed to probe sub-dominant effects in the oscillation patterns of atmospheric neutrinos propagating through the Earth, such as invisible neutrino decay and Non-Standard Interactions (NSI). In this contribution, the bounds obtained on the decay parameter, $\alpha_3=m_3/\tau_3$, and on the flavour-violating interaction parameters, $\epsilon_{\alpha \beta}$, will be shown, together with future sensitivity prospects for ten years of data taking with the final ORCA configuration of 115 Detection Units.
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) features a sophisticated two-level triggering system composed of the Level 1 (L1) trigger, instrumented by custom-designed hardware boards, and the High Level Trigger (HLT), a software trigger based on the complete event information and full detector resolution. The CMS L1 Trigger relies on separate calorimeter and muon trigger systems that provide jet, e/γ, τ, and muon candidates along with calculations of energy sums to the Global Trigger, where selections are made based on the candidate kinematics. During its second run of operation, the L1 trigger hardware was entirely upgraded to handle proton-proton collisions at a center-of-mass energy of 13 TeV with a peak instantaneous luminosity of $2.2 \cdot 10^{34} cm^{-2}s^{-1}$, more than double the design luminosity of the machine. In view of Run 3 of the LHC, an optimized trigger menu on both the L1 and HLT sides is crucial to successfully record the events necessary for the ambitious CMS physics program. A wide range of measurements and searches will profit from the new features and strategies implemented in the trigger system. Dedicated variables and non-standard trigger techniques to target long-lived particle searches and to measure unconventional physics signatures have been developed. Moreover, the implementation of new kinematic computations in the L1 Global Trigger will improve b-physics measurements and resonance searches. This talk will present these new features and their expected performance measured on benchmark physics signals.
The ATLAS Trigger in Run 3 is expected to record on average around 1.7 kHz of primary 13.6 TeV physics data, along with a substantial additional rate of delayed data (to be reconstructed at a later date) and trigger-level-analysis data, surpassing the instantaneous data volumes collected during Run 2. Events will be selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. New in the Level 1 (L1) trigger are the New Small Wheel and BIS78 chambers, in combination with new L1Muon endcap sector logic and MUCTPI. In addition, a new L1Calo system based around the eFEX, jFEX and gFEX systems for egamma, tau, jet and missing-energy signatures will be under commissioning in 2022. In the High Level Trigger, the ATLAS physics menu was re-implemented from scratch using a new multi-threaded framework.
We will present first results from the early phases of commissioning the Run 3 trigger in 2022. We will describe the ATLAS Run 3 trigger menu and how it differs from Run 2, exploring how rate, bandwidth, and CPU constraints are integrated into the menu. Improvements made during the run to react to changing LHC conditions and data-taking scenarios will be discussed, and we will conclude with an outlook on how the trigger menu will evolve with the continued commissioning of the new L1 systems.
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model as well as searches for new physics beyond the standard model. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high-luminosity provided by the Phase-2 upgrade of the LHC. Efficiently collecting those datasets will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. For this purpose, CMS is designing an efficient data-processing hardware trigger (Level-1) that will include tracking information and high-granularity calorimeter information. Trigger data analysis will be performed through sophisticated algorithms such as particle flow reconstruction, including widespread use of Machine Learning. The current conceptual system design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency computing platform for large throughput and sophisticated data correlation across diverse sources. The expected impact on the physics reach of the experiment will be summarized in this presentation, and illustrated with selected benchmark channels.
Events with muons in the final state are fundamental for detecting a large variety of physics processes in the ATLAS Experiment, including both high precision Standard Model measurements and new physics searches. For this purpose, the ATLAS Muon Trigger has been designed and developed into two levels: a hardware based system (Level-1) and a software based reconstruction (High Level Trigger). They have been optimized to keep the trigger rate as low as possible while maintaining a high efficiency, despite the increased particle rates and pile-up conditions at the LHC. An overview of the muon triggering strategies will be provided, showing the performance in Run 2 data of both Level 1 and High Level Trigger. The most recent improvements implemented for Run 3 will also be presented.
The performance of the Inner Detector tracking trigger of the ATLAS experiment at the LHC is evaluated for the data-taking period of Run 2 (2015-2018). The Inner Detector tracking was used for the muon, electron, tau and b-jet triggers, and its high performance is essential for a wide variety of ATLAS physics programs, such as many precision measurements of the Standard Model and searches for new physics. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented for the Run 2 data. From the upcoming Run 3, starting in 2022, the application of Inner Detector tracking in the trigger is planned to be significantly expanded; in particular, full-detector tracking will be utilized for hadronic signatures (such as jet and missing-transverse-energy triggers) for the first time. To meet computing resource limitations, various improvements, including machine-learning-based track seeding, have been developed.
The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. This new high-speed electronics system is intended to identify patterns of energy associated with the hadronic decays of high-momentum Higgs, W and Z bosons, top quarks, and exotic particles in hard real time at the LHC crossing rate. The single board is packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx Virtex UltraScale+ FPGAs, along with a Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC). The board will receive coarse-granularity information from all the ATLAS calorimeters on optical fibers, with the data transferred at the 40 MHz Large Hadron Collider (LHC) clock frequency. The gFEX is controlled and monitored by the Zynq MPSoC, which configures the processor FPGAs, runs a Linux operating system and the on-board Detector Control System, and provides an interface to external signals. This talk will focus on the design of the gFEX system; its commissioning, installation, and integration tests with ATLAS; and the expected physics impact of this new approach to triggering for the ATLAS experiment.
The radiative and electroweak-penguin $B$ decays mediated via $b\to s\gamma$ and $b\to s\ell^+\ell^-$ transitions, respectively, are sensitive to new physics, since new heavy particles can enter in the loop, altering decay branching fractions and other kinematic observables. Searches for lepton-flavor violation and tests of lepton-flavor universality through these processes are also of topical interest. In this presentation, we report measurements of $B \to K^{*}\tau\tau$, $B\to K\tau\ell$, $B\to K^{0}_{S}K^{0}_{S}\gamma$ and $B\to p\Lambda\gamma$ using $772\times 10^{6}$ $B$-meson pairs recorded by the Belle experiment at the KEKB asymmetric-energy $e^{+}e^{-}$ collider. The talk may also cover a few topics on other rare $B$ and $B_{s}$ decays from Belle.
New particles beyond the Standard Model (SM) can affect SM processes by entering quark loops in the diagrams. The $b\to s\ell\ell$ transition is known to be a sensitive laboratory for such contributions, and several deviations from SM predictions have recently been reported. In this talk, the recent CMS results on B meson decays involving the $b\to s\ell\ell$ process are discussed.
Even though the LHC searches have not unveiled new physics particles so far, observations made at LHCb and the $B$-factories point towards lepton flavor universality violation in both tree-level and loop-induced $B$-meson semileptonic decays. A minimal solution to this problem is to combine two scalar leptoquarks (LQ) with $\mathcal{O}(1~\mathrm{TeV})$ masses. We will show that there are only two such scenarios that are consistent with both low- and high-energy constraints, and they are given by a combination of the scalar $S_3=(\bar{3},3,1/3)$ with $R_2=(3,2,7/6)$, or with $S_1=(\bar{3},1,1/3)$. We will discuss the main opportunities to test these two scenarios in current and future experiments. Particular emphasis will be given to Lepton Flavor Violation in low-energy processes.
Flavour-Changing Neutral-Current processes, such as decays mediated by $b\to s\ell\ell$ transitions, are forbidden at the lowest perturbative order in the Standard Model (SM) and hence might receive comparatively large corrections from new particles in SM extensions. These corrections may affect different observables related to these decays, such as branching fractions or angular distributions. The most recent results from LHCb in the area of $b\to s\ell\ell$ decays will be presented.
We will discuss the time-dependent analysis of $B\to P(S)\ell\ell$, taking into account the time evolution of the $B_{d}$ meson and its mixing into $\bar{B}_d$. The inclusion of time evolution allows us to identify six new observables. We also show that these observables could be obtained through time-integrated measurements in a hadronic environment if flavour tagging is available. We provide simple and precise predictions for these observables in the SM and in NP models with real contributions to SM and chirally flipped operators, which are independent of form factors and charm-loop contributions. As such, these observables provide robust and powerful cross-checks of the New Physics scenarios currently favoured by global fits to $b\to s\ell\ell$ data. We discuss the sensitivity of these observables with respect to NP scenarios involving CP-violating phases. This talk is based on arXiv:2008.08000 and ongoing work.
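For context (a generic textbook expression, not the specific observables of arXiv:2008.08000), the time evolution being exploited enters through the standard tagged decay rate of a neutral $B_d$ meson, with $\Delta\Gamma_d \simeq 0$:

$$\frac{d\Gamma\big(B_d(t)\to f\big)}{dt} \;\propto\; e^{-\Gamma_d t}\left[\,1 + C_f\cos(\Delta m_d\, t) - S_f\sin(\Delta m_d\, t)\,\right],$$

where the coefficients $C_f$ and $S_f$ encode the interference between decay and mixing; it is this extra structure that furnishes observables inaccessible to untagged, time-integrated rates.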
In this talk we present the most recent extraction of unpolarized transverse-momentum-dependent (TMD) parton distribution functions (PDFs) and TMD fragmentation functions (FFs) from global data sets of Semi-Inclusive Deep-Inelastic Scattering (SIDIS), Drell-Yan and Z boson production. The fit is performed at N3LL logarithmic accuracy in the resummation of $q_T$ logarithms and features flexible non-perturbative functions, which allow us to reach very good agreement with the experimental data. In particular, we address the tension between the low-energy SIDIS data and the theory predictions, and explore the impact of the precise LHC data on the fit results.
This talk presents results, based on the papers arXiv:2201.07114 [hep-ph], arXiv:2109.12051 [hep-ph] and Phys. Lett. B 806 (2020) 135478 [arXiv:2002.12810 [hep-ph]], on the determination of the TMD parton distributions and the rapidity evolution kernel from transverse momentum spectra. It is shown that the bias induced by collinear PDFs in TMD extractions is alleviated if PDF uncertainties are taken into account and the TMD profile is flavor-dependent. Both points improve the agreement between theory and experiment, substantially increase the uncertainty in the extracted TMD distributions, and should be taken into account in future global analyses.
The Parton Branching (PB) approach provides a way to obtain transverse momentum dependent (TMD) parton densities. Its equations are written in terms of splitting functions and Sudakov form factors and can be solved with Monte Carlo methods. Even though the transverse momentum is known in every branching, the PB method currently uses the DGLAP splitting functions, which assume that the parton has no transverse momentum. We propose to extend the PB method by including TMD splitting functions, a concept from high-energy factorization.
We present the evolution equations and their connection to the DGLAP and BFKL evolution equations. We show their solutions obtained with a Monte Carlo simulation and demonstrate numerically the effects that TMD splitting functions have on the TMD distribution functions.
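To make the branching algorithm concrete, the following toy Monte Carlo mimics the structure of such an evolution: a constant branching density replaces the real (TMD) splitting functions and Sudakov form factors, so this is a structural sketch of the method rather than the PB implementation:

```python
# Toy parton-branching-style evolution: successive branching scales are
# sampled from a simplified Sudakov form factor, and the transverse
# momentum is accumulated branching by branching, as in the PB approach.
import numpy as np

rng = np.random.default_rng(42)

def evolve(x0=0.1, t0=np.log(1.0), t1=np.log(100.0**2), lam=0.3):
    """Evolve one parton in t = ln(mu^2 / GeV^2) from t0 to t1.

    Returns the final longitudinal fraction x and accumulated kT [GeV].
    """
    x, kx, ky, t = x0, 0.0, 0.0, t0
    while True:
        # No-branching probability Delta(t, t') = exp(-lam * (t' - t)),
        # so the next branching scale is an exponential deviate in t.
        t += -np.log(rng.random()) / lam
        if t >= t1:
            break
        z = rng.uniform(0.5, 1.0)            # toy splitting variable
        qt = np.exp(t / 2.0) * (1.0 - z)     # transverse kick at this scale
        phi = rng.uniform(0.0, 2.0 * np.pi)
        kx, ky = kx + qt * np.cos(phi), ky + qt * np.sin(phi)
        x *= z
    return x, np.hypot(kx, ky)

sample = [evolve() for _ in range(10_000)]
print("mean final x:", np.mean([s[0] for s in sample]))
print("mean kT [GeV]:", np.mean([s[1] for s in sample]))
```

The extension discussed in this contribution amounts to letting the branching density (here the constant `lam`) and the kinematics of each splitting depend on the accumulated transverse momentum, as prescribed by the TMD splitting functions.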
Azimuthal single- and double-spin asymmetries measured at HERMES in semi-inclusive leptoproduction of pions, charged kaons, protons, and antiprotons from a transversely polarized hydrogen target are presented. The results of a re-analysis of the previously published Collins and Sivers asymmetries, extended to include protons and antiprotons as well as an extraction in a multi-dimensional binning and enlarged phase space, are reported, along with the corresponding results for the remaining single- and double-spin asymmetries associated with the semi-inclusive deep-inelastic scattering process with a transversely polarized target. Among those results, significant non-vanishing $\cos(\phi-\phi_S)$ modulations provide evidence for a sizable worm-gear distribution $g_{1T}$. Most of the other modulations are found to be consistent with zero, with the notable exception of large $\sin(\phi_S)$ modulations for charged pions and positively charged kaons.
The lepton-jet momentum imbalance in deep inelastic scattering events offers a useful set of observables for unifying collinear and transverse-momentum-dependent frameworks for describing high-energy Quantum Chromodynamics interactions. A recent first measurement of this imbalance [1] was made in the laboratory frame using positron-proton collision data recorded with the H1 experiment at HERA in the years 2006-2007. Using a new machine learning method, the measurement was performed simultaneously and unbinned in eight dimensions. The first results were presented as a set of four one-dimensional projections onto key observables. This work extends those results by making use of the multi-differential nature of the unfolded result. In particular, distributions of lepton-jet correlation observables are studied as a function of the kinematic properties of the scattering process, i.e. as a function of the momentum transfer $Q^2>150$ GeV$^2$ and the inelasticity $0.2< y< 0.7$.
H1prelim-22-031
[1] arXiv:2108.12376, accepted by PRL
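For readers unfamiliar with unbinned ML unfolding, the core ingredient is a classifier-based reweighting step of the following kind (a toy, one-dimensional sketch in the spirit of OmniFold-type methods; not the H1 analysis code):

```python
# A classifier trained to separate "data" from simulation yields, through
# its output probability, the likelihood-ratio weights that pull the
# simulation towards the data; iterating between detector and particle
# level turns this into an unbinned unfolding. Toy inputs only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
sim  = rng.normal(0.0, 1.0, size=(20_000, 1))   # detector-level simulation
data = rng.normal(0.2, 1.1, size=(20_000, 1))   # toy "data" with a shift

X = np.vstack([sim, data])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(data))])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=200)
clf.fit(X, y)

# w(x) = p(data|x) / p(sim|x) reweights simulation to match data.
p = clf.predict_proba(sim)[:, 1]
w = p / (1.0 - p)
print("sim mean:", sim.mean(), "-> reweighted:",
      np.average(sim[:, 0], weights=w))
```

Because the weights are per-event functions of the full feature vector, the same machinery extends naturally to the eight-dimensional, simultaneous measurement described above.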
This talk presents results, recently obtained in the papers Eur. Phys. J. C 82 (2022) 36 [arXiv:2112.10465 [hep-ph]] and arXiv:2204.01528 [hep-ph], on azimuthal correlations in di-jet and Z+jet processes at large transverse momenta. The results are computed by matching Parton Branching (PB) TMD parton distributions and showers with NLO calculations via MCatNLO. It is observed that the different patterns of Z+jet and dijet azimuthal correlations can be used to search for potential factorization-breaking effects in the back-to-back region, which depend on the different color and spin structure of the final states and their interferences with the initial states. The role of theoretical uncertainties is examined by performing variations of the factorization scale, renormalization scale and matching scale. A comparative study of matching scale uncertainties is presented for the cases of PB-TMD and collinear parton showers.
We study the polar and azimuthal decay angular distributions of $J/\psi$ mesons produced in semi-inclusive, deep-inelastic electron-proton scattering. For the description of the quarkonium formation mechanism, we adopt the framework of NRQCD, with the inclusion of the intermediate color-octet channels that are suppressed by at most one power of the velocity parameter $v$ relative to the leading color-singlet channel. We put forward factorized expressions for the helicity structure functions in terms of transverse momentum dependent gluon distributions and shape functions, which are valid when the $J/\psi$ transverse momentum is small with respect to the hard scale of the process. By requiring that such expressions correctly match with the collinear factorization results at high transverse momentum, we determine the perturbative tails of the shape functions and find them to be independent of the $J/\psi$ polarization. In particular, we focus on the $\cos 2 \phi$ azimuthal decay asymmetry, which originates from the distribution of linearly polarized gluons inside an unpolarized proton. We therefore suggest a novel experiment for the extraction of this so-far unknown parton density that could be performed, in principle, at the future Electron-Ion Collider.
Euclid is a mission of the European Space Agency designed to constrain the properties of dark energy and gravity via weak gravitational lensing and galaxy clustering. It will carry out a wide-area imaging and spectroscopy survey (the Euclid Wide Survey: EWS) in visible and near-infrared bands, covering approximately 15,000 deg$^2$ of extragalactic sky in six years. Euclid will be equipped with a 1.2 m diameter Silicon Carbide (SiC) mirror telescope made by Airbus (Defence and Space) feeding two instruments built by the Euclid Consortium: a high-quality panoramic visible imager (VIS), a near-infrared 3-filter (Y, J and H) photometer (NISP-P) and a slitless spectrograph (NISP-S). This talk will present the satellite and its instruments, which are optimised for a pristine point spread function and reduced stray light, producing very crisp images, as well as the survey strategy, the global scheduling, the preparations to commission the satellite, and the preparation of the Science Data Centers to produce scientific data.
Euclid will observe 15,000 deg$^2$ of the darkest sky, free of contamination by light from our Galaxy and our Solar System. Three "Euclid Deep Fields" covering around 40 deg$^2$ in total will also be observed, extending the scientific scope of the mission to the high-redshift universe. The complete survey represents hundreds of thousands of images and several tens of petabytes of data; about 10 billion sources will be observed. With these images Euclid will probe the expansion history of the Universe and the evolution of cosmic structures, by measuring the modification of the shapes of galaxies induced by the gravitational lensing effects of dark matter, and the three-dimensional distribution of cosmic structures from spectroscopic redshifts of galaxies and clusters of galaxies. This talk will present the implications for cosmology and the cosmological constraints from this unprecedented data set. Of particular interest are the expected constraints on neutrino properties and masses and on the nature of dark energy.
With the immense number of images, data and sources that Euclid will deliver, the consortium will be in a unique position to construct legacy catalogs with exquisite imaging quality and superb near-infrared spectroscopy, with impact on many areas of galaxy science. This talk will review the prospects and scientific output that Euclid will be able to achieve in the areas of galaxy and AGN evolution, the local Universe, studies of the Milky Way and stellar populations, supernovae and transients, Solar System objects, planets, and more.
Baryon Acoustic Oscillations (BAO) are one of the most useful and widely used cosmological probes to measure cosmological distances independently of the underlying background cosmology. However, in current measurements the inference is done using a theoretical clustering correlation-function template in which the cosmological and non-linear damping parameters are kept fixed to fiducial LCDM values. How can we then claim that the measured distances are model-independent, and thus useful to select cosmological models?
Motivated by this compelling question we introduce a rigorous tool to measure cosmological distances without assuming a specific background cosmology: the "Purely-Geometric-BAO". I will explain how to practically implement this tool with clustering data. This allows us to quantify the effects of some of the standard measurement assumptions.
However, the inference is still plagued by the ambiguity of choosing a specific correlation-function template to measure cosmological distances. We address this issue by introducing a new approach to the problem that leverages a novel BAO cosmological standard ruler: the "Linear Point". Its standard-ruler properties allow us to estimate cosmological distances without the need to model the poorly known late-time nonlinear corrections to the linear correlation function. Last but not least, it also provides smaller statistical uncertainties with respect to the correlation-function template fit. All these features make the Linear Point a promising candidate to properly measure cosmic distances with the upcoming Euclid galaxy survey.
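For reference, the Linear Point is defined in the literature as the midpoint between the dip and the peak of the two-point correlation function $\xi(s)$:

$$s_{\mathrm{LP}} \;\equiv\; \frac{s_{\mathrm{dip}} + s_{\mathrm{peak}}}{2},$$

a scale whose comoving position has been reported to be insensitive, at the sub-percent level, to nonlinearities, redshift-space distortions and galaxy bias, which is what qualifies it as a standard ruler.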
The synergy between gravitational wave (GW) experiments and large galaxy surveys such as the Dark Energy Spectroscopic Instrument (DESI) is most prominent in the standard siren method, which has already enabled several measurements of the Hubble constant. A standard siren analysis was first performed using GW170817, the only GW event with an electromagnetic counterpart. We have since extended the analysis to compact-object binary merger events without counterparts using DESI galaxy catalogs, for which I will present the latest results. I will also present efforts and plans to follow up gravitational wave events and IceCube high-energy neutrino events with DESI.
Dilatons (and moduli) couple to the masses and coupling constants of ordinary matter, and these quantities are fixed by the local value of the dilaton field. If, in addition, the dilaton with mass $m_\phi$ contributes to the cosmic dark matter density, then such quantities oscillate in time at the dilaton Compton frequency. We show how these oscillations lead to broadening and shifting of the Voigt profile of the Ly$\alpha$ forest, in a manner that is correlated with the local dark matter density. We further show how tomographic methods allow the effect to be reconstructed by observing the Ly$\alpha$ forest spectrum of distant quasars. We then simulate a large number of quasar lines of sight using the lognormal density field, and forecast the ability of future astronomical surveys to measure this effect. We find that in the ultra low mass range $10^{-32}\text{ eV}\leq m_\phi\leq 10^{-28}\text{ eV}$ upcoming observations can improve over existing limits to the dilaton electron mass and fine structure constant couplings set by fifth force searches by up to five orders of magnitude. Our projected limits apply assuming that the ultralight dilaton makes up a few percent of the dark matter density, consistent with upper limits set by the cosmic microwave background anisotropies.
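Schematically (standard expressions from the ultralight-dilaton dark matter literature, quoted here for orientation), the dilaton field oscillates coherently and drags the "constants" with it:

$$\phi(t) \simeq \phi_0\cos(m_\phi t),\qquad \frac{\delta m_e(t)}{m_e} = d_{m_e}\,\kappa\,\phi(t),\qquad \frac{\delta\alpha(t)}{\alpha} = d_e\,\kappa\,\phi(t),$$

with $\kappa=\sqrt{4\pi G}$ and the amplitude fixed by the local dark matter density via $\rho_\phi = \tfrac{1}{2}m_\phi^2\phi_0^2$; the induced oscillations of $m_e$ and $\alpha$ are what shift and broaden the Voigt profiles of the Ly$\alpha$ forest lines.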
The electron-positron stage of the Future Circular Collider (FCC-ee) is a frontier factory for Higgs, electroweak, QCD and flavour physics. It is designed to operate in a 100 km circular tunnel built at CERN, and will serve as the first step towards 100-TeV proton-proton collisions. In addition to an essential and unique Higgs program, FCC-ee offers powerful opportunities to discover direct or indirect evidence of physics beyond the Standard Model. Direct searches for long-lived particles at FCC-ee could be particularly fertile in the high-luminosity Z run, where $5\cdot 10^{12}$ Z bosons are anticipated to be produced for the configuration with two interaction points. The very large samples of Higgs bosons, W bosons and top quarks in very clean experimental conditions could offer additional opportunities at other collision energies. Three physics cases producing long-lived signatures at FCC-ee are highlighted and studied in this contribution: heavy neutral leptons (HNLs), axion-like particles (ALPs), and exotic decays of the Higgs boson. These searches motivate out-of-the-box optimization of experimental conditions and analysis techniques, that could lead to improvements in other physics searches.
We propose a two-stage strategy to search for new long-lived particles that could be produced at the CERN LHC, become trapped in detector material, and decay later. In the first stage, metal rods are exposed to LHC collisions in an experimental cavern. In the second stage, they are immersed in liquid argon at a different location, where out-of-time decays could be detected. Using a benchmark of pair-produced long-lived gluinos, we show that this experiment would have unique sensitivity to gluino-neutralino mass splittings down to 3 GeV, in previously uncovered lifetimes of days to years.
The proposed MATHUSLA experiment (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles) could open a new avenue for the discovery of physics beyond the Standard Model at the LHC. The large-volume detector will be placed above the CMS experiment with O(100) m of rock separation from the LHC interaction point. It is instrumented with a tracking system to observe long-lived particle decays inside its empty volume. The experiment is composed of a modular array of detectors covering together an area of (100 × 100) m$^2$ with a height of 25 m. It is planned in time for the high-luminosity LHC runs. With a large detection area and a good-granularity tracking system, MATHUSLA is also an efficient cosmic-ray Extensive Air Shower (EAS) detector. With good timing, spatial and angular resolution, the several tracking layers allow precise cosmic-ray measurements up to the PeV scale that complement other experiments.
We will describe the detector concept and layout, the status of the project, the ongoing cosmic-ray studies, as well as the future plans. We will focus on the current R&D on 2.5 m long extruded plastic scintillator bars read out by wavelength-shifting fibers connected to Silicon Photomultipliers (SiPMs) located at each end of the bar. We will discuss the studies made on possible fiber layouts and dopant concentrations, as well as report on the timing resolution measurements obtained using Saint-Gobain and Kuraray fibers. We will also describe the tests made on the Hamamatsu and Broadcom SiPMs, a possible SiPM cooling system using chillers, and highlight the structure of the trigger and data acquisition. Moreover, we will discuss the proposal of adding a 10$^4$ m$^2$ layer of RPCs with both digital and analogue readout to significantly improve cosmic-ray studies in the 100 TeV to 100 PeV energy range, with a focus on large-zenith-angle EAS.
Many models beyond the standard model predict new particles with long lifetimes, such that the position of their decay is measurably displaced from their production vertex, and particles giving rise to other non-conventional signatures. We present recent results of searches for long-lived particles and other non-conventional signatures obtained using data recorded by the CMS experiment at Run-II of the LHC.
Various theories beyond the Standard Model predict new, long-lived particles with unique signatures which are difficult to reconstruct and for which estimating the background rates is also a challenge. Signatures from displaced and/or delayed decays anywhere from the inner detector to the muon spectrometer, as well as those of new particles with fractional or multiple values of the charge of the electron or high mass stable charged particles are all examples of experimentally demanding signatures. The talk will focus on the most recent results using 13 TeV pp collision data collected by the ATLAS detector.
The MoEDAL experiment deployed at IP8 on the LHC ring was the first dedicated search experiment to take data at the LHC, in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles and long-lived massive charged SUSY particles. We shall report on our search at the LHC's Run-2 for magnetic monopoles and dyons produced in p-p collisions and photon fusion. We will report in a little more detail our most recent result in this arena: the search for magnetic monopoles produced via the Schwinger mechanism in Pb-Pb collisions, which was recently published in Nature.
The MoEDAL detector will be reinstalled for the LHC's Run-3 to continue the search for electrically and magnetically charged HIPs. As part of this effort we will initiate the search for massive, very long-lived SUSY particles, to which MoEDAL has a competitive sensitivity. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), approved by CERN's Research Board, is now the LHC's newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly-charged particles with charge, or effective charge, as low as $10^{-3}e$ (where $e$ is the electron charge). Also, the MAPP detector in conjunction with MoEDAL's trapping detector gives us a unique sensitivity to extremely long-lived charged particles. MAPP also has some sensitivity to long-lived neutral particles.
Additionally, we will very briefly report on the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High Luminosity LHC (HL-LHC). We envisage that this detector will be deployed in the UGC1 gallery near IP8. This phase of the experiment is designed to maximize MoEDAL-MAPP's sensitivity to very long-lived neutral messengers of physics beyond the Standard Model.
Book Launch and a conversation with the editors and some authors of the book chapters
Nearly all physics analyses at CMS rely on precise reconstruction of particles from their signatures in the experiment's calorimeters. This requires both assignment of energy deposits to particles and recovery of various properties across the detector. These tasks have traditionally been performed by classical algorithms and BDT regressions, both of which rely on human-engineered high-level quantities. However, bypassing human feature engineering and instead training deep learning algorithms on low-level signals has the potential to further recover lost information and improve the overall reconstruction. We have therefore developed novel algorithms for particle reconstruction in the CMS calorimeters based on graph neural networks, which allow us to represent the energy deposits recorded in the calorimeter directly in our models. In this work we will show the performance of our GNN architecture in energy reconstruction in the CMS ECAL, where we obtain improved energy resolutions and resilience to effects such as detector gaps, early showering upstream of the calorimeter, and pileup with respect to the previous state-of-the-art approach. This contribution will also cover how similar approaches can be applied to energy reconstruction in test beam data for the CMS High-Granularity Calorimeter (HGCAL), planned for operation in the HL-LHC. Furthermore, we will discuss new developments in graph architectures which allow for single-pass reconstruction of multiple particles in dense environments such as the CMS HGCAL.
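To illustrate what is meant by a graph neural network over calorimeter hits, here is a minimal, runnable sketch (a toy architecture for exposition, not the CMS model; the features and kNN graph construction are placeholders):

```python
# Toy graph-network regression over calorimeter hits: each hit is a node
# with position and deposited-energy features, edges connect nearby hits,
# and message passing aggregates local shower structure into an estimate.
import torch
from torch_geometric.nn import GCNConv

def knn_edges(pos, k=8):
    # Build a k-nearest-neighbour edge list from hit positions.
    d = torch.cdist(pos, pos)
    idx = d.topk(k + 1, largest=False).indices[:, 1:]   # drop self-match
    src = torch.arange(pos.size(0)).repeat_interleave(k)
    return torch.stack([src, idx.reshape(-1)])

class HitGNN(torch.nn.Module):
    def __init__(self, n_feat=4, hidden=64):
        super().__init__()
        self.c1 = GCNConv(n_feat, hidden)
        self.c2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)   # per-event energy estimate

    def forward(self, x, edge_index):
        h = torch.relu(self.c1(x, edge_index))
        h = torch.relu(self.c2(h, edge_index))
        return self.out(h.mean(dim=0))          # aggregate hits -> energy

hits = torch.rand(100, 4)                        # (x, y, z, deposited E)
model = HitGNN()
print(model(hits, knn_edges(hits[:, :3])))
```

The appeal of this representation is that the irregular, variable-length set of deposits is consumed directly, without binning into fixed images or hand-built summary variables.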
We present MadFlow, a Python-based software package for the evaluation of cross sections on hardware accelerators.
The pipeline includes a first stage in which the analytic expressions for matrix elements are generated by the MG5_aMC@NLO framework (taking advantage of its great flexibility) and exported in a vectorized, device-agnostic format using the TensorFlow library, or as device-specific CUDA output.
Event simulation is then performed using the VegasFlow and PDFFlow frameworks for the phase-space integration and PDF interpolation, and deployed automatically to systems with different hardware-acceleration capabilities (multi-threaded CPU, single-GPU and multi-GPU setups from both Nvidia and AMD). We show results for leading-order calculations with up to 5 legs in the final state, offering a speed-up of orders of magnitude over traditional CPU-based calculations.
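The computational pattern underlying such frameworks can be sketched in a few lines of plain TensorFlow: a vectorized integrand evaluated over a large batch of phase-space points, compiled once and runnable unchanged on CPU or GPU (a schematic with uniform sampling and a toy integrand; VegasFlow adds importance sampling and the real matrix elements on top):

```python
# Schematic of vectorized, device-agnostic Monte Carlo integration.
import tensorflow as tf

@tf.function  # compiled once; runs on CPU or GPU without code changes
def integrand(x):
    # Toy stand-in for a matrix-element weight over the unit hypercube.
    return tf.reduce_prod(tf.sin(3.14159265 * x), axis=-1)

def integrate(dims=4, n_calls=10**6):
    # One big batched call replaces the per-event loop of CPU generators.
    x = tf.random.uniform((n_calls, dims))
    w = integrand(x)
    mean = tf.reduce_mean(w)
    err = tf.math.reduce_std(w) / tf.sqrt(tf.cast(n_calls, tf.float32))
    return mean, err

print(integrate())
```

The speed-up quoted above comes from exactly this batching: the accelerator evaluates millions of phase-space points in parallel instead of one event at a time.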
In recent years, compute performances of GPUs (Graphics Processing Units) dramatically increased, especially in comparison to those of CPUs (Central Processing Units). GPUs are nowadays the hardware of choice for scientific applications involving massive parallel operations, such as deep learning (DL) and Artificial Intelligence (AI) workflows. Large-scale computing infrastructures such as on-premises data centers, HPC (High Performance Computing) centers, and public or private clouds offer high performance GPUs to researchers. The programming paradigms for GPUs significantly vary according to the GPU model and vendor, often posing a barrier to their use in scientific applications. In addition, powerful GPUs are hardly saturated by typical computing applications. GPU partitioning may be the key to exploit GPU computing power in an efficient and affordable manner. Multiple vendors proposed custom solutions to allow for GPU partitioning, often with poor portability across different platforms and OSs (Operating Systems).
OpenForBC (Open For Better Computing) is an open source software framework that allows for effortless and unified partitioning of GPUs from different vendors in Linux KVM virtualized environments. OpenForBC supports dynamic partitioning for various configurations of the GPU, which can be used to optimize the utilization of GPU kernels from different users or different applications. For example training complex DL models may require a full GPU, but inference may only need a fraction of it, leaving free resources for multiple cloned instances or other tasks. In this contribution we describe the most common GPU partitioning options available on the market, discuss the implementation of the OpenForBC interface, and show the results of benchmark tests in typical use case scenarios.
For more than a decade the current generation of CPU-based matrix element generators has provided hard scattering events with excellent flexibility and good efficiency.
However, they are a bottleneck of current Monte Carlo event generator toolchains, and with the advent of the HL-LHC and more demanding precision requirements, faster matrix elements are needed, especially at intermediate to large jet multiplicities.
We present first results of the new BlockGen family of matrix element algorithms, featuring GPU support and novel colour treatments, and discuss the best choice to deliver the performance needed for the next generation of accelerated matrix element generators.
Extracting scientific results from high-energy collider data involves the comparison of data collected from the experiments with "synthetic" data produced from computationally intensive simulations. Comparisons of experimental data and predictions from simulations increasingly utilize machine learning (ML) methods to try to overcome these computational challenges and enhance the data analysis. There is increasing awareness of the challenges surrounding the interpretability of ML models applied to data, needed to explain these models and validate scientific conclusions based upon them. The matrix element (ME) method is a powerful technique for the analysis of particle collider data that utilizes an ab initio calculation of the approximate probability density function for a collision event to be due to a physics process of interest. The ME method has several unique and desirable features, including (1) not requiring training data, since it is an ab initio calculation of event probabilities, (2) incorporating all available kinematic information of a hypothesized process, including correlations, without the need for "feature engineering", and (3) a clear physical interpretation in terms of transition probabilities within the framework of quantum field theory. In this talk, we present applications of machine learning that dramatically speed up ME method calculations, and novel cyberinfrastructure developed to execute ME-based analyses on heterogeneous computing platforms.
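For orientation, the per-event probability at the heart of the ME method has the standard form

$$P(x\,|\,\alpha) \;=\; \frac{1}{\sigma_{\alpha}}\int d\Phi(y)\; f(q_1)\,f(q_2)\;\big|\mathcal{M}_{\alpha}(y)\big|^{2}\;W(x\,|\,y),$$

where $y$ runs over parton-level configurations, $f$ are the parton distribution functions, $W(x|y)$ is the detector transfer function and $\sigma_{\alpha}$ normalises the density; it is this high-dimensional integral, evaluated once per event and per hypothesis $\alpha$, that the ML surrogates are trained to accelerate.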
COSINE-100 is a NaI-based dark matter detection experiment located at the Yangyang Underground Laboratory in South Korea. By searching for an annual modulation signal in NaI crystals, COSINE-100 aims to provide a model-independent test of the long-standing but contested positive dark matter signal from the DAMA collaboration, using the same target material and search method. In this talk, results from approximately five years of data taking at COSINE-100 will be presented, summarising various dark matter search efforts, with a focus on preliminary results from five years of annual modulation searches.
The SABRE project aims to produce ultra-low-background NaI(Tl) scintillating detectors to carry out a model-independent search for dark matter through the annual modulation signature, with an unprecedented sensitivity to confirm or refute the DAMA/LIBRA claim. The ultimate goal of SABRE is to operate two independent NaI(Tl) crystal arrays located in the northern (SABRE North) and southern (SABRE South) hemispheres to identify possible contributions to the modulation from seasonal or site-related effects. SABRE North has carried out extensive R&D on the production of ultra-radio-pure NaI(Tl) crystals, as a large fraction of the background in the (1-6) keV energy region of interest (ROI) for the dark matter search comes from radioactive contaminants in the crystals themselves, most notably $^{40}$K, $^{87}$Rb, $^{210}$Pb, and $^{3}$H. A definitive test of the annual modulation claim has to be addressed by next-generation experiments using NaI(Tl) crystals with radio-purity similar to or below the DAMA level, such as the proposed SABRE experiment. Direct counting of beta and gamma particles with the SABRE Proof-of-Principle detector, equipped with a liquid scintillator active veto and operated at the Gran Sasso National Laboratory (LNGS), has already demonstrated very low internal radioactivity for the so-called NaI-33 crystal. The potassium contamination is found to be (2.2 ± 1.5) ppb, the lowest ever achieved for NaI(Tl) crystals. With the active veto, the average background rate in the crystal in the ROI is (1.20 ± 0.05) counts/day/kg/keV, a breakthrough with respect to the DAMA/LIBRA experiment. Our background model indicates that the rate is dominated by $^{210}$Pb and that about half of this contamination is located in the PTFE reflector. The liquid scintillator veto was initially proposed to effectively reduce the $^{40}$K background from a predicted natK contamination at the level of (10-20) ppb. As presented here, data acquired for about one year with the NaI-33 detector in a purely passive shielding made of copper, polyethylene and water (PoP-dry setup) have shown that, if the crystal's vetoable internal contaminations are of the order of those of NaI-33, the active veto is no longer a crucial feature for achieving a background rate lower than or comparable to that of DAMA/LIBRA.
We discuss ongoing developments of the crystal manufacture aimed at further reduction of the background. These results represent a benchmark for the development of next-generation NaI(Tl) detector arrays for the direct detection of dark matter particles. A projected background rate of the order of 0.3 counts/day/kg/keV in the ROI is within reach. With this level of background it is possible to design a fully operational detector based on an array of ultra-high-purity NaI(Tl) scintillating crystals with a total mass that is only a fraction of that of present-generation detectors, yet surpassing the sensitivity achieved so far.
Today, the situation in direct dark matter detection is puzzling: the DAMA/LIBRA experiment observes an annual modulation signal at high statistical significance, consistent with the expectation for a cold dark matter halo in the Milky Way. However, in the so-called standard scenario for dark matter halo and dark matter interaction properties, the DAMA/LIBRA signal contradicts the null results of numerous other experiments.
COSINUS aims for a model-independent cross-check of the DAMA/LIBRA signal. To be immune to potential dependencies on the target material, COSINUS will use NaI target crystals, the same material as DAMA/LIBRA. Several experimental efforts with NaI targets are planned or already ongoing. COSINUS is the only experiment operating NaI as a cryogenic detector, which yields several distinctive advantages: Discrimination between electronic interactions and nuclear recoils off sodium and iodine on an event-by-event basis, a lower nuclear recoil energy threshold, and a better energy resolution.
The construction of COSINUS started in December 2021 at the LNGS underground laboratory in central Italy. In this contribution, we will report on the current status of the construction and discuss in detail the cryogenic NaI detectors which use an innovative phonon readout, denoted remoTES.
Despite great efforts to directly detect dark matter (DM), experiments have so far found no evidence. The sensitivity of direct DM detection is approaching the so-called neutrino floor, below which it is hard to disentangle a DM candidate from the neutrino background. One of the promising methods of overcoming this barrier is to utilize the directional signature that both neutrino- and dark-matter-induced recoils possess. Nuclear emulsion is the most promising technology with the nanometric resolution needed to disentangle the DM signal from the neutrino background. The NEWSdm experiment, located in the Gran Sasso underground laboratory in Italy, is based on a novel nuclear emulsion acting both as the Weakly Interacting Massive Particle (WIMP) target and as a tracking device with nanometric accuracy. This would provide a powerful method of confirming the Galactic origin of dark matter, thanks to the cutting-edge technology developed to read out sub-nanometric trajectories. In this talk we discuss the experiment design, its physics potential, the performance achieved in test beam measurements and the near-future plans. After the submission of a Letter of Intent, a new facility for emulsion handling was constructed in the Gran Sasso underground laboratory and is now under commissioning. A Conceptual Design Report is in preparation and will be submitted in 2022.
The CYGNUS proto-collaboration aims to establish a Galactic Directional Recoil Observatory at the ton-scale that could test the DM hypothesis beyond the Neutrino Floor and measure the coherent and elastic scattering of neutrinos from the Sun and possibly Supernovae. A unique capability of CYGNUS will be the detailed measurement of the topology and direction of low-energy nuclear and electron recoils in real time. Other key features of CYGNUS are modular, recoil-sensitive TPCs (electron and/or negative-ion drift operation) filled with a helium-fluorine-based gas mixture at atmospheric pressure, for sensitivity to low WIMP masses for both spin-independent and spin-dependent couplings. Installation in multiple underground sites (including the Southern Hemisphere), with a staged expansion, is foreseen to mitigate contingencies, minimise location systematics and improve sensitivity. Current and near-term, m^3-scale detectors can be used for precision studies of final-state topology, such as measurements of the Migdal effect, and for searches for beyond-the-Standard-Model (BSM) physics at beam dumps and neutrino beams. Next-generation, 10 m^3 detectors should allow measurements of CNO solar neutrinos via coherent elastic scattering, and produce improved limits on spin-dependent DM scattering. A ton-scale observatory would probe unexplored DM parameter space, including below the neutrino floor, and could be used to confirm the galactic origin of a dark matter signal. We will review the key features and expected physics reach of CYGNUS, and the programs currently underway in several laboratories to optimise gas mixtures, technologies and algorithms towards the realisation of this concept.
The plans for LHCb Upgrade II in the HL-LHC era include complementing the experiment's particle ID capabilities in the low-momentum region up to 10-15 GeV with the novel TORCH time-of-flight detector. TORCH is designed to provide 15 ps timing resolution for charged particles, resulting in K/pi (p/K) particle identification up to 10 (15) GeV/c momentum over a 10 m flight path. Cherenkov photons, produced in a quartz plate of 10 mm thickness, are focused onto an array of micro-channel plate photomultipliers (MCP-PMTs) which measure the photon arrival times and spatial positions. We present the latest TORCH design for the LHCb Upgrade II Framework Design Report, including a novel, computationally efficient TORCH pattern recognition algorithm, and the simulated particle ID performance in LHCb Upgrade II high-luminosity running conditions. As a proof of concept, a half-scale (660 x 1250 x 10 mm^3) TORCH demonstrator module, instrumented with customised MCP-PMTs, has been tested in a mixed proton-pion beam at the CERN PS. The MCP-PMTs, with an active area of 53 x 53 mm^2 and a granularity of 64 x 8 pixels, have been developed in collaboration with an industrial partner (Photek). We present a comprehensive analysis of the testbeam data, complemented by lab-based performance measurements of individual TORCH components. A fully instrumented TORCH prototype module is under construction.
At the latest European strategy update in 2020, it was highlighted that the next highest-priority collider should be an $e^+e^-$ Higgs factory with a strong focus on precision physics. Particle identification will be an essential tool for such precision measurements, exploiting the clean event environment and pushing event reconstruction to its full potential. The recent development of fast-timing Si sensors such as LGADs, with a time resolution below 50 ps, will make it possible to enhance precision measurements at future Higgs factories with additional separation of $\pi^{\pm}$, $K^{\pm}$ and $p$ using the time-of-flight technique. In this study we present our latest developments of the time-of-flight particle identification algorithm with a brief overview of its potential physics applications, discuss realistic design implementations inside a future Higgs factory detector using the International Large Detector (ILD) as an example, and highlight the key role and importance of fast-timing detectors for $\pi^{\pm}$, $K^{\pm}$, $p$ identification.
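To put numbers to the time-of-flight technique, the short Python calculation below gives the $\pi$/K arrival-time difference; the 2 m flight path and the momentum grid are illustrative assumptions rather than ILD design values.

    # Time-of-flight difference between pi and K over a straight flight path.
    # Flight length and momenta are illustrative assumptions only.
    import math

    C = 299_792_458.0             # speed of light, m/s
    M_PI, M_K = 0.13957, 0.49368  # masses in GeV/c^2

    def tof(p_gev, mass, length_m):
        """Arrival time (s) for a particle of momentum p and mass m over length L."""
        e = math.hypot(p_gev, mass)   # energy in GeV
        beta = p_gev / e
        return length_m / (beta * C)

    L = 2.0  # assumed flight path in metres
    for p in (1.0, 3.0, 5.0, 10.0):
        dt_ps = (tof(p, M_K, L) - tof(p, M_PI, L)) * 1e12
        print(f"p = {p:4.1f} GeV/c: Delta t(K - pi) = {dt_ps:6.1f} ps")

Against a ~50 ps resolution this gives clean separation at a few GeV/c, shrinking roughly as $1/p^2$ towards higher momenta, which is why time-of-flight PID targets the low-momentum region.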
A 51-kiloton magnetised Iron Calorimeter (ICAL) detector, using Resistive Plate Chambers (RPCs) as active detector elements, aims to study atmospheric neutrinos. It will be the flagship experiment at the India-based Neutrino Observatory (INO), which is proposed to be housed in a cavern at the end of a 2 km tunnel in a mountain near Pottipuram (Tamil Nadu). A prototype, 1/600th the weight of ICAL and called mini-ICAL, was installed at the INO transit campus in Madurai in order to gain experience in the construction of a large-scale electromagnet, to study the detector performance, and to test the ICAL electronics in the presence of a fringe magnetic field. The 4 m × 4 m × 1.1 m mini-ICAL magnet with 11 iron layers, and the 2 m × 4 m × 1.1 m active detector using 20 RPCs housed in the central region of the magnet, have been in operation for about 4 years, collecting cosmic muon data. A proof-of-principle cosmic muon veto detector (CMVD) of about 1 m × 1 m × 0.3 m was set up a few years ago using scintillator paddles. The measured cosmic muon veto efficiency of ~99.98% and simulation studies of muon-induced background events in an ICAL detector surrounded by an efficient veto detector were promising. This led to the idea of constructing a bigger cosmic muon veto around the mini-ICAL detector.
The CMVD will comprise veto walls on three sides of the mini-ICAL plus a top layer, all built using extruded scintillator strips (donated by Fermilab). The top layer (the roof) of mini-ICAL will have four layers of scintillator strips, and the standing veto walls on three sides (left, right, posterior) will each have three layers of scintillator strips. The layers of each veto wall will be staggered (by 15 mm) so as to minimize the effect of inter-strip gaps. There will be no anterior veto wall, so as to allow for maintenance of the mini-ICAL detector. Strips 4500-4700 mm in length, 50 mm wide and 10 or 20 mm thick are used to construct the veto shield, which aims at 99.99% efficiency in tagging cosmic muons. Double-clad WLS fibres ~1.4 mm in diameter (from Kuraray) are inserted into two extruded fibre holes along the length of the strip, separated by 25 mm, to collect the light signal. Hamamatsu SiPMs of 2 mm × 2 mm active area will collect the light at both ends of the fibres. In total, 712 strips, 6.6 km of fibre and 2848 SiPMs will be used. All four veto walls/stations are designed to be movable from their designed positions, enabling better service access to the mini-ICAL.
The SiPM signals are amplified using a trans-impedance stage with a gain of ~1200 Ω and fed to the DRS4 sampler, operating at 1 GS/s. The sampling window is chosen so as to cover the entire SiPM signal profile, as well as the trigger latency of mini-ICAL. On receiving the cosmic muon trigger from mini-ICAL, the sampled data are digitised. Either zero-suppressed pulse-profile data or integrated signal-charge data of all hit channels of the CMVD will be transferred to the backend. The muon veto efficiency of the CMVD is computed by extrapolating the muon tracks recorded by the mini-ICAL onto the veto walls and matching them to the CMVD hits there. 72 FPGA-based DAQ boards, each hosting 40 trans-impedance amplifiers and five DRS4 and ADC chips, besides a network interface, are being developed. Customised SiPM bias supply units, along with extensive configuration, control and calibration of the detector elements as well as the electronics, are also being designed.
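To make the two readout options above concrete, here is a minimal Python sketch operating on a synthetic DRS4-like waveform; the threshold, pedestal and pulse shape are invented for illustration and are not the CMVD firmware parameters.

    # Sketch of the two CMVD readout options for a DRS4-like sampled waveform:
    # (a) zero-suppressed pulse profile, (b) integrated signal charge.
    import numpy as np

    rng = np.random.default_rng(seed=7)
    NS_PER_SAMPLE = 1.0          # 1 GS/s sampling -> 1 ns per sample

    # Synthetic waveform: pedestal + noise + one SiPM-like pulse (all invented).
    n = 1024
    wave = 10.0 + rng.normal(0.0, 0.5, n)                 # ADC counts
    t0, tau = 400, 20.0
    wave[t0:] += 30.0 * np.exp(-np.arange(n - t0) / tau)

    pedestal = np.median(wave[:200])                      # from pre-trigger samples
    signal = wave - pedestal

    # (a) zero suppression: keep only samples above threshold, with their indices
    threshold = 5.0                                       # ADC counts above pedestal
    hits = np.flatnonzero(signal > threshold)
    profile = list(zip(hits.tolist(), signal[hits].round(1).tolist()))

    # (b) integrated charge in a fixed window from the first threshold crossing
    start = hits[0]
    window = signal[start:start + 100]                    # 100 ns integration gate
    charge = window.sum() * NS_PER_SAMPLE                 # ADC counts x ns

    print(f"{len(profile)} samples kept after zero suppression; charge = {charge:.0f}")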
Details of the design, fabrication, quality control and construction of the detector, including the planned electronics, trigger and DAQ systems, will be briefly presented.
Serendipitously discovered by the BATSE mission in the nineties, Terrestrial Gamma-ray Flashes (TGFs) represent the most intense and energetic natural emission of gamma rays from our planet. TGFs consist of sub-millisecond bursts of gamma rays (with energies up to one hundred MeV) generated during powerful thunderstorms by lightning (average ignition altitude of about 10 km) and are in general accompanied by several other counterparts (electron beams, neutrons, radio waves). The ideal observatory for TGFs is therefore a fast detector, possibly with spectral capabilities, orbiting Earth in LEO (Low Earth Orbit). To date, the benchmark observatory is ASIM, an instrument flying onboard the International Space Station (ISS); however, TGF science is being addressed by new instruments, a few of them orbiting in free flight around Earth: among these, LIGHT-1, a 3U CubeSat mission launched on December 21st, 2021 and deployed from the ISS on February 3rd, 2022. The LIGHT-1 payload consists of two similar instruments conceived to effectively detect TGFs on a few-hundred-nanosecond timescale. The detection unit is composed of a scintillating crystal organised in four optically independent channels, read out by as many photosensors. The detection unit is surrounded by a segmented plastic scintillator layer that acts as an anti-coincidence veto for charged particles. The customised electronics embeds the power supplies and detector readout, signal processing and detector controls, and acts as the interface with the bus of the spacecraft. LIGHT-1 makes use of two different scintillating crystals, namely low-background Cerium Bromide and Lanthanum Bromo-Chloride, and two different photosensing technologies based on photomultiplier tubes (R11265-200 manufactured by Hamamatsu) and silicon photomultipliers (ASD-NUV1C-P manufactured by Advansid and S13361-6050AE-04 manufactured by Hamamatsu). The payload performance and a detailed description will be provided, along with simulation and pre-flight diagnostic tests and calibration. The results of commissioning and preliminary flight data will also be presented.
Hyper-Kamiokande (HK) will be a next-generation water Cherenkov detector capable of measuring neutrino interactions with unprecedented statistical precision. Discriminating candidate neutrino interactions from cosmic-ray muons and low-energy backgrounds is dependent upon constructing an effective Outer Detector (OD). The baseline design proposes deploying up to ten thousand 3-inch high-sensitivity photomultipliers, each coupled to an acrylic wavelength-shifting (WLS) square plate. Sophisticated optical measurements using a high-powered laser setup have improved on existing absorbance results and demonstrated a previously unknown artifact of Mie scattering present in all candidate WLS samples. Results from a new test facility (Baby-K), designed to evaluate the light collection efficiency of all WLS plates via their response to cosmic muons in ultra-pure water, will also be presented. This talk will provide an overview of the R&D effort ongoing in Oxford to optimise the OD photosensor design, including the latest water- and air-based measurements of WLS samples in combination with detailed simulation studies carried out in GEANT4.
IceCube-Gen2 is a proposed high-energy extension of IceCube that would expand the high-energy neutrino sensitivity by an order of magnitude. IceCube, located at the South Pole, is the world's largest neutrino telescope. The IceCube-Gen2 optical array has a planned instrumented volume of 7.9 km^3, 8 times larger than that of IceCube, and will deploy 9,600 modules on 120 new strings with 240 m spacing. To cover such a horizontally sparse array, the design goals include increasing the optical module photosensitivity to a factor of 3 better than that of the IceCube Digital Optical Module (DOM), while the diameter of the modules needs to be 10% smaller to reduce the ice-drilling cost.
Two candidates for the baseline design have total lengths of 540 mm and 444 mm, containing 18 and 16 4-inch PMTs, respectively. Each glass pressure vessel will consist of two halves with hemispherical end caps and a roughly cylindrical central section with a diameter of 12 inches. GEANT4 simulations, combined with lab measurements, have confirmed that the new optical modules have a factor of 3 more photosensitive area compared to the IceCube DOM. In this talk, we will show the unique design and expected performance of the new optical module and review the current status of development.
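A back-of-the-envelope check of the factor-of-3 claim is possible from geometry alone, approximating each photocathode by the flat cross-sectional area of its nominal diameter (a crude simplification; the IceCube DOM houses a single 10-inch PMT):

    # Rough photocathode-area comparison: IceCube DOM (one 10-inch PMT) versus a
    # Gen2 module with 16 or 18 4-inch PMTs. Flat-disc approximation, for
    # illustration only.
    import math

    def disc_area(diameter_inch):
        return math.pi * (diameter_inch / 2.0) ** 2   # square inches

    dom = disc_area(10.0)
    for n_pmts in (16, 18):
        gen2 = n_pmts * disc_area(4.0)
        print(f"{n_pmts} x 4-inch PMTs / one 10-inch PMT: {gen2 / dom:.1f}x the area")

This yields ratios of about 2.6 and 2.9, in the right ballpark for the stated factor of 3 once the improved angular coverage of the multi-PMT layout is folded in.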
Communicating the science and achievements of the ATLAS Experiment is a core objective of the ATLAS Collaboration. This talk will explore the range of communication strategies adopted in ATLAS communications, with particular focus on how these have been impacted by the COVID-19 pandemic. In particular, an overview of ATLAS' digital communication platforms will be given, with focus on social media, YouTube and Virtual Visits; the effect on target audiences is evaluated and best practices are shared.
In 2018, the CMS Collaboration decided to start releasing 500-1000 word popular science articles that describe the collaboration's scientific publications. This talk summarises the experience and impact of the CMS briefings effort.
Twenty years ago an ambitious and groundbreaking project was born within the INFN community with the aim of popularizing physics using a web portal.
Over these years, students and the lay public have been engaged with the hottest topics of modern research in particle and nuclear physics, astroparticle physics and theoretical physics.
Since the beginning, "ScienzaPerTutti" (*) has evolved in many different directions, becoming a reference point for high-school students and teachers, with an average of 3000 visits every day.
The project encompasses a variety of multimedia products such as didactic units, research materials, columns, infographics, interviews, book reviews, and podcasts. A particular feature of many of these activities is a constant call to action to involve our audience, providing us with continuous insights to optimize contents and methodologies.
This contribution focuses on three of these activities. Firstly, we present "Ask the expert", a section in which people can submit science-related questions and curiosities and interact with specialists. Secondly, we illustrate the annual contest, now in its XVII edition, addressed to high-school students and devoted, every year, to a different topic. In 2021, more than 70 groups participated in the competition, which was dedicated to scientific errors identified by students in films, songs, art, and literature. An analysis of the submitted works and their outcomes is discussed here. Finally, we present "Characters and challenges of Physics", a scientific exhibition dedicated to nine female and male scientists who contributed, through their challenges, to deepening our knowledge of modern physics. This exhibition was showcased during the Rome Science Festival 2021.
An outlook on future initiatives concludes the presentation.
(*) https://scienzapertutti.infn.it
This seminar presents the ASIMOV Prize for scientific publishing, born in Italy in 2016.
The Prize aims to bring the young generations closer to scientific culture through the critical reading of popular science books. The books are selected by a committee that includes scientists, professors, Ph.D. holders and Ph.D. students, writers, journalists and friends of culture, and, most importantly, over 800 school teachers. Students are actively involved in the Prize, according to the best practices of public engagement: they read and review the books and vote for them, choosing the winner. The experience is quite successful: 14,000 students from 270 schools all over Italy participated in the last edition.
In this seminar, some crucial issues concerning the ASIMOV Prize are examined: the theme of "Two Cultures", STEM subjects, the so-called gender gap. The possibility of replicating this experience in other countries, as has been done in Brazil - with more than encouraging results - is discussed.
In April 1960, the late Prince Philip, husband of Queen Elizabeth II, piloted his Heron airplane to Geneva for an informal visit to CERN. Having toured the laboratory's brand new "25 GeV" Proton Synchrotron, he turned to his host, president of the CERN Council François de Rose, and asked: "What have you got in mind for the future? Having built this machine, what next?" De Rose replied that this was a big problem for the field: "We do not really know whether we are going to discover anything new by going beyond 25 GeV." Unbeknown to physicists at the time, the weak gauge bosons were lying beyond the energy of the PS, and would be found two decades later at its successor, the SPS...
It's a story that is repeated in elementary particle physics. While some colliders had clearer physics goals than others, every one of them has led to a step-change in our understanding of the fundamental laws of the universe. The LHC was a pinnacle in this regard, thrusting high-energy physics under the glare of the media and turning the Higgs boson into a household name. How should the next major collider be "sold"?
Enthusiasm and consensus in the community are key factors. In the early 1990s, with the Higgs in its sights, there was agreement that an energy-frontier hadron collider was the right step forward. Today -- against a backdrop of the LHC's discovery of a light Higgs boson and no particles beyond the Standard Model, and puzzles such as dark matter and neutrino masses -- the field finds itself at a crossroads. Several major colliders are on the menu, each with its own physics capabilities, technology challenges, history and sociology.
Whether straight or circular, European or Asian, the next big collider requires a fresh narrative if it is to inspire physicists, funding agencies and the wider world. The rosy picture of eager experimentalists uncovering new elementary particles and wispy-haired theorists picking up Nobel prizes seems antiquated now that all the particles of the Standard Model have been found. The global situation is also very different to when the LHC was approved, and the technology and scale of the next collider more ambitious.
Drawing on the history and status of the field, the European strategy update and the CERN communication strategy, CERN Courier editor Matthew Chalmers will explore how best to communicate the next collider.
The production of quarkonia in hadronic collisions provides a unique testing ground for quantum chromodynamics (QCD), since it involves both the perturbative and non-perturbative regimes of the theory. As quarkonium formation is not yet fully understood, a variety of new experimental data provide new insights and help to constrain the models. In addition to inclusive J/$\psi$ production, the ALICE detector can experimentally separate prompt charmonia from those produced in decays of hadrons containing a b quark.
Also, new experimental observables like the angular correlation between J/$\psi$ and charged particles bring new insights to quarkonium production in hadronic collisions. Measurements of the azimuthal correlation structure of emitted particles in high multiplicity proton-proton (pp) collisions can reflect the medium response to the initial collision geometry.
In this contribution, we present new results of the inclusive, prompt and non-prompt J/$\psi$ production in pp collisions at $\sqrt{s} = 5.02 $ and 13 TeV. The angular correlation between J/$\psi$ and charged particles in pp collisions at $\sqrt{s} = 13$ TeV will also be shown. Finally, the elliptic flow ($v_{2}$) of J/$\psi$ in high multiplicity pp collisions at $\sqrt{s} = 13$ TeV will be presented.
The suppression of bottomonium states is closely related to their interaction with the QGP, supposedly created in heavy-ion (AA) collisions. The different binding energies of the bottomonium states provide a unique pattern of yield modification which is useful for studying the thermal properties of the QGP. Previous results from CMS have shown evidence of sequential suppression for $\Upsilon$(1S), $\Upsilon$(2S), and $\Upsilon$(3S). However, the available statistics were too limited to clearly identify the $\Upsilon$(3S) meson. In this talk, we present new measurements of excited bottomonium states with improved analysis techniques and high-statistics data, which enable us to observe the $\Upsilon$(3S) meson in AA collisions for the first time. The results are compared with various theoretical calculations and provide strong constraints on the dynamical models.
J/$\psi$ is an important probe for studying the properties of the quark-gluon plasma (QGP) created in heavy-ion collisions. Measurements of J/$\psi$ yield suppression in Au+Au collisions at $\sqrt{s_\mathrm{NN}}$ = 200 GeV suggest that J/$\psi$ production in heavy-ion collisions is affected by the interplay of several effects, including dissociation and regeneration in the QGP, and cold nuclear matter effects. Studying the properties of the QGP via J/$\psi$ requires a good understanding of all these effects, which is very challenging and requires high-precision measurements. All these effects are expected to depend strongly on collision energy and collision system. STAR collected large data samples of Au+Au collisions at $\sqrt{s_\mathrm{NN}}$ = 54.4 GeV in 2017 and isobaric collisions ($^{96}_{44}Ru$ + $^{96}_{44}Ru$ and $^{96}_{40}Zr$ + $^{96}_{40}Zr$) at $\sqrt{s_\mathrm{NN}}$ = 200 GeV in 2018. These datasets provide a unique opportunity to study the collision energy and system size dependence of J/$\psi$ production with good precision.
In this contribution, precision measurements of inclusive $J/\psi$ production via the $e^{+}e^{-}$ decay channel will be presented. The centrality and transverse momentum dependence of the nuclear modification factors in Au+Au collisions at $\sqrt{s_\mathrm{NN}}$ = 54.4 GeV, and in $^{96}_{44}Ru$ + $^{96}_{44}Ru$ and $^{96}_{40}Zr$ + $^{96}_{40}Zr$ collisions at $\sqrt{s_\mathrm{NN}}$ = 200 GeV, will be shown. These results will be compared to similar measurements in Au+Au and Cu+Cu collisions at $\sqrt{s_\mathrm{NN}}$ = 200 GeV, and the physics implications will also be discussed.
Quarkonium production is a direct probe of deconfinement in heavy-ion collisions. For J/$\psi$, a bound state of $c\bar{c}$ quarks, the (re-)generation is found to be the dominant production mechanism at low transverse momentum ($p_{T}$) and in central collisions at the LHC energies.
In addition, the non-prompt component of J/$\psi$ production from b-hadron decays allows one to access the interaction of b-hadrons with the QGP down to low transverse momentum.
In this talk, measurements of the J/$\psi$ nuclear modification factor $R_{AA}$ as a function of centrality and $p_{T}$ in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV will be shown. Prompt and non-prompt J/$\psi$ production measurements at midrapidity ($|y|$ < 0.9), as well as inclusive J/$\psi$ results at large rapidity (2.5 < $y$ < 4), will be presented, exploiting the whole data sample collected during Run 2. The prompt/non-prompt separation extends down to very low $p_{T}$ and its precision is improved significantly compared to previous publications. Additionally, measurements of inclusive, prompt and non-prompt J/$\psi$ in p-Pb collisions will be shown, and their consequences for the interpretation of nucleus-nucleus data will be discussed. All results will be compared with model calculations.
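For reference, the nuclear modification factor quoted above follows the standard definition, with $\langle N_{\mathrm{coll}} \rangle$ the average number of binary nucleon-nucleon collisions in the given centrality class:

$$R_{AA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,\frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T}$$

$R_{AA} = 1$ corresponds to the absence of nuclear effects; suppression ($R_{AA} < 1$) at high $p_T$ and the low-$p_T$ enhancement expected from (re-)generation are exactly the behaviours probed by the measurements above.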
Charm and bottom quark production is an important experimental observable that sheds light on the heavy quark interaction with the nuclear medium. With high statistics datasets, tracking and PID at very low transverse momentum, and excellent vertexing capabilities, LHCb performs precision measurements of a rich set of heavy flavor hadrons, including B mesons, open charm hadrons and charmonia. These capabilities allow for precise studies of strangeness enhancement, baryon enhancement, and charmonia suppression in various colliding systems from $pp$ to $p$Pb and PbPb. Furthermore, the production of the exotic $X$(3872) hadrons in $pp$ and $p$Pb collisions is also studied. The nuclear modification factor $R_{pA}$ for the four-quark state $X$(3872) is measured for the first time. We will present these results along with comparisons to theoretical calculations.
Recent experimental measurements display an enhanced production of charmed baryons in high-energy nucleus-nucleus collisions. Quite surprisingly, the same is found in proton-proton collisions, in which the relative yields of charmed baryons agree neither with the expectations based on e+e- collisions nor with the predictions of those QCD event generators whose hadronization stage is tuned to reproduce this more elementary situation.
Medium modification of hadronization, via some mechanism of recombination with light thermal partons, has long been known to be an essential ingredient to implement in transport calculations in order to describe experimental data on heavy-flavour production in nucleus-nucleus collisions. This is true both for the momentum and angular distributions of the final charmed/beauty hadrons and for their relative yields.
In this talk I will present the main features of a novel hadronization scheme we developed and implemented in our POWLANG transport setup, also showing our first results for the heavy-flavour particle ratios and flow coefficients in nucleus-nucleus collisions, in satisfactory agreement with recent experimental data. The model is based on the formation of color-singlet clusters via recombination of a charm quark with a light thermal antiquark or diquark (assumed to be present in the medium around the critical temperature) from the same fluid cell. If the cluster is sufficiently light it undergoes a two-body decay; if its invariant mass is larger it is treated as a Lund string and fragmented accordingly. The model has some nice features: modelling hadronization as a 2->N process allows exact four-momentum conservation; involving particles from the same fluid cell, it contains space-momentum correlations by construction; recombination with diquarks allows one to describe charmed-baryon production; and at large pT it naturally approaches standard vacuum-like fragmentation.
Results referring to nucleus-nucleus collisions can be found in our recent publication arXiv:2202.08732 [hep-ph].
A consistent modelling of the proton-proton reference (both for minimum-bias and high-multiplicity collisions), under the assumption of the formation of a small short-lived QGP droplet, including in-medium heavy-quark transport and hadronization, is currently under development; preliminary results will be shown, with the aim of providing a unified picture of heavy-flavour production in small and large systems.
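As a schematic illustration of the cluster-routing step described above, the Python toy below builds a cluster from a charm quark and a thermal antiquark and routes it by invariant mass; the 2 GeV threshold, constituent masses and momenta are invented for illustration and are not the POWLANG values.

    # Toy version of the recombination step: form a color-singlet cluster from a
    # charm quark and a light thermal antiquark from the same fluid cell, then
    # route it by invariant mass. All numbers are invented for illustration.
    import math

    M_C, M_LIGHT = 1.5, 0.3   # assumed constituent masses in GeV

    def four_momentum(m, px, py, pz):
        return (math.sqrt(m * m + px * px + py * py + pz * pz), px, py, pz)

    def invariant_mass(p1, p2):
        e = p1[0] + p2[0]
        px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
        return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

    charm = four_momentum(M_C, 2.0, 0.0, 0.0)         # charm quark after transport
    thermal = four_momentum(M_LIGHT, -0.4, 0.3, 0.1)  # thermal antiquark, same cell

    m_cluster = invariant_mass(charm, thermal)
    if m_cluster < 2.0:                               # light cluster (assumed cut)
        print(f"M = {m_cluster:.2f} GeV -> two-body decay into hadron + pion")
    else:                                             # heavy cluster
        print(f"M = {m_cluster:.2f} GeV -> treat as Lund string and fragment")

Because the cluster four-momentum is the exact sum of its constituents, this 2->N construction conserves four-momentum by design, which is one of the features highlighted above.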
The next-generation undersea neutrino telescopes of KM3NeT continue to grow on the bottom of the Mediterranean Sea, and so does their potential to make exciting discoveries. The larger of the two detectors, KM3NeT/ARCA, is located 3.5 km underwater, 80 km off the shore of Portopalo di Capo Passero in Italy. Its planned size of one cubic kilometre and unprecedented depth are both linked to its core physics goal: the observation of cosmic neutrinos. The current detector configuration has already outgrown its predecessor ANTARES and taken over its role in multi-messenger follow-up studies of transient events. In this talk, an overview of the most recent results obtained with KM3NeT/ARCA is presented. Expected sensitivities for the complete detector are also shown.
We have studied the hierarchy sensitivity of the Protvino to ORCA (P2O) experiment in standard three-flavor oscillation and in the presence of NSI. As P2O has a baseline of 2595 km, it is expected that P2O should have better sensitivity to the mass hierarchy and NSI compared to the DUNE experiment. Despite minimal P2O having more appearance events than DUNE, we find that it has less sensitivity to the hierarchy than DUNE. The hierarchy sensitivity of P2O becomes equivalent to that of DUNE for $\delta_{CP}=195^{\circ}$ with a background reduction factor of 0.46 and an appearance-channel background systematic normalization of 4$\%$. We refer to this configuration as Optimized P2O in our work. We see that the $\epsilon_{e\tau}$ ($\epsilon_{e\mu}$) sensitivity of Optimized P2O is better than (similar to) that of DUNE when both $\epsilon_{e\mu}$ and $\epsilon_{e\tau}$ are considered in the analysis. We find that the change in hierarchy sensitivity of P2O is more significant compared to DUNE in the presence of NSI. Further, the hierarchy sensitivity in the presence of NSI is higher (lower) than in the standard three-flavor case for $\delta_{\rm CP} = 270^\circ (90^\circ)$.
IceCube's discovery of astrophysical neutrinos, and the subsequent characterization of their energy spectrum up to a few PeV, has provided a new window into the high-energy Universe. However, many opportunities for discovery remain; low sample sizes still plague measurements of astrophysical neutrinos above 1 PeV, and flavor measurements are challenging due to the difficulty of differentiating tau events from other flavors. A series of next-generation experiments aim to provide a novel aperture into this under-explored component of the high-energy neutrino spectrum. Among them is TAMBO (Tau Air-Shower Mountain-Based Observatory), a proposed water-Cherenkov detector set on a cliff edge in the high Peruvian Andes. Utilizing the unique geometry of the Colca valley, TAMBO is situated to produce a high-purity sample of 1-100 PeV astrophysical tau neutrino events. This talk will discuss recent progress and highlight the prospects and challenges of astrophysical tau neutrino detection in the next generation of neutrino experiments.
NEXT-100 is a neutrinoless double beta decay experiment located at the Canfranc Underground Laboratory and is due to start commissioning in Summer 2022. The experiment employs a high-pressure gas time projection chamber containing 100 kg of enriched Xe-136 and is capable of achieving sub-percent energy resolution (FWHM) at the decay energy, as well as background rejection through calorimetry and reconstruction of the event topology. Excellent energy resolution is essential for the experiment to minimise the contamination of backgrounds in the signal region and is realised through the high-gain, low-noise amplification of ionisation signals via electroluminescence (EL). This talk will review the physics goals of NEXT-100 and the status of construction of the TPC and sensor planes.
The NEXT (Neutrino Experiment with a Xenon TPC) collaboration aims at a sensitive search for the neutrinoless double beta decay ($\beta\beta0\nu$) of 136Xe at the Laboratorio Subterraneo de Canfranc (LSC). The observation of such a lepton-number-violating process would prove the Majorana nature of neutrinos, also providing handles for an eventual measurement of the absolute neutrino mass. A first large-scale prototype of a high-pressure gas-xenon electroluminescent TPC, NEXT-White, was operated at the LSC from 2016 to 2021. This 5-kg radiopure detector demonstrated the outstanding performance of the NEXT technology in terms of energy resolution (<1% FWHM at 2.6 MeV) and topology-based background rejection. NEXT-White also measured the relevant backgrounds for the $\beta\beta0\nu$ search using both 136Xe-depleted and 136Xe-enriched xenon. In this talk, the measurement of the half-life of the two-neutrino mode of the double beta decay ($\beta\beta2\nu$) will be presented. For this measurement, two techniques novel in the field have been used: 1) a Richardson-Lucy deconvolution to reconstruct the single- and double-electron tracks, boosting the background rejection, and 2) a direct subtraction of the backgrounds, measured with 136Xe-depleted data. These techniques allow for background-model-dependent and background-model-independent results, demonstrating the robustness of the $\beta\beta2\nu$ half-life measurement and the unique capabilities of NEXT.
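For context, the Richardson-Lucy scheme mentioned above iterates multiplicative corrections of the form estimate <- estimate * (PSF* conv (data / (PSF conv estimate))). Below is a minimal one-dimensional Python sketch with a toy Gaussian point-spread function; it illustrates the algorithm only and is not the NEXT reconstruction code.

    # Minimal 1D Richardson-Lucy deconvolution (toy PSF and signal).
    import numpy as np

    def richardson_lucy(observed, psf, n_iter=50):
        psf_mirror = psf[::-1]
        estimate = np.full_like(observed, observed.mean())
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)
            estimate *= np.convolve(ratio, psf_mirror, mode="same")
        return estimate

    # Toy example: two nearby point-like deposits blurred by a Gaussian PSF.
    x = np.arange(100)
    truth = np.zeros(100); truth[[40, 55]] = 1.0
    psf = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2); psf /= psf.sum()
    observed = np.convolve(truth, psf, mode="same")

    restored = richardson_lucy(observed, psf)
    print("peaks restored near:", sorted(np.argsort(restored)[-2:].tolist()))

The multiplicative update preserves positivity and sharpens overlapping deposits, which is what makes the single- versus double-electron track topologies easier to separate.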
The NEXT collaboration is pursuing a phased program to search for neutrinoless double beta decay (0nubb) of 136Xe using high-pressure xenon gas time projection chambers. The power of electroluminescent xenon gas TPCs for 0nubb derives from their excellent energy resolution (<1% FWHM) and the topological classification of two-electron events, unique among scalable 0nubb technologies. Xenon gas detectors also offer a further opportunity: the plausible implementation of single barium daughter ion tagging, an approach that may reduce radiogenic and cosmogenic backgrounds by orders of magnitude and unlock sensitivities that extend beyond the inverted neutrino mass ordering. This talk will cover advances in the development of single-ion barium tagging for high-pressure xenon gas detectors and summarize R&D towards large-scale future phases of the NEXT program.
PandaX-4T is a large-scale multi-purpose experiment currently taking data at the China Jinping Underground Laboratory. Besides dark matter direct detection, the detector can be used to detect the double beta decay of Xe-136 and neutrinos from the Sun, with 4 tons of natural xenon in the active volume. In this talk, we will present the status of PandaX-4T's current data taking, the analysis effort to extend the region of interest beyond the traditional dark matter search, as well as the expected physics reach in neutrino physics.
The upgraded LHCb detector will take data at a five times higher instantaneous luminosity. In this talk we will cover the performance of the all-new tracking detectors and demonstrate how their improved granularity helps the LHCb Upgrade not only maintain but improve on the previous LHCb detector performance in many key areas. We also cover the particle identification performance for hadrons and leptons, and show first results from detector commissioning, alignment, and calibration with early 2022 data.
From 2022 the LHCb experiment will use a triggerless readout system collecting data at an event rate of 30 MHz and a data rate of 4 Terabytes/second. A software-only High Level Trigger will enable unprecedented flexibility for trigger selections. During the first stage (HLT1), track reconstruction and vertex fitting for charged particles enable a broad and efficient selection process to reduce the event rate to 1 MHz. Tracking and vertexing at 30 MHz represents a significant computing challenge, and LHCb utilizes the inherent parallelism of the triggering process to meet throughput requirements with GPUs. A close integration with the DAQ and event building allows for a particularly compact system, with the GPUs hosted in the same servers as the FPGA cards receiving the detector data, which reduces the network to a minimum. This architecture also inherently eliminates latency considerations, allowing GPUs to be used despite the very high required throughput. We review the software and hardware design of this system, reflect on the challenges of developing for heterogeneous architectures, discuss how it meets LHCbโs performance requirements, and show first commissioning results from LHC Run 3.
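For scale, the quoted rates imply the per-event and per-GPU figures below; the farm size is a hypothetical number for illustration, not the LHCb specification.

    # Implied scales of the HLT1 GPU stage from the quoted rates (the GPU count
    # is a hypothetical assumption for illustration only).
    EVENT_RATE = 30e6        # events/s into HLT1
    DATA_RATE = 4e12         # bytes/s
    OUTPUT_RATE = 1e6        # events/s after HLT1

    mean_event_size = DATA_RATE / EVENT_RATE
    print(f"average event size: {mean_event_size / 1e3:.0f} kB")
    print(f"HLT1 rejection factor: {EVENT_RATE / OUTPUT_RATE:.0f}x")

    N_GPUS = 200             # hypothetical farm size
    print(f"with {N_GPUS} GPUs: {EVENT_RATE / N_GPUS / 1e3:.0f} kHz per GPU")

An average event of roughly 130 kB at 30 MHz is what makes hosting the GPUs inside the event-building servers attractive: the raw data never has to cross an additional network hop before the first selection stage.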
LHCb's second level trigger, deployed on a CPU server farm, not only selects events but performs an offline-quality alignment and calibration of the detector and uses this information to allow physics analysts to deploy essentially their full offline analysis level selections (including computing isolation, flavour tagging, etc.) at the trigger level. This "real time analysis" concept has also allowed LHCb to fully unify its online and offline software codebases. We cover the design and performance of the system which will be deployed in Run 3, with particular attention to the software engineering aspects, particularly with respect to quality assurance and testing/limiting failure modes.
The Front-End Link eXchange (FELIX) system is a new ATLAS DAQ component designed to meet the evolving needs of detector readout into the High-Luminosity LHC era. FELIX acts as the interface between the data acquisition system; the detector timing and trigger systems; and new or updated trigger and detector front-end electronics. FELIX also routes data between custom serial links from the front-end electronics and data collection and processing components via a commodity switched network. FELIX is being installed and commissioned for a subset of ATLAS detectors for the upcoming LHC Run 3, and will be rolled out to cover all other detectors for the much more challenging environment of Run 4 and the HL-LHC. This presentation covers the design of FELIX and its evolution for the High-Luminosity LHC, plus commissioning activities ahead of Run 3 and early integration with Run 4 systems.
Radiative rare b-hadron decays are sensitive probes of New Physics through the study of branching fractions, angular observables, CP asymmetries and measurements of the polarisation of the photon emitted in the decay. The LHCb experiment is ideally suited for the analysis of these decays due to its high trigger efficiency, as well as excellent tracking and particle identification performance. Recent results from the LHCb experiment are presented and their interpretation is discussed.
Results are presented on LF(U)V tests through precise measurements of decays involving heavy mesons and leptons, which are compared to the standard model predictions. The measurements use 13 TeV pp collision data collected by the CMS experiment at the LHC.
The observation of lepton-flavour or lepton-number violation would be an incontrovertible sign of New Physics. The LHCb experiment is well suited to search for these phenomena in B-meson decays thanks to its large acceptance and trigger efficiency, as well as its excellent invariant-mass resolution and particle identification capabilities. Recent results on searches for lepton-flavour and lepton-number violating decays from the LHCb experiment will be presented.
In this talk I will introduce a minimal extension of the Standard Model (SM) featuring two leptoquarks: a doublet with hypercharge 1/6 and a singlet with hypercharge 1/3. Such a particle content is well motivated by what I denote as flavoured-unified theories, where families and forces are gauge interactions treated on the same footing. The presence of such a pair of leptoquarks induces the radiative generation of neutrino masses at the one-loop level without the need to introduce heavy right-handed states. Furthermore, the model's particle content can offer a simultaneous explanation for the current B-physics anomalies, whose significance is slowly but steadily increasing when compared to pure SM predictions, while keeping tightly constrained lepton-flavour-violation observables under control. I will discuss the close relation between the B-physics anomalies and neutrino properties (masses and mixing angles), and whether such an economical framework can explain all of them at once within 2, or even 1, sigma experimental uncertainty bounds. Last but not least, I will also discuss whether the lepton anomalous magnetic moments can be accommodated.
Decays of B mesons that proceed through electroweak penguin amplitudes attract significant attention due to a number of observed discrepancies between standard-model predictions and the results. Belle II is expected to perform measurements on channels closely related to those exhibiting anomalies and on channels that are uniquely available to Belle II. These include $b\to s(d)\nu\bar{\nu}$ and $b\to s(d)\tau^+\tau^-$ transitions. We present recent results on $b\to s\ell^+\ell^-$ and $b\to s\nu\bar{\nu}$ transitions. In addition, we show results related to the radiative penguin transitions $b\to s\gamma$ and $b\to d\gamma$.
We study the supersymmetric (SUSY) effects on $C_7(\mu_b)$ and $C'_7(\mu_b)$, the Wilson coefficients (WCs) for $b \to s \gamma$ at the b-quark mass scale $\mu_b$, which are closely related to radiative B-meson decays. The SUSY-loop contributions to $C_7(\mu_b)$ and $C'_7(\mu_b)$ are calculated at leading order (LO) in the Minimal Supersymmetric Standard Model (MSSM) with general quark flavor violation (QFV). For the first time we perform a systematic MSSM parameter scan for the WCs $C_7(\mu_b)$ and $C'_7(\mu_b)$ respecting all the relevant constraints, i.e. the theoretical constraints from vacuum stability conditions and the experimental constraints, such as those from $K$- and $B$-meson data and electroweak precision data, as well as limits on SUSY particle masses and the 125 GeV Higgs boson data from LHC experiments. From the parameter scan, we find the following: (1) the MSSM contribution to Re($C_7(\mu_b)$) can be as large as $\sim \pm 0.05$, which could correspond to about a 3$\sigma$ significance of a New Physics (NP) signal in the future LHCb-Upgrade and Belle II experiments; (2) the MSSM contribution to Re($C'_7(\mu_b)$) can be as large as $\sim -0.08$, which could correspond to about a 4$\sigma$ significance of an NP signal in these experiments; (3) these large MSSM contributions to the WCs are mainly due to (i) large scharm-stop mixing and large scharm/stop involved trilinear couplings $T_{U23}$, $T_{U32}$ and $T_{U33}$, (ii) large sstrange-sbottom mixing and large sstrange-sbottom involved trilinear couplings $T_{D23}$, $T_{D32}$ and $T_{D33}$, and (iii) a large bottom Yukawa coupling $Y_b$ for large $\tan\beta$ and a large top Yukawa coupling $Y_t$. Should such large NP contributions to the WCs be observed in future experiments at Belle II and LHCb-Upgrade, this could be the imprint of QFV SUSY (the MSSM with general QFV) and would encourage further studies of the WCs $C'_7(\mu_b)$ and $C_7^{MSSM}(\mu_b)$ at higher order (NLO/NNLO) in this model.
References: [1] Phys. Rev. D 104 (2021) 075025 [arXiv:2106.15228 [hep-ph]]; [2] "Imprint of SUSY in radiative B-meson decays", talk presented at SUSY2021, Beijing, Aug 2021.
The LHCb experiment collected the world's largest sample of charmed hadrons during LHC Run 1 and Run 2. With this data set, LHCb is currently providing the world's most precise measurements of properties of charmed hadrons, as well as discovering many previously unobserved states. This talk reports on measurements of excited charm(-strange) mesons in amplitude analyses of beauty mesons decaying to open charm final states and latest results on promptly produced charmed baryons.
Conventional doubly heavy hadrons, including quarkonia, are good probes of the non-perturbative regime of QCD, and are thus important for improving our understanding of the strong interaction. The LHCb experiment is dedicated to heavy-flavour physics. The large heavy-hadron dataset and the excellent performance of the detector make it an ideal laboratory for studies of doubly heavy hadrons. This talk presents a summary of recent results from LHCb.
The large data sample accumulated by the Belle experiment at the KEKB asymmetric-energy $e^{+}e^{-}$ collider provides important opportunities to study charmonium(-like) and bottomonium(-like) states. We report new results on $X(3872)$ decays to the $J/\psi\omega$ and $\pi^+\pi^-\pi^0$ final states, as well as other studies on charmonium. Belle data taken with an energy scan around the $\Upsilon(5S)$ peak are useful for studying bottomonia: we report on the study of the $\Upsilon(5S) \to \Upsilon(1S) K^+ K^-$ channel. Other results from this data sample, including the study of the $B^*$ mass and a new measurement of the $B_s \to D X$ branching fraction, are also covered in this talk.
The measurements of spectroscopy and decays of b-hadrons can provide invaluable experimental input to improve our knowledge of QCD. The LHCb experiment uniquely covers the b-hadron enriched kinematic region and has excellent performance in reconstructing and identifying b-hadron decays. It has been making leading efforts in such studies. In this talk, the latest results on b-hadron spectroscopy and decay studies will be presented.
LHCb has recorded the largest sample of charm hadrons during Run 1 and Run 2 of the LHC (2011-2018). With these data, amplitude analyses of samples of unprecedented size are possible. Recent results on charm-hadron decays are shown.
The idea of diquarks as effective degrees of freedom in QCD has been a successful concept in explaining observed hadron spectra. Recently they have also played an important role in studying doubly heavy tetraquarks in phenomenology and on the lattice. The first member of this family of hadrons is the $T_{cc}^+$, newly discovered at LHCb.
Despite their importance, the colored nature of diquarks has been an obstacle in lattice studies. We address this issue by studying diquarks on the lattice in the background of a heavy static quark, i.e. in a gauge-invariant formalism with quark masses down to almost physical pion masses in full QCD. We determine mass differences between diquark channels as well as diquark-quark mass differences. In particular we consider diquarks with "good" scalar, $\bar{3}_F$, $\bar{3}_c$, $J^P=0^+$, quantum numbers. Attractive quark-quark spatial correlations are found only in this channel and we observe that the "good" diquark shape is spherical. From the spatial correlations in the "good" diquark channel we extract a diquark size of $\sim 0.6~\rm{fm}$.
Our results provide quantitative support for modelling the low-lying baryon spectrum using good light diquark effective degrees of freedom.
Primordial Gravitational Waves (GWs) are a unique tool to explore the physics and microphysics of the early Universe. After the GW detections by the LIGO/Virgo collaboration, the next target of modern cosmology is the detection of the stochastic background of GWs. Even if the main probe of primordial GWs is the Cosmic Microwave Background, we will see in this talk how we can extract information about primordial GWs at smaller scales. In particular, the space-based LISA interferometer, in addition to the detection and characterization of GWs of astrophysical origin, will give compelling information about the cosmological background of GWs. I will summarise part of the activity developed within the LISA Cosmology Working Group and, in particular, I will discuss the ability of LISA to test well-motivated primordial sources of GWs and its sensitivity to peculiar features of the SGWB, such as anisotropy and chirality.
In $\sim 2034$ the Laser Interferometer Space Antenna (LISA) will detect the coalescence of massive black hole binaries (MBHBs) from $10^5$ to $10^7$ $\rm M_{\odot}$ up to $z\sim 10$. The gravitational wave (GW) signal is expected to be accompanied by a powerful electromagnetic (EM) counterpart, from radio to X-ray, generated by the gas accreting onto the binary.
If LISA locates the MBHB merger within an error box $<10 \, \rm deg^2$, EM telescopes can be pointed in the same portion of the sky to detect the emission from the last stages of the MBHB orbits or the very onset of the nuclear activity, paving the way to test the nature of gas in a rapidly changing space-time. Moreover, an EM counterpart will allow independent measurements of the source redshift which, combined with the luminosity distance estimate from the GW signal, will lead to exquisite tests on the expansion of the Universe as well as on the velocity propagation of GWs.
In this talk, I present some recent results on the standard sirens rates detectable jointly by LISA and EM facilities. We combine state-of-the-art models for the galaxy formation and evolution, realistic modeling of the EM counterpart and Bayesian tools to perform the parameter estimation of the GW event as well as of the cosmological parameters.
We explore three different astrophysical scenarios employing different seed formation (light or heavy seeds) and delay-time models, in order to have realistic predictions for the expected number of events. We estimate the detectability of the sources in terms of their signal-to-noise ratio in LISA and perform parameter estimation, focusing especially on the sky localization of the source. Exploiting the additional information from the astrophysical models, such as the amount of accreted gas and the BH spins, we model the expected EM counterpart to the GW signal in soft X-ray, optical and radio bands.
In our standard scenario, we predict $\sim 14$ standard sirens with detectable counterparts over 4 yr of LISA mission time, and $\sim 6$ ($\sim 20$) in the pessimistic (optimistic) one.
We also explore the impact of absorption from the surrounding gas on both the optical and X-ray emission: assuming typical hydrogen and metal column density distributions, we estimate only $\sim 3$ standard sirens in 4 yr in the standard scenario.
Finally, we combine the redshift and luminosity distance information to estimate cosmological parameters: we find that $\rm H_0$ can be constrained to few-percent precision thanks to the few sources whose redshift is measured with $\Delta z< 1\%$.
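The $H_0$ constraint rests on the standard flat-$\Lambda$CDM luminosity distance-redshift relation, with $d_L$ inferred from the GW amplitude and $z$ from the EM counterpart:

$$d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^z \frac{\mathrm{d}z'}{\sqrt{\Omega_m (1+z')^3 + \Omega_\Lambda}}$$

Each joint detection thus provides an independent point on the distance-redshift diagram, with no reliance on the cosmic distance ladder.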
Observational constraints and prospects for detection of features, i.e. physically motivated oscillations in the primordial power spectrum, have so far concentrated on the CMB and Large Scale Structure surveys. Probing these features could, for instance, establish the existence of heavy particles beyond the reach of terrestrial experiments, and even test the inflationary paradigm or point to alternatives to it.
In this talk, I will discuss the ongoing effort to assess the detection prospects of such features with LISA, the upcoming space-based gravitational wave observatory.
Primordial black holes constitute an attractive dark matter candidate. I will discuss several new observational signatures for primordial black holes spanning orders of magnitude in mass, connecting them to gravitational wave and multi-messenger astronomy as well as long-standing astrophysical puzzles such as the origin of heavy elements.
Primordial Black Holes (PBHs) are hypothetical black holes formed in the very early universe and are potential dark matter candidates. Focusing on the PBH mass range $[5\cdot10^{14}-1\cdot10^{17}]$ g, we point out that their evaporation can produce detectable signals in existing experiments. First of all, we study neutrinos emitted by PBH evaporation. They can interact through coherent elastic neutrino-nucleus scattering, producing an observable signal in multi-ton DM direct detection experiments. We show that, using future experiments with higher exposure, it will be possible to constrain the fraction of dark matter composed of PBHs. Furthermore, we study the emission of a light dark matter candidate endowed with large kinetic energy. Focusing on the XENON1T experiment, we show that these relativistic dark matter particles could give rise to signals orders of magnitude larger than the present upper bounds. The non-observation of such a signal can be used to constrain the combined parameter space of primordial black holes and sub-GeV dark matter.
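For orientation, the evaporation signals in this mass window are controlled by the Hawking temperature, which scales inversely with the PBH mass (a standard result, quoted here for context):

$$T_H = \frac{\hbar c^3}{8\pi G M k_B} \simeq 1.06 \left(\frac{10^{16}\,\mathrm{g}}{M}\right)\,\mathrm{MeV}$$

so the $[5\cdot10^{14}-1\cdot10^{17}]$ g range corresponds to temperatures of roughly 0.1-20 MeV, which is why MeV-scale neutrinos and light dark matter particles are the natural emission products considered here.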
The direct detection of gravitational waves opened an unprecedented channel to probe fundamental physics. Several alternative theories of gravitation have been proposed with various motivations, including accounting for the accelerated expansion of the Universe and the unification of fundamental forces. The study of gravitational-wave propagation makes it possible to put several predictions of those theories to the test, with the advantage that the deviations are source-independent and tractable over the complete waveform signal. This talk presents an overview of the recent searches for anomalous propagation effects using the events detected by the LIGO-Virgo-KAGRA collaboration during the first three observing runs. Several proposals, such as massive gravity and unified theories, predict a frequency-dependent dispersion of gravitational waves breaking local CPT and/or Lorentz symmetry. Constraints on the dispersion coefficients are obtained from the analysis of the gravitational waveform signals using an effective-field-theory framework. Using inferred wave and source properties from candidate multimessenger events, independent constraints are obtained on the speed of gravity, the presence of large extra dimensions, and parameterisations of scalar-tensor theories of gravitation.
Axion-like particles (ALPs) are at the forefront of physics research, especially at the intensity frontier, which deals with light, weakly coupled particles. A plethora of experiments searches for signals of the ALP in many different final states using innovative search strategies. We present a different perspective on ALP searches, concentrating on the modifications that such a particle causes to known Standard Model (SM) results. The presence of a low-lying ALP modifies the SM in non-trivial ways. We systematically derive the leading-order chiral Lagrangian in the presence of an ALP (A$\chi$PT). Then, using the derived A$\chi$PT, we systematically discuss three distinct modifications to SM physics, all arising already at tree level: i) modifications to the meson mass spectrum, ii) modifications to hadronic form factors, leading to modified partial decay rate distributions of the mesons, and iii) modifications to the sum rules constructed out of meson decay amplitudes. As a proof-of-concept example of our program, we analyse semi-leptonic kaon decay data collected by the NA48/2 collaboration to derive bounds on the ALP parameter space.
Many extensions of the Standard Model include the possibility of light new particles, such as axion candidates. These scenarios can be probed using the large data sets collected by $B$-factories, complementing measurements performed at the LHC. We report on a search for an axion-like particle (ALP), $a$, produced in the flavor-changing neutral-current decay $B\to K a$, with $a\to \gamma\gamma$, which is expected to be competitive with the corresponding Standard-Model electroweak processes. This search, performed using a dataset of about 470 million $B\bar{B}$ pairs collected by the $BABAR$ experiment at the PEP-II $e^+e^-$ collider, is sensitive to ALP masses in the range 0-4.78 GeV.
Coherent CAPTAIN-Mills (CCM) is a 10-ton liquid argon scintillation detector located at Los Alamos National Lab. The prototype detector, CCM120, was fabricated in 2017 and utilized 120 PMTs; the upgraded detector, CCM200, with 200 PMTs, collected data in the 2021 run cycle. The physics program of CCM comprises searches for new particles in the weak sector, including dark photons, axion-like particles (ALPs), and neutral heavy leptons in the keV to MeV mass range, extending the coverage of the open parameter space for these searches by an order of magnitude.
It is well known that the Sun is an efficient and intense source of axions. We aim to study such axions and analyze their properties using the terrestrial neutrino oscillation experiment JUNO. We consider Compton conversion, axion decay to photons, and inverse Primakoff conversion processes in order to analyze the axion detection signatures. This talk will present a detailed analysis constraining the axion-electron ($ g_{Ae} $), axion-photon ($ g_{A\gamma} $), and axion-nucleon ($ g_{3AN} $) couplings using JUNO data for axion masses $ m_a < 1 $ MeV. For comparison, we will also show bounds arising from other laboratory-based experiments.
The proposed LUXE experiment (Laser Und XFEL Experiment) at DESY, Hamburg, using the electron beam from the European XFEL, aims to probe QED in the non-perturbative regime created in collisions between high-intensity laser pulses and high-energy electron or photon beams. This setup also provides a unique opportunity to probe physics beyond the Standard Model. In this talk we show that, by leveraging the large photon flux generated at LUXE, one can probe axion-like particles (ALPs) up to a mass of 350 MeV and a photon coupling of $3 \times 10^{-6}$ GeV$^{-1}$. This reach is comparable to the background-free projection from NA62. In addition, we will discuss other probes of new physics, such as the ALP-electron coupling.
Although the LHC experiments have searched for and excluded many proposed new particles up to masses close to 1 TeV, there are many scenarios that are difficult to address at a hadron collider. This talk will review a number of these scenarios and present the expectations for searches at an electron-positron collider such as the International Linear Collider. The cases discussed include SUSY in strongly or moderately compressed models, heavy neutrinos, heavy vector bosons coupling to the s-channel in e+e- annihilation, and new scalars.
We study the possibility of measuring neutrino Yukawa couplings in the Next-to-Minimal Supersymmetric Standard Model with right-handed neutrinos (NMSSMr) when the lightest right-sneutrino is the Dark Matter (DM) candidate, by exploiting a "dijet + dilepton + Missing Transverse Energy" (MET) signature. We show that, contrary to the minimal realisation of Supersymmetry (SUSY), the MSSM, wherein the DM candidate is typically a much heavier (fermionic) neutralino state, this extended model of SUSY offers a much lighter (bosonic) state as DM, which can then be produced at the next generation of e+e- colliders with energies up to 500 GeV or so. The ensuing signal, emerging from chargino pair production and subsequent decay, is extremely pure, so it also affords one the possibility of extracting the Yukawa parameters of the (s)neutrino sector. Altogether, our results serve the purpose of motivating searches for light DM signals at such machines, where the DM candidate can have a mass around the Electro-Weak (EW) scale.
Machine Learning algorithms are playing a fundamental role in solving High Energy Physics tasks. In particular, the classification of hadronic jets at the Large Hadron Collider is suited for such types of algorithms, and despite the great effort that has been put in place to tackle such a classification task, there is room for improvement. In this context, Quantum Machine Learning is a new methodology that takes advantage of the intrinsic properties of quantum computation (e.g. entanglement between qubits) to possibly improve the performance of a classification task. In this contribution, a new study of Quantum Machine Learning applied to jet identification is presented. Namely, a Variational Quantum Classifier is trained and evaluated on fully simulated data of the LHCb experiment, in order to identify jets containing a hadron formed by a $b$ or $\bar{b}$ quark at the moment of production. The jet identification performance of the quantum classifier is compared with a Deep Neural Network using the same input features.
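To make the idea concrete, here is a minimal, self-contained toy of a variational quantum classifier simulated with plain numpy on one qubit and a single invented feature; it only illustrates the encode-rotate-measure-train loop, not the LHCb model or its input features:

```python
# Toy VQC: feature encoded as a rotation, trainable layer, P(|1>) as score.
import numpy as np

def ry(a):  # rotation about Y
    return np.array([[np.cos(a/2), -np.sin(a/2)],
                     [np.sin(a/2),  np.cos(a/2)]])

def rz(a):  # rotation about Z
    return np.array([[np.exp(-1j*a/2), 0], [0, np.exp(1j*a/2)]])

def predict(x, w):
    """Encode feature x as RY(x)|0>, apply a trainable RZ-RY layer,
    and return the probability of measuring |1> as the class score."""
    state = ry(w[1]) @ rz(w[0]) @ ry(x) @ np.array([1.0, 0.0])
    return np.abs(state[1])**2

# Invented 'jet feature': label 1 if x > pi/2
rng = np.random.default_rng(0)
xs = rng.uniform(0, np.pi, 200)
ys = (xs > np.pi/2).astype(float)

w, eps, lr = rng.normal(size=2), 1e-4, 0.5
for _ in range(300):  # finite-difference gradient descent on the MSE loss
    loss = lambda w_: np.mean((np.array([predict(x, w_) for x in xs]) - ys)**2)
    grad = np.array([(loss(w + eps*np.eye(2)[i]) - loss(w)) / eps
                     for i in range(2)])
    w -= lr * grad

acc = np.mean((np.array([predict(x, w) for x in xs]) > 0.5) == (ys > 0.5))
print(f"toy VQC accuracy: {acc:.2f}")
```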
Clustering is one of the most frequent problems in many domains and, in particular, in particle physics, where jet reconstruction is central to experimental analyses. Jet clustering at CERN's Large Hadron Collider is computationally expensive, and the difficulty of this task is expected to increase with the upcoming High-Luminosity LHC (HL-LHC).
In this work, we study the case in which quantum computing algorithms might improve jet clustering by considering two novel quantum algorithms that may speed up classical jet clustering algorithms. The first is a quantum subroutine to compute a Minkowski-based distance between two data points, whereas the second consists of a quantum circuit to find the maximum of an unsorted list. The latter algorithm could be of value beyond particle physics, for instance in statistics. When one or both of these algorithms are implemented into the classical versions of well-known clustering algorithms (K-means, Affinity Propagation and $k_T$-jet), we obtain efficiencies comparable to those of their classical counterparts. Moreover, we achieve an exponential speed-up in data dimensionality when the distance algorithm is applied, and an exponential speed-up in data length when the maximum is selected through the quantum routine.
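A classical stand-in for the proposed hybrid scheme can be sketched as follows: an ordinary K-means loop in which the distance call marks where the quantum distance subroutine would enter, and the argmin marks where the quantum maximum/minimum search would enter (all names and data here are illustrative, not the authors' implementation):

```python
import numpy as np

def distance(p, q):
    # Placeholder for the quantum distance subroutine.
    return np.linalg.norm(p - q)

def kmeans(points, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centre (argmin <-> quantum max/min search)
        labels = np.array([np.argmin([distance(p, c) for c in centers])
                           for p in points])
        # Update step: recompute centres
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

pts = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(50, 2))
                 for m in (0.0, 2.0, 4.0)])
labels, centers = kmeans(pts, k=3)
print(centers.round(2))
```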
We present Qibo, a new open-source framework for the fast evaluation of quantum circuits and adiabatic evolution, which takes full advantage of hardware accelerators, quantum hardware calibration and control, and a large codebase of algorithms for applications in HEP and beyond. The growing interest in quantum computing and the recent developments in quantum hardware devices motivate the development of new advanced computational tools focused on performance and usage simplicity. In this work we introduce a new quantum simulation and control framework that enables developers to delegate all complicated aspects of hardware or platform implementation to the library, so they can focus on the problem and quantum algorithms at hand. As examples of HEP applications, we show how to use Qibo for the determination of parton distribution functions (PDFs) using DIS, fixed-target DY and LHC data, and for the construction of generative models applied to Monte Carlo simulation. We conclude by providing an overview of the variational quantum circuit models included in Qibo.
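A minimal usage sketch (API as in early Qibo releases; details may differ between versions) builds and samples a Bell-state circuit:

```python
from qibo import models, gates

circuit = models.Circuit(2)          # two-qubit circuit
circuit.add(gates.H(0))              # Hadamard on qubit 0
circuit.add(gates.CNOT(0, 1))        # entangle the qubits
circuit.add(gates.M(0, 1))           # measure both qubits

result = circuit(nshots=1000)        # simulate on the active backend
print(result.frequencies())          # expect roughly 50/50 '00' and '11'
```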
Machine-Learned Likelihood (MLL) is a method that combines the power of current machine-learning techniques for high-dimensional data with the likelihood-based inference tests used in traditional analyses. MLL allows the experimental sensitivity to be estimated, in terms of the statistical signal significance, through a single parameter of interest: the signal strength. Here we extend the MLL method to include exclusion hypothesis tests and apply it to case studies of interest in the search for new physics at the LHC, comparing the MLL exclusion limits with those of experimental analyses by ATLAS and CMS and with previous phenomenological studies.
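The flavour of such likelihood-based sensitivity estimates can be caricatured with a counting experiment: the snippet below applies the standard Asimov formulae for discovery significance and expected exclusion of the signal strength mu (hypothetical yields; this is not the full unbinned MLL machinery):

```python
import numpy as np

def z_disc(s, b):
    """Median discovery significance for a counting experiment (Asimov)."""
    return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

def z_excl(mu, s, b):
    """Expected exclusion significance of signal strength mu,
    evaluated on the background-only Asimov dataset (n = b)."""
    return np.sqrt(2.0 * (mu * s - b * np.log(1.0 + mu * s / b)))

# Hypothetical yields after a cut on the ML classifier score
s, b = 12.0, 40.0
print(f"discovery significance: {z_disc(s, b):.2f} sigma")

# Scan mu until the expected significance crosses 1.64 (95% CL)
mus = np.linspace(0.01, 3.0, 300)
zs = np.array([z_excl(mu, s, b) for mu in mus])
print(f"expected 95% CL limit: mu < {mus[np.argmax(zs >= 1.64)]:.2f}")
```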
Multivariate techniques and machine-learning models have found numerous applications in High Energy Physics (HEP) research over many years. In recent times, AI models based on deep neural networks have become increasingly popular for many of these applications. However, neural networks are regarded as black boxes: because of their high degree of complexity, it is often difficult to quantitatively explain the output of a neural network by establishing a tractable input-output relationship and tracing information propagation through the deep network layers. As explainable-AI (xAI) methods become more popular, we explore the interpretability of AI models by examining an Interaction Network (IN) model designed to identify boosted $H \rightarrow b\bar{b}$ jets amid QCD background. We explore different quantitative methods to demonstrate how the classifier network makes its decision based on the inputs and how this information can be harnessed to reoptimize the model, making it simpler yet equally effective. We additionally illustrate the activity of hidden layers within the IN model as Neural Activation Pattern (NAP) diagrams. Our experiments suggest that NAP diagrams reveal how information is conveyed across the hidden layers of a deep model. These insights can be useful for effective model reoptimization and hyperparameter tuning.
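One ingredient of such studies, capturing hidden-layer activations, can be sketched with PyTorch forward hooks; the tiny MLP below is only a stand-in for the Interaction Network, and all shapes are invented:

```python
import torch
import torch.nn as nn

# Toy stand-in for the IN classifier: a small fully connected network.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

activations = {}

def capture(name):
    # Forward hook: stash this layer's output for later inspection.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for i, layer in enumerate(model):
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(capture(f"relu_{i}"))

x = torch.randn(128, 16)   # a batch of invented jet feature vectors
_ = model(x)
for name, act in activations.items():
    # Fraction of active units: the raw material of a NAP-style diagram.
    print(f"{name}: {(act > 0).float().mean().item():.2f} of units active")
```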
The ATLAS experiment extensively uses multi-process (MP) parallelism to maximize data throughput, especially in I/O-intensive workflows such as the production of Derived Analysis Object Data (DAOD). In this mode, worker processes are spawned at the end of job initialization, thereby sharing the memory allocated thus far. Each worker then loops over a unique set of events and produces its own output file, which in the original implementation needed to be merged in a subsequent step executed serially. In Run 2, SharedWriter was introduced to perform this task on the fly, with an additional process merging data from the workers while the job was running, eliminating the need for the extra merging step. Although this approach has been very successful, there was room for improvement, most notably in the event-throughput scaling as a function of the number of workers, which was limited by the fact that the Run 2 version does all data compression within the SharedWriter process. For Run 3, a new version of SharedWriter has been written to address the limitations of the original implementation by moving data compression to the worker processes. This development also paves the way for using it in a hybrid mode of multi-thread (MT) and MP workflows to maximize I/O efficiency. In this talk, we will discuss the latest developments in shared I/O in the ATLAS experiment.
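The essence of the Run 3 change can be illustrated schematically (this is a Python sketch, not ATLAS code): each worker compresses its own records before handing them to a single writer process, which merges them on the fly:

```python
import multiprocessing as mp
import zlib

def worker(worker_id, events, queue):
    for ev in events:
        payload = f"worker={worker_id} event={ev}".encode()
        queue.put(zlib.compress(payload))   # compression done in the worker
    queue.put(None)                         # sentinel: this worker is done

def shared_writer(queue, n_workers, path):
    finished = 0
    with open(path, "wb") as f:
        while finished < n_workers:
            item = queue.get()
            if item is None:
                finished += 1
            else:                           # merge on the fly: just append
                f.write(len(item).to_bytes(4, "big") + item)

if __name__ == "__main__":
    q, n = mp.Queue(), 4
    writer = mp.Process(target=shared_writer, args=(q, n, "merged.out"))
    writer.start()
    workers = [mp.Process(target=worker, args=(i, range(i*10, (i+1)*10), q))
               for i in range(n)]
    for w in workers: w.start()
    for w in workers: w.join()
    writer.join()
```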
In the context of the LHCb upgrade for LHC Run 3, the experiment's software build and release infrastructure is being improved. In particular, we present the LHCb nightly-build pipelines, which have been modernized to provide a faster turnaround of the produced builds. The revamped system organizes the tasks of checking out the sources, building, and testing the projects of the LHCb software stacks on multiple architectures in a directed acyclic graph of dependencies, with the artifacts of each task cached and reused whenever possible, and distributes the jobs to the workers in the build farm. This work describes the implementation of the new system based on tools such as Python, Luigi, Celery, CouchDB, RabbitMQ, OpenSearch, and S3.
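The checkout-build-test dependency pattern can be sketched with Luigi, one of the tools listed above; the task names, artifact paths, and project name below are invented for illustration and are not taken from the LHCb system:

```python
import luigi

class Checkout(luigi.Task):
    project = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(f"artifacts/{self.project}.src")

    def run(self):
        with self.output().open("w") as f:
            f.write(f"sources of {self.project}\n")

class Build(luigi.Task):
    project = luigi.Parameter()

    def requires(self):                  # edge in the dependency DAG
        return Checkout(project=self.project)

    def output(self):                    # cached artifact: reused if present
        return luigi.LocalTarget(f"artifacts/{self.project}.build")

    def run(self):
        with self.input().open() as src, self.output().open("w") as out:
            out.write(src.read() + "built\n")

class Test(luigi.Task):
    project = luigi.Parameter()

    def requires(self):
        return Build(project=self.project)

    def output(self):
        return luigi.LocalTarget(f"artifacts/{self.project}.tested")

    def run(self):
        with self.output().open("w") as f:
            f.write("tests passed\n")

if __name__ == "__main__":
    luigi.build([Test(project="Gauss")], local_scheduler=True)
```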
DEAP-3600 is a WIMP dark matter direct-detection experiment located 2 km underground at SNOLAB near Sudbury, Ontario in Canada, which uses liquid argon as the target material. The detector consists of 3.3 tonnes of liquid argon in a large acrylic cryostat instrumented with 255 photomultiplier tubes. This experiment has set the most stringent limits in argon for WIMP-nucleon spin-independent cross-sections. A study was also performed using a non-relativistic effective field theory to consider other dark matter-nucleon interactions. The research includes some specific interactions and isospin-violating scenarios, where world-leading limits were achieved for some model parameters. This study also analyzed the modification of the exclusion limits due to potential substructures in the local dark matter halo, motivated by the observations of stellar distributions from astronomical surveys. The physics program and the latest results of DEAP-3600 will be presented in this talk.
DarkSide has operated since mid-2015 a 50-kg-active-mass dual-phase Liquid Argon Time Projection Chamber (TPC), filled with low-radioactivity argon from an underground source, and has produced world-class results for both the low-mass ($M_{WIMP}< 20\ GeV/c^2$) and high-mass ($M_{WIMP} > 100\ GeV/c^2$) direct-detection searches for dark matter.
The next stage of the DarkSide program will be a new-generation experiment involving a global collaboration from all the current argon-based experiments. DarkSide-20k is designed as a 20-tonne fiducial-mass dual-phase Liquid Argon TPC with SiPM-based cryogenic photosensors and is expected to be free of any instrumental background for an exposure of >100 tonne × year. Like its predecessor, DarkSide-20k will be housed at the INFN Gran Sasso underground laboratory (LNGS), and it is expected to attain a WIMP-nucleon cross-section exclusion sensitivity of $7.4\times 10^{-48}\, cm^2$ for a WIMP mass of $1\, TeV/c^2$ in a 200 t yr run. DarkSide-20k will be installed inside a membrane cryostat containing more than 700 t of liquid argon and will be surrounded by an active neutron veto based on a Gd-loaded acrylic shell. The talk will give the latest updates on the ongoing R&D and prototype tests validating the initial design.
XENONnT is a dark matter direct detection experiment located at the INFN Laboratori Nazionali del Gran Sasso. The core detector is a dual-phase time projection chamber (TPC) filled with 5.9 t of liquid xenon and instrumented with a total of 494 photomultiplier tubes (PMTs).
The TPC is installed in the center of a stainless-steel tank filled with 700 t of water, which provides effective passive shielding and is also instrumented with 84 PMTs and operated as an active water Cherenkov muon veto. A novel sub-detector, called the Neutron Veto, is contained within the Muon Veto and surrounds the TPC in order to suppress the neutron background. The highly reflective Neutron Veto volume is optically separated from the Muon Veto and instrumented with 120 high-QE, low-radioactivity PMTs. The water will eventually be doped with gadolinium to maximize the neutron detection efficiency.
In 2020 XENONnT replaced the successful XENON1T experiment, which was the world's most sensitive detector for direct dark matter searches.
After a few months of commissioning, XENONnT started its science data acquisition in mid-2021.
In this talk we will review the most relevant achievements of XENON1T and describe the concept, performance, and scientific program of the XENONnT experiment.
LUX-ZEPLIN (LZ) is a direct-detection dark matter experiment located at the Sanford Underground Research Facility in Lead, South Dakota. The experiment consists of three nested detectors: a dual-phase xenon TPC, an actively instrumented liquid xenon skin, and an outer-detector neutron veto formed by 10 acrylic tanks of gadolinium-loaded liquid scintillator. The active region of the xenon TPC contains 7 tonnes of liquid xenon, with a 5.6-tonne fiducial volume, allowing us to reach a WIMP-nucleon spin-independent cross-section sensitivity of 1.4 x 10^-48 cm^2 for a 40 GeV/c^2 mass in 1000 live days. This talk will provide an overview of the LZ experiment and report on its status.
The PandaX experiment uses xenon as a target to detect weak and rare physics signals, including dark matter and neutrinos. We are running a new-generation detector with 4 tonnes of xenon in the sensitive volume, PandaX-4T. The commissioning-run data have pushed the constraints on the WIMP-nucleon scattering cross-section to a new level. This talk will give an overview of the PandaX-4T experiment and its data taking. New results on several other interesting dark matter models will also be reported.
This talk will review the main scientific goal of the DARWIN experiment: a 40-tonne dual-phase xenon TPC for WIMP dark matter searches. Dark matter experiments with target masses beyond the tonne scale are already a reality: the XENONnT detector is currently taking its first science-run data. In case of a positive dark matter detection in this detector, a larger instrument will be required to study the properties of the dark matter particle. If nothing is found, it will be needed as well in order to fully explore the predicted parameter space for WIMP dark matter, reaching spin-independent WIMP-nucleon scattering cross-sections of a few 10$^{-49}$ cm$^{2}$, where coherent neutrino interactions with atomic nuclei become the dominant and irreducible background. This talk will also discuss the other important science channels that can be explored by DARWIN, including solar neutrinos, axions and axion-like particles, supernova neutrinos, and the neutrinoless double-beta decay of $^{136}$Xe.
The nEXO experiment is a proposed next-generation liquid xenon experiment to search for the neutrinoless double beta decay ($0\nu\beta\beta$) of $^{136}$Xe. The experiment will use a 5-tonne monolithic liquid xenon time projection chamber enriched to 90% in $^{136}$Xe. Ionization electrons and scintillation photons from energy deposits in the detector will be recorded by a segmented anode and a large-area SiPM array. This talk will present recent progress in the detector design, improved modelling of the signal readout, and the development of a deep-neural-network-based data analysis architecture to improve signal/background separation. These developments result in a 90% CL $0\nu\beta\beta$ half-life sensitivity of 1.35$\times$10$^{28}$ yr in 10 years of data taking.
The DUNE experiment is a future long-baseline neutrino oscillation experiment aiming to measure neutrino CP violation and establish the neutrino mass hierarchy, as well as to pursue a rich physics programme ranging from supernovae and low-energy physics to beyond-Standard-Model searches.
The baseline technology for the first far detector is a proven single-phase horizontal-drift liquid argon TPC based on standard wire-chamber technology.
For the second far detector, a new technology, the so-called "vertical drift" TPC, is currently being developed: it aims at combining the strengths of the two technologies tested in the ProtoDUNE cryostats at the CERN Neutrino Platform into a single design, a vertical-drift single-phase liquid argon TPC using a novel perforated-PCB anode. This design maintains excellent tracking and calorimetry performance while significantly reducing the complexity of the TPC construction.
This talk will introduce the concept of the vertical-drift TPC, present first results from small-scale prototypes and a first full-scale anode module, and outline the plans for future prototypes and the next steps towards the full second DUNE far detector.
The ability to identify and discriminate electron and nuclear recoil events at the experimental low-energy threshold is the main limitation of modern dark matter direct-detection experiments. In this context, gaseous Time Projection Chambers (TPCs) with optical readout are a promising and innovative technique. Thanks to the high granularity and sensitivity of the latest generation of sCMOS light sensors, this approach is characterized by very good energy and 3D position-reconstruction capabilities. The Cygno experiment is developing a gaseous TPC operated with a ${\rm He}$:${\rm CF_4}$ gas mixture at atmospheric pressure, equipped with a Gas Electron Multiplier (GEM) amplification stage that produces visible light collected by a scientific CMOS camera and by a set of fast photosensors.
In this contribution we will present the 50 L prototype, called Long Imaging ModulE (LIME), foreseen to conclude the R&D phase of the Cygno project.
LIME has been recently installed underground at the Laboratori Nazionali del Gran Sasso (LNGS), with the aim of studying the performance of the Cygno experimental approach in a low background environment and developing a refined trigger and DAQ system for the future upgrades.
This is a crucial step towards the development of a larger $\mathcal{O}(1 {\rm m^3})$ demonstrator, which will be made up of several LIME modules.
FLArE is a Liquid Argon Time Projection Chamber (LArTPC) experiment designed to detect very-high-energy neutrinos and search for dark matter at the Large Hadron Collider at CERN. It will be located in the proposed Forward Physics Facility, 620 m from the ATLAS interaction point in the far-forward direction, and will collect data during the High-Luminosity LHC era. With a fiducial mass of 10 tonnes, FLArE will detect millions of neutrinos at the highest energies ever recorded from a human-made source and will also search for dark matter particles with world-leading sensitivity in the MeV-to-GeV mass range. The LArTPC technology used in FLArE is well studied for neutrino and dark matter experiments. It offers excellent spatial resolution and allows excellent identification of individual particles. In this talk, I will overview the physics reach, preliminary design, and status of the FLArE project.
The history of neutrino physics has been profoundly marked by the use of transparent liquid scintillator (LS) detectors. Their application in reactor and solar neutrino physics led to the discovery and study of many fundamental properties of the elusive neutrinos. Despite all these successes and many decades of R&D, particle identification (PID) remains a weak point of this technology. In this talk I will introduce a revolutionary detector concept called LiquidO that abandons the paradigm of transparent LS and exploits an opaque LS, i.e. one with a short light-scattering length but a moderately long absorption length, to confine the light near its creation point. In this way, the topological information of the particle interaction pattern, normally lost in transparent LS detectors, is revealed. This allows for efficient particle identification and event-by-event topological discrimination power down to MeV-scale positron, electron, and gamma events. LiquidO technology uses a dense grid of wavelength-shifting fibres, read out by SiPMs at the ends, to extract the light from the opaque LS. Several advances are made possible by LiquidO technology: strong background rejection, thanks to the LiquidO PID capability, and the possibility of loading dopants at high concentrations, since transparency is no longer required. This opens the possibility of a large number of new physics measurements in several areas of neutrino, nuclear and accelerator physics, as well as medical applications, many of which are under active exploration. In my talk I will introduce the LiquidO principle and report the results, recently published in Communications Physics, of a proof-of-principle test. I will also present some preliminary results on a small-scale prototype called "Mini-LiquidO", whose aim is the demonstration and precise performance qualification of the LiquidO technique for physics applications, as well as the future R&D program.
The RES-NOVA project will hunt neutrinos from core-collapse supernovae (SNe) via coherent elastic neutrino-nucleus scattering (CEνNS) using an array of archaeological-lead (Pb) based cryogenic detectors. The high CEνNS cross-section on Pb and the ultra-high radiopurity of archaeological Pb enable the operation of a high-statistics experiment equally sensitive to all neutrino flavors with reduced detector dimensions in comparison to existing neutrino observatories. The first phase of the RES-NOVA project is planned to operate a detector with a volume of (60 cm)$^3$. It will be sensitive to SN bursts from the entire Milky Way with >3σ sensitivity while running PbWO$_4$ detectors with a 1 keV energy threshold. RES-NOVA will discriminate core-collapse SNe from black-hole-forming collapses without ambiguity, even with such a small-volume detector. The average neutrino energy of all flavors, the SN neutrino light curve, and the total energy emitted in neutrinos can potentially be constrained with a precision of a few %. We will present the performance of the first prototype detectors and sensitivity projections for the full detector. We will demonstrate that RES-NOVA has the potential to lay the foundations for a new generation of neutrino observatories, while relying on a very simple and modular experimental setup.
Very-high-energy physics (VHEP) extends the energy frontier beyond the reach of accelerator-based HEP by investigating interactions caused by fundamental particles in space, in order to study the structure and fundamental interactions of elementary particles. Probing for VHE elementary particles will also enable the discovery of VHE celestial objects and the elucidation of their phenomena. VHE objects should emit first- and second-generation neutrinos together with photons through processes driven by accelerated protons, but these remain unidentified. Comparison of VHE neutrino and photon flux spectra from the same object will also allow investigations in fundamental particle physics and cosmology. VHE neutrinos and photons can also probe super-heavy dark matter.
Neutrinos and photons are well-known elementary particles of the Standard Model and travel in straight lines through magnetic fields, making them powerful probes for very-high-energy physics and astronomy (VHEPA). Neutrino oscillations cause neutrino fluxes to homogenize between generations during propagation, so tau neutrino observations are also tau-appearance experiments. VHE tau neutrinos skim the Earth, are converted to tau leptons, and, after decaying in air, become an upward air shower at a shallow elevation angle, emitting Cherenkov and fluorescence light.
The Neutrino Telescope Array (NTA) unit is a unique wide-angle, high-precision optical system with an optical-bifurcation trigger imaging system, based on Ashra-1, which performed the first Earth-skimming tau search from a celestial object. NTA, consisting of four stations installed near the summit, simultaneously and binocularly images air-shower Cherenkov light and fluorescence above 1 TeV emitted within a 360$^{\circ}$ $\times$ 32$^{\circ}$ field of view with a precision better than 0.1$^{\circ}$. This detection scheme is particularly powerful for the simultaneous monitoring of Cherenkov and fluorescence light from VHE taus and photons.
For example, it will be possible to detect and identify PeV-EeV sources hidden in the galactic plane and halo with the highest sensitivity, and to search for dark matter with high efficiency. We will report on the possibilities for VHEPA, opened up by the superior performance of NTA.
The laboratory sections included with introductory Physics and Astronomy courses provide students with practical experience and initial laboratory skills that are further honed in higher-level courses. The quality and variety of that experience, as well as its practical applicability, are important.
Since the start of the current pandemic and the restrictions on lab sessions (later replaced with limits on the number of students attending), a new approach had to be found to provide a meaningful lab experience to students. This presentation will show the new experiments and the innovative methodology that were introduced using the Tracker software (https://physlets.org/tracker/) in the introductory Physics I, Physics II, and Introductory Astronomy laboratories. This new methodology not only expanded the number of possible lab experiments, but also provided students with the ability to conduct some of the simpler experiments at home using common household items, offering students an exciting experience as well as a backup option in case of further restrictions on attendance at the lab facilities.
The communities of astrophysicists, astronomers, and high-energy physicists have been pioneers in establishing Virtual Research and Learning Networks (VRLCs) [1], generating productive international consortia in virtual research environments and forming the new generation of scientists. In this talk we will discuss one in particular: LA-CoNGA physics (Latin American alliance for Capacity buildiNG in Advanced physics) [2].
LA-CoNGA physics aims to support the modernization of university infrastructure and the pedagogical offering in advanced physics in four Latin American countries: Colombia, Ecuador, Peru and Venezuela. This virtual teaching and research network is composed of 3 partner universities in Europe and 8 in Latin America, high-level scientific partners (CEA, CERN, CNRS, DESY, ICTP), and several academic and industrial partners. The project is co-funded by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission.
Open Science education and Open Data are at the heart of our operations. In practice, LA-CoNGA physics has created a set of inter-institutional postgraduate courses in advanced physics (high-energy physics and complex systems), supported by the installation of interconnected instrumentation laboratories and an open e-learning platform. This program is inserted as a specialization in the physics master's programs of the 8 Latin American partners in Colombia, Ecuador, Peru and Venezuela. It is based on three pillars: courses in high-energy physics theory/phenomenology, data science, and instrumentation.
In the current context, VRLCs and e-learning platforms are contributing to solving challenges such as distance education during the COVID-19 pandemic.
[1] http://www.oecd.org/sti/inno/international-distributed-research-infrastructures.pdf
[2] http://laconga.redclara.net
Art & Science across Italy is an INFN/CERN project for Italian high-school students (16-18 years old). More than 10,000 students have joined since 2016.
Creativity and the capability for vision are common to many disciplines and are involved in artistic and scientific thinking and activities. Scientists and artists are often asked to see and think beyond the perceivable reality, to imagine aspects of things and events which can be better seen from an unusual perspective. The main idea is to put into practice the basic concept of the STEAM field, in which neither STEM nor the arts is privileged over the other, but both are equally in play. Therefore, our aim is to engage high-school students with science using artistic languages, regardless of students' specific skills or level of knowledge.
The remarkable similarity in the creative processes of science and art provides a unique basis for innovative collaborations between particle physics and fine art developing approaches to enthuse young people regarding STEAM subjects.
"The Sketchbook and the Collider" is an art|science collaboration that has developed into an evolving, travelling package of art exhibitions and supporting educational workshops and events, with recent significant advances in three aspects: moving-image work and its integration into the drawing installations; drawings on a large scale, reflecting the massive machinery necessary for particle physics research while encouraging a more active role for the viewer; and finally the introduction of the practical workshops, originally designed for 14- to 16-year-olds, to much younger pupils aged 7-10 years.
The aspiration is to stimulate inquiry and the development of a "creative curiosity", enhancing learning through creative activities in which physics concepts are communicated through the visual arts. The development, trialling, and evaluation of "The Sketchbook and the Collider" are presented and reflected upon, and the relationship between the educational workshops and the fine-art exhibitions is explored.
Stavros Katsanevas, Director of the European Gravitational Observatory and Coordinator of REINFORCE
I will report on REINFORCE (REsearch INfrastructures FOR Citizens in Europe, 2019-2022) coordinated by the European Gravitational Observatory (EGO) and supported by the European Union's Horizon 2020 SWAFS program. It has developed demonstrator projects in the leading citizen-science platform, Zooniverse, engaging citizens in four frontier-physics research domains. Citizen scientists participate in the analysis of:
a) transient-noise signals, known as "glitches", which are mostly of environmental origin, in data from the Virgo gravitational-wave detector, concurrently participating in the improvement of the sensitivity of the detector through the follow-up of events that may have been caused by, for example, earthquakes, thunderstorms, electromagnetic and anthropogenic noise, and in the eventual discovery of potentially genuine events of cosmological origin;
b) bioluminescence and bioacoustic data from KM3NeT, in the context of neutrino astronomy, helping to optimise detection strategies for cosmic neutrinos, while in parallel participating in a study of marine life in the deep sea environment in the vicinity of the ORCA and ARCA detectors located in the Mediterranean sea;
c) high-energy physics data from the ATLAS experiment at CERN, contributing to the search for complex Higgs boson decays and for new long-lived particles, going beyond visual classification to sonification and comparisons with machine-learning methods;
d) and cosmic-ray data, exploring the connections across the fields of cosmic-ray physics, geology, volcanology and archaeology, through the use of data and simple experimental devices used as radiographic-exploratory and monitoring tools.
All four demonstrator projects interact transversally with an Inclusion work package, aimed at developing sonification techniques to not only provide access to the data to visually-impaired people, but also to increase the perceptual capabilities of general scientific efforts, investigating the ability to distinguish signal from background using the different senses. The citizen scientists' work is also used in conjunction with machine-learning algorithms, effectively mixing human and artificial intelligence.
Four further work packages cover engagement strategies, exploration and evaluation of the resilience of citizen-science endeavours over time (retention being a well-known challenge for all citizen-science projects), and the preparation of a roadmap comparing analogous experiences in other large research infrastructures and augmenting the embedding of the experience in the social fabric through: a) the use of techniques avoiding the instrumentalization of citizens simply as "classification machines"; b) multi-sensorial strategies; c) generalisation of the use of distributed sensors under the responsibility of citizen scientists; d) the use of new mobile applications; e) extension to senior and disabled citizens as well as traditional ecological knowledge users; f) the inclusion of critical thinking and art and science.
In short, REINFORCE attempts to increase the embedding of humanity in the four antique notions of cosmos: cosmos as Universe; cosmos as the geosphere that surrounds us; cosmos as society; and finally, cosmos as the internal world of sensorial representations.
Where do we want to go? Learning from research, recommendations, experience
The discovery of the Higgs boson at CERN's Large Hadron Collider 10 years ago was a watershed moment in the evolution and precision of the Standard Model. ICHEP, and the high energy physics global community, celebrate this anniversary by charting the measurable impact on HEP research and related experiments over the past decade.
2022 also marks 10 years since CERN established its first Diversity & Inclusion ("D&I") Programme. As an international organisation, CERN strives to reflect the diverse populations of its Member States and the diverse communities the Laboratory's science serves. To this end, CERN's many D&I-related actions and policy development since 2012 have enhanced the working conditions and the diversity of its ~4000 employed members of personnel. Moreover, efforts to increase awareness of CERN's diversity value contribute to a conducive and collaborative work environment for its ~12,000 visiting scientists. Nevertheless, some D&I initiatives, notably to increase women in STEM, remain a work in progress.
This talk will chart the progress and the challenges of embedding a culture of Diversity and Inclusion in big science. It will overview the highlights of the D&I Programme's activities, including the appointment of the first D&I Programme Leader in 2011, the implementation of significant diversity-related measures in 2015, and the endorsement by Senior Management of CERN's first gender-based target in 2020.
GENERA Network originated from the EU-funded GENERA project, which supported physics research organisations in their efforts to enhance gender equality from September 2015 until August 2018. The network has continuously expanded and currently comprises 41 organisations from all over Europe. GENERA Network is based on a Memorandum of Understanding and invites interested institutions to join. Its vision is to support, coordinate and improve gender-equality policies in physics research organisations in Europe and worldwide.
In monthly meetings, good practice and experience in implementing gender equality are shared among the members and friends of GENERA Network. Thematic working groups have been established to, e.g., conceptualize workshops and develop training material, in order to provide physicists at all career levels, and institutions in physics, with adequate knowledge on gender equality in physics. Another important topic is to determine whether a gender perspective is also present in physics and the other math-intensive fields of science.
In our contribution we will present an overview of GENERA Network and its activities.
The Particle Physics Community Planning Exercise, "Snowmass", is a prioritization process for scientific goals on the 5-10-year time scale, organized by the Division of Particles and Fields of the American Physical Society. The topics of the extended forum are organized into 10 Frontiers, and each Frontier is further divided into Topical Groups. Community members organize within and across the Topical Groups to submit Letters of Interest on particular topics, which are used to coordinate Contributed (White) Papers. This talk focuses on the Contributed Papers submitted to the Diversity, Equity, and Inclusion Topical Group of the Community Engagement Frontier of Snowmass, as well as their synthesis into the DEI Topical Group Report. The topics covered by the Contributed Papers range from pinpointing obstacles faced by members of marginalized communities in the United States, to analyzing cultural-climate reports of institutes and collaborations, to identifying imbalances in power dynamics that lead to inequitable research environments. This spring and summer, the Contributed Papers will be condensed into increasingly encompassing reports aiming to provide clear recommendations to the funding agencies and the general HEP community.
High energy physics is an international endeavor. Yet in Africa, research, education, and training programs in high energy physics (HEP) are limited in both human capacity and expertise, as well as in resource mobilization. Africa will participate meaningfully in HEP programs when the environment within the international HEP community is conducive, welcoming, and supportive.
Diversifying the workforce in high energy physics & astrophysics (HEPA) has for several years been a priority for the international community. Such international HEPA equality, diversity, and inclusion discussions must be inclusive of Africa.
In this talk I will address the structural and cultural changes that need to occur within the international HEP community to enable equitable access and success for Africa in international HEP programs. The impact of such actions on HEP education and research programs in Africa will be pointed out.
Hadronic resonances, having short lifetimes, are very useful for studying the hadron-gas phase that characterizes the late-stage evolution of high-energy nuclear collisions. Indeed, regeneration and rescattering processes occurring in the hadron gas modify the measured yields of hadronic resonances and can be studied by measuring resonance yields as a function of system size and comparing them to model predictions with and without hadronic interactions. Measurements of the differential yields of resonances with different lifetimes, masses, quark contents, and quantum numbers help in understanding particle-production mechanisms, the lifetime of the hadronic phase, strangeness production, parton energy loss, rapidity yield asymmetry, and collective effects. With its excellent tracking and particle-identification capabilities, the ALICE experiment has measured a comprehensive set of both meson and baryon resonances. We present recent results on resonance production in pp, p-Pb, Xe-Xe and Pb-Pb collisions at various centre-of-mass energies, highlighting new results on $\Sigma$(1385) and $\Xi$(1820), thus extending the study of baryonic resonances at the LHC to higher mass. The obtained results are used to study the system-size and collision-energy evolution of transverse-momentum spectra, yields, mean transverse momentum, yield ratios to stable hadrons, and nuclear modification factors. These results are compared to lower-energy measurements and model calculations where available.
We consider the experimental data on the yields of protons, strange $\Lambda$'s, and multistrange baryons ($\Xi$, $\Omega$) and antibaryons produced on nuclear targets, and the experimental ratios of multistrange to strange antibaryon production, in the energy region from SPS up to LHC, and compare them to the results of Quark-Gluon String Model calculations. In the case of heavy-nucleus collisions, the experimental dependence of the $\bar{\Xi}^+/\bar{\Lambda}$ and, in particular, the $\bar{\Omega}^+/\bar{\Xi}^+$ ratios on the centrality of the collision shows a manifest violation of quark combinatorial rules.
In recent years, ALICE has carried out many measurements of the production of light nuclei in pp and p-Pb collisions at different energies. A clear dynamics with multiplicity arises when combining all these measurements; however, the theoretical interpretation of this evolution is still debated. In this presentation, the measurements of the ratio between the production yields of nuclei and protons, and of the coalescence parameters $B_2$ and $B_3$, as a function of multiplicity are shown and compared with the predictions of the statistical hadronisation model and of the coalescence model. Moreover, the measurements of the coalescence parameters $B_2$ and $B_3$ in high-multiplicity pp collisions as a function of the transverse momentum are compared with theoretical predictions that take into account both the form of the nuclear wavefunction and the dependence of the source size on the transverse momentum of the nucleons. Finally, by comparing the production of (anti)deuterons in-jet and out-of-jet, it has been possible to observe an enhancement of nucleus production due to the collimated emission of nucleons.
The rapidity dependence of particle production contains information on the partonic structure of the projectile and target and is, in particular at LHC energies, sensitive to non-linear QCD evolution in the initial state. At the LHC, collision final states have mainly been studied in the central kinematic region; however, there is a rich opportunity for measurements in the forward direction, which probe the nucleon structure at small Bjorken-x values. Moreover, investigating the system-size dependence of particle production at the same collision energy is particularly important for directly studying nuclear effects.
In the first part of the talk, the final Run 1 and 2 particle-production results at forward rapidities will be presented for pp, p-Pb, and Pb-Pb collision systems, where ALICE has unique coverage. When combined, the Forward Multiplicity and the Silicon Pixel Detectors can measure charged particles over a wide range of $-3.4<\eta<5.0$. The Photon Multiplicity Detector has complementary coverage for neutral-particle production over the kinematic range $2.3<\eta<3.9$.
In the second part of the presentation, we will introduce the upgraded Run 3 ALICE configuration. The new Inner Tracking System, based on Monolithic Active Pixel Sensors, allows full tracking and vertexing for $|\eta|<2.5$. When combined with the new Muon Forward Tracker, the tracking can be extended to cover $-3.6<\eta<2.5$. The performance of the new detectors and of the tracking/matching algorithms will be presented for the $\sqrt{s}=900$ GeV pp pilot-beam data taking of autumn 2021.
Recent multiplicity-dependent studies of particle production in pp and p-Pb collisions have shown features similar to those in heavy-ion collisions. Measurements using resonances could help in understanding the possible onset of collective-like phenomena and a non-zero lifetime of the hadronic phase in small collision systems. Measurements of the differential yields of resonances with different lifetimes, masses, quark contents, and quantum numbers will provide information on the mechanisms that influence the shape of particle-momentum spectra, the lifetime of the hadronic phase, strangeness production, parton energy loss, and collective effects. This talk presents new ALICE results on various hadronic resonances in small collision systems at LHC energies, including the multiplicity-dependent measurements of $\Lambda$(1520) and charged K$^*$ and the production of $\phi$-meson pairs. The results will be compared with model calculations and measurements at lower energies.
Relativistic heavy-ion beams at the LHC are accompanied by a large flux of equivalent photons, leading to multiple photon-induced processes. This talk presents a series of measurements of such processes performed by the ATLAS Collaboration. New measurements of exclusive dilepton production (electron, muon, and tau pairs) are discussed. These processes provide strong constraints on the nuclear photon flux and its dependence on the impact parameter and photon energy. In particular, measurements of the cross-sections in the presence of forward neutrons provide an additional experimental handle on the impact parameter range sampled in the observed events. Furthermore, the tau-pair production measurements can constrain the tau lepton's anomalous magnetic dipole moment. High statistics measurements of light-by-light scattering shown in this talk provide a precise and unique opportunity to investigate extensions of the Standard Model, such as the presence of axion-like particles. Presented measurements of muon pairs produced via two-photon scattering processes in hadronic Pb+Pb collisions provide a novel test of strong-field QED and can be a potentially sensitive electromagnetic probe of the quark-gluon plasma. These include the dependence of the cross-section and angular correlation on the mean-p_T of the dimuon pair, the rapidity separation between the muons, and the angle that the pair makes with the second-order event-plane. Presented results are further compared with recent theory calculations.
Measurements of hard processes in heavy-ion collisions provide powerful and broad information on the dynamics of the hot, dense plasma formed in relativistic nucleus-nucleus collisions. This talk gives an overview of the latest jet measurements with the ATLAS detector at the LHC, utilizing the high statistics 5.02 TeV Pb+Pb data collected in 2015 and 2018. This talk presents multiple measurements of jet production and structure using novel analysis techniques. New results sensitive to the role of color-charge on jet quenching using EW boson-tagged jets will be also shown. Further, the single jet yields as a function of the azimuthal angle with respect to the 2nd, 3rd, and 4th event planes, and a new measurement of the dijet momentum balance will be discussed. A particular focus of the measurements is the systematic comparison of fully unfolded data to state-of-the-art theoretical models.
The Cryogenic Underground Observatory for Rare Events (CUORE) is the first bolometric experiment searching for 0νββ decay that has been able to reach the one-tonne mass scale. The detector, located at the LNGS in Italy, consists of an array of 988 TeO2 crystals arranged in a compact cylindrical structure of 19 towers. CUORE began its first physics data run in 2017 at a base temperature of about 10 mK, and in April 2021 released its third result of the search for 0νββ, corresponding to a tonne-year of TeO2 exposure. This is the largest amount of data ever acquired with a solid-state detector and the most sensitive measurement of 0νββ decay in 130Te ever conducted, with a median exclusion sensitivity of 2.8×10^25 yr. We find no evidence of 0νββ decay and set a lower bound on the 130Te half-life for this process of 2.2×10^25 yr at a 90% credibility interval. In this talk, we present the current status of the CUORE search for 0νββ with the updated statistics of one tonne-yr. We finally give an update on the CUORE background model and on the measurement of the 130Te 2νββ decay half-life, a study performed using an exposure of 300.7 kg yr.
CUPID-0 is a pilot experiment in scintillating cryogenic calorimetry for the search of neutrino-less double beta decay $(0\nu\beta\beta)$. 26 ZnSe crystals were operated continuously in the first project phase (March 2017 - December 2018), demonstrating unprecedented low levels of background in the region of interest at the Q-value of $^{82}$Se. From this successful experience comes a demonstration of full alpha to beta/gamma background separation, the most stringent limits on the $^{82}$Se $0\nu\beta\beta$, as well as the most precise measurement of the $^{82}$Se half-life $(2\nu\beta\beta)$. After a detector upgrade, CUPID-0 began its second and last phase (June 2019 - February 2020). We present the latest results on the $0\nu\beta\beta$ decay of $^{82}$Se, both to the ground and excited states, with the full isotope exposure of 8.82 kg $\times$ yr. We set a lower bound to the ground state $0\nu\beta\beta$ half life T$_{1/2}(^{82}\rm{Se}) > 4.6 \times 10^{24}~$yr (90% C.I.). We review the most recent results from a Bayesian search for spectral distortions to the $^{82}$Se double-beta decay spectrum due to exotic decay modes.
The GERmanium Detector Array (GERDA) experiment searched for the lepton-number-violating neutrinoless double-β (0νββ) decay of 76Ge. Observing such a decay would shed light on the nature of neutrinos, and its discovery would have far-reaching implications in cosmology and particle physics. By operating an array of bare high-purity germanium detectors enriched in 76Ge in an active liquid argon shield, aided by pulse-shape discrimination of the germanium detector signals, GERDA achieved an unprecedentedly low background index of 5.2 × 10^-4 counts/(keV kg yr) in the signal region and was able to surpass the design goal of collecting an exposure of 100 kg yr in a background-free regime. With a total exposure of 127.2 kg yr combining Phase I and Phase II, no signal was observed, and a lower limit on the half-life of 0νββ decay in 76Ge is set at T1/2 > 1.8 × 10^26 yr at 90% C.L., which coincides with the sensitivity assuming no signal.
KamLAND-Zen searches for neutrinoless double-beta (0νββ) decay with a Xe-136-loaded liquid scintillator (LS). 0νββ decay violates lepton-number conservation and requires two characteristic neutrino properties: non-zero mass and the Majorana nature of the neutrino. Assuming the minimal mechanism of the decay, its observation would constrain the neutrino mass hierarchy and mass scale.
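Under the standard assumption of light-Majorana-neutrino exchange, the connection between the measured half-life and the neutrino mass scale is usually written as

$$\left[T_{1/2}^{0\nu}\right]^{-1} = G^{0\nu}\,\left|M^{0\nu}\right|^{2}\left(\frac{\langle m_{\beta\beta}\rangle}{m_e}\right)^{2},\qquad \langle m_{\beta\beta}\rangle=\Big|\sum_i U_{ei}^{2}\,m_i\Big|,$$

where $G^{0\nu}$ is a phase-space factor, $M^{0\nu}$ the nuclear matrix element, and $U$ the PMNS matrix; this is the textbook relation invoked in such interpretations, not a result specific to KamLAND-Zen.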
After the successful completion of KamLAND-Zen 400, KamLAND-Zen 800 started data taking in 2019 with almost double the amount of xenon and a low-radioactivity LS container. In this talk, we will present the latest result of KamLAND-Zen 800 with a 1 ton yr exposure.
The search for neutrinoless double beta ($0\nu \beta \beta$) decay is important because its discovery would reveal a lepton-number-violating process and its connection to the origin of the neutrino masses.
The LEGEND collaboration follows the GERDA and MAJORANA Demonstrator collaborations, with the mission to build a ton-scale $^{76}$Ge-based experiment. As a first phase, LEGEND-200 is being installed in the upgraded GERDA infrastructure at the Laboratori Nazionali del Gran Sasso of INFN. It is based on 200 kg of High-Purity Germanium (HPGe) detectors and aims to reach a discovery sensitivity on the $0\nu \beta \beta$ half-life of 10$^{27}$ years. LEGEND-1000 will rely on novel high-mass HPGe detectors and several other novel techniques to be tested in LEGEND-200. An overview of the LEGEND-200 setup and its installation status will be provided.
The Large Enriched Germanium Experiment for Neutrinoless $\beta\beta$ Decay (LEGEND) is a ton-scale, $^{76}$Ge-based, neutrinoless double-beta ($0\nu\beta\beta$) decay experimental program with a discovery potential for half-lives beyond 10$^{28}$ years.
LEGEND takes a phased approach that enables the collaboration to gradually increase the detector mass and exposure, and at the same time reduce the background in the signal region of interest. The first, 200-kg, phase of the experiment (LEGEND-200) is currently being commissioned at the Gran Sasso underground laboratory (Laboratori Nazionali del Gran Sasso, LNGS) in Italy. The subsequent ton-scale phase of the experiment (LEGEND-1000) is currently in the conceptual design stage and construction is expected to start as early as 2025.
In this contribution, the physics reach, background requirements, conceptual design, and timeline of LEGEND-1000 will be presented.
The search for neutrinoless double-beta (0νββ) decay aspires to cast light on a critical missing piece in our knowledge: the nature of the neutrino mass. It is the most sensitive experimental way to demonstrate that the neutrino is a Majorana particle.
The challenge of observing such a potentially rare process demands a detector with excellent energy resolution, extremely low radioactivity and a large mass of emitter isotope. The R2D2 project is an R&D effort to investigate the feasibility of a high-pressure spherical TPC as a detector for 0νββ decay searches. A prototype has demonstrated excellent resolution with argon, and preliminary results with xenon are very promising. Furthermore, the simultaneous read-out of ionisation and scintillation light has been demonstrated, which will facilitate event localisation.
These proof-of-concept results obtained with the first R2D2 prototype will be presented, and the next steps in the R&D roadmap will be discussed.
During Run 3, the LHC will deliver instantaneous luminosity in the range 5 x 10^34 cm^-2 s^-1 to 7 x 10^34 cm^-2 s^-1. To cope with the high background rates and to improve the trigger capabilities in the forward region, the muon system of the CMS experiment has been upgraded with two new stations of detectors (GE1/1), one in each endcap, based on triple-GEM technology. The system was installed in 2020 and consists of 72 ten-degree Super Chambers, each made up of two layers of triple-GEM detectors. GE1/1 provides two additional muon hit measurements which will improve muon tracking and triggering performance. We report on the status of the ongoing commissioning phase of the detector and present preliminary results obtained from cosmic-ray events. We discuss detector and readout electronics operation, stability and performance, and preparation for Run 3. Particular attention will be given to issues encountered during CMS magnet commissioning which induced trips and short-circuits in the GEM detectors.
After Run 3, the Large Hadron Collider (LHC) will be upgraded to its High Luminosity phase (HL-LHC). The triggering capabilities in the forward region of the CMS detector will be enhanced to accommodate the dramatic increase in collision rate. New stations of triple-layer Gas Electron Multiplier (GEM) detectors will be installed in the endcap regions of the CMS muon system. The first set of two GEM stations (GE1/1) is already installed. A second set of two stations (GE2/1) will consist of 72 20-degree chambers, 36 in each endcap. These chambers will improve trigger performance through the measurement of the muon bending angle. In this talk we discuss the design modifications between GE1/1 and GE2/1, the performance of a GE2/1 prototype installed in the CMS detector, and progress towards installation of the full GE2/1 system.
The High Luminosity LHC (HL-LHC) program will pose a great challenge for the different components of the CMS Muon Detector. The existing systems, which consist of Drift Tubes (DT), Resistive Plate Chambers (RPC) and Cathode Strip Chambers (CSC), will have to operate at an instantaneous luminosity five times larger than they were designed for and, consequently, will have to sustain about ten times the original LHC integrated luminosity. Additionally, to cope with the high-rate environment and maintain good performance, additional Gas Electron Multiplier (GEM) and improved RPC (iRPC) detectors will be installed in the innermost region of the forward muon spectrometer of the CMS experiment. The design of these new detectors will have to ensure their long-term operation in a harsh environment. Finally, RPCs and CSCs use gases with a high global warming potential (GWP), so a search for new eco-friendly gases is necessary, as part of a CERN-wide program. To address all of these challenges, a series of accelerated irradiation studies has been performed for all the muon systems, mainly at the CERN Gamma Irradiation Facility (GIF++) or with dedicated X-ray sources. This talk will report the status of the longevity studies of the different systems of the CMS Muon Detector after the large integrated charge of the last years. Additionally, actions taken to reduce detector aging and to minimize greenhouse gas consumption will be discussed.
The muon spectrometer of the ATLAS detector has recently undergone a major upgrade in preparation for operation under the experimental conditions foreseen at the High-Luminosity LHC (HL-LHC). Two New Small Wheels (NSW) have been constructed and installed to replace the first muon stations in the high-rapidity regions of the ATLAS detector. The new system is designed to provide improved muon trigger momentum resolution and fake-rate rejection in the forward region of the detector, in order to maintain the current ATLAS physics capability in the higher-background environment of the HL-LHC. The NSW has an active area of more than 1200 m^2 and is equipped with multiple layers of two novel detector technologies, small-strip Thin Gap Chambers (sTGC) and Micromegas (MM), making it the first large-scale use of Micromegas technology in high-energy experiments. Latest results from the commissioning of the NSW in preparation for the LHC Run-3 data taking, as well as initial performance measurements, will be presented.
After three years of shutdown (LS2), the LHC restarted in April 2022, with plans to run at an average instantaneous luminosity of 2.0 x 10^33 cm^-2 s^-1 at the LHCb interaction point, a factor of 5 higher than in the previous runs. In order to cope with the increased luminosity and to take data at the full bunch crossing frequency (30 MHz visible interaction rate) in trigger-less mode, the LHCb detector has just undergone a major upgrade, which will allow LHCb to collect ~50 fb^-1 in the next 10 years. The LHCb Muon Detector has performed exceptionally well over the last ten years, providing a muon track detection efficiency of 99% in Run 1 and 97.4% in Run 2. Its main upgrade consists of new off-detector and control electronics, able to cope with the full LHC bunch crossing frequency in trigger-less mode. A Phase-2 upgrade of the LHCb detector has also been proposed, for the further increase of instantaneous luminosity foreseen at the LHC (High-Luminosity LHC). In this talk we will present the upgraded Muon Detector, with particular focus on the installation and commissioning activities, the results of the functional tests performed during LS2, and the very first, preliminary performance studies with new data. An overview of the proposed Upgrade 2 Muon Detector will also be presented.
Recent CMS measurements of rare B0s meson decays are discussed, including their branching fractions and effective lifetimes. The studies are based on data collected in pp collisions at sqrt(s) = 13 TeV with the CMS experiment at the LHC.
The persistent hints of LFU violation in $b \to s \ell \ell$ may imply the existence of leptoquarks close to the TeV scale that couple to $b\mu$ and $s\mu$. These leptoquark Yukawa couplings can in full generality be complex and thus provide a new source of CP violation. We show that a large CP phase with a definite sign is perfectly viable for an $S_3$ leptoquark of mass below a few TeV, consistent with CP even and CP odd $b \to s \ell \ell$ and $B_s$ mixing observables. Furthermore, we show how the direct CP asymmetries in $B\to K \mu \mu$ are significantly enhanced in the vicinity of narrow charmonia, and that their measurement in the future could provide important additional information in constraining the potentially CP violating NP in $b \to s \ell \ell$. Possibilities for constraining such CP phases at the LHC and future colliders will also be presented.
Very rare B-meson decays, in particular Flavour-Changing Neutral-Current processes of the type B(s)-> l+ l-, are an excellent tool to search for New Physics, as they provide a clear experimental signature and they can be predicted very precisely in the Standard Model. Latest measurements of these processes using the large sample of beauty-hadron decays collected by the LHCb experiment during Run 1 and 2 of the LHC, corresponding to an integrated luminosity of 9/fb, will be presented.
The description of the dynamics behind baryonic decays of heavy flavoured particles is very challenging from the theoretical point of view. The branching ratio enhancement close to the p-pbar threshold in multibody decays and the suppression of the branching fractions to two-body final states are very interesting features of these processes. In this presentation, the most recent LHCb results in the search for charmless baryonic decays of beauty hadrons are reported.
Rare B-hadron decays mediated by b-> s l l transitions provide a sensitive test of Lepton Flavour Universality (LFU), a symmetry of the Standard Model by which the coupling of the electroweak gauge bosons to leptons is flavour universal. Extensions of the SM do not necessarily preserve this symmetry and may give sizable contributions to these processes. Precise measurements of LFU ratios are, therefore, an extremely sensitive probe for New Physics. Recent results from LHCb on lepton flavour universality tests in rare b-> s l l decays are discussed.
We report the results of the search for the baryon-number-violating decay $B^- \to \bar\Xi_c^0 \Lambda_c^-$. We use a data sample containing 772 million $B\bar{B}$ pairs collected by the Belle detector operating at the asymmetric $e^{+}e^{-}$ collider KEKB. The results can be interpreted in terms of the previously-discovered Standard Model decay $B^- \to \bar\Xi_c^0 \Lambda_c^-$, followed by $\Xi_c^0-\bar\Xi_c^0$ oscillations. Measurements of baryon-number-violating oscillations in the heavy-baryon sector provide an avenue to investigate the origin of the matter-antimatter asymmetry of the universe. Searches for lepton-flavor-violating decays, including $\Upsilon(1S) \to \ell\ell'$ $(\ell=e,\mu; \ell'=e,\mu,\tau)$ and $B \to \ell\tau$, are also reported in this presentation.
Many exotic resonances have been observed recently at the LHC and other experiments. In this talk, CMS studies of exotic multiquark states are presented, using data collected in pp collisions at sqrt(s) = 13 TeV.
Recent results from the proton-proton collision data taken by the ATLAS experiment on exotic resonances will be presented. A search for $J/\psi\ p$ resonances in $\Lambda_b \to J/\psi\ p K$ decays with large $pK$ invariant masses will be reported. Studies of $Z_c$ states in $B$-meson decays with the Run 2 data at 13 TeV will also be discussed. Searches for exotic resonances in 4 muon final states will be shown.
The discoveries of meson-like exotic states have been attracting huge interest from the hadron physics community. Studies of their spectroscopy can deepen our understanding of the internal structure and dynamics of hadrons. The LHCb experiment has been making significant contributions to such studies, thanks to the large dataset provided by the LHC and the well-suited design of the detector. This talk will present the recent results on mesonic exotic states from LHCb.
The discovery of pentaquark candidates at LHCb in 2015 led to a renaissance of exotic hadron spectroscopy. There is as yet no consensus on the nature of pentaquarks, calling for further experimental efforts. The large dataset and excellent detector performance give the LHCb experiment unprecedented capability in such studies. In this talk, the latest results on pentaquark studies from LHCb will be discussed.
Using a scan sample taken at center-of-mass energies from 3.773 GeV to 4.95 GeV with an integrated luminosity of 22/fb, the properties of XYZ states are investigated at BESIII. The cross sections of $e^+e^- \to D^{*+} D^{*-}$ and $D^{*+} D^-$, $e^+e^- \to K^+ K^- J/\psi$, $e^+e^- \to \pi^+ \pi^- \psi_2(3823)$, and $e^+e^- \to \Lambda \bar{\Lambda}$ are measured. The new decay modes $X(3872) \to \pi^0 \chi_c^0$ and $\pi \pi \chi_c^0$ are searched for, where the $X(3872)$ is produced via the process $e^+e^- \to \gamma X(3872)$. Candidates for the hidden-charm tetraquark with strangeness are studied in the process $e^+e^- \to K^+(D_s^- D^{*0} + D_s^{*-} D^0)$.
Belle II offers unique possibilities for the discovery and interpretation of exotic multiquark combinations to probe the fundamentals of QCD.
This talk presents recent results on the amplitude analysis of the charmonium-like state $X(3940)$ and searches for the hidden-bottom transition between $Y(10750)$ and $\chi_{bJ}$.
The LHCb collaboration recently discovered a doubly charmed tetraquark $T_{cc}$ with flavor $cc\bar u\bar d$ just $0.36(4)~$MeV below $D^0D^{*+}$ threshold. This is the longest lived hadron with explicitly exotic quark content known to this date. We present the first lattice QCD study of $DD^*$ scattering in this channel, involving rigorous determination of pole singularities in the related scattering amplitudes that point to the existence of $T_{cc}$. Working with a heavier than physical light quark mass, we find evidence for a shallow virtual bound state pole in the $DD^*$ scattering amplitude with $l=0$, which is likely related to $T_{cc}$.
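As a point of reference for the pole determination described above (a textbook $S$-wave parametrization, not necessarily the exact amplitude forms used in the study), the near-threshold amplitude can be written via the effective-range expansion:

$$ t(k) \,\propto\, \frac{1}{k\cot\delta_0(k) - ik}\,, \qquad k\cot\delta_0(k) = -\frac{1}{a_0} + \frac{r_0}{2}\,k^2 + \dots $$

A pole at purely imaginary momentum $k = +i\kappa$ ($\kappa > 0$) corresponds to a bound state, while a pole at $k = -i\kappa$, i.e. on the second Riemann sheet in energy, corresponds to a virtual bound state of the kind reported above.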
The sensitivity of the Pierre Auger Observatory to ultra-high-energy neutral particles, such as photons, neutrinos and neutrons, allows it to take an active part in multi-messenger searches in collaboration with other observatories. Searches for photons and neutrinos exploit the design of the Pierre Auger Observatory, which makes it possible to use the different properties of cosmic-ray-, neutrino- and photon-induced showers to discriminate between them. Both diffuse and point-source fluxes of photons and neutrinos are searched for. Furthermore, photon and neutrino follow-ups of the gravitational-wave events observed by the LIGO/Virgo Collaboration are conducted. The Pierre Auger Observatory is also used to search for neutrons from point-like sources. In contrast to photons and neutrinos, neutrons induce air showers that cannot be distinguished from those produced by protons. For this reason, the search for neutrons from a given source is performed by looking for an excess of air showers from the corresponding direction. All these searches have resulted in stringent upper limits on the corresponding particle fluxes, which, together with the results obtained by other experiments, shed some light on the most energetic phenomena of our Universe. An overview of the multi-messenger activities carried out within the Pierre Auger Collaboration is presented.
The Pierre Auger Observatory, in the south of Mendoza province (Argentina), is the largest facility in the world for observing ultra-high-energy cosmic rays (UHECR) and has been taking data for almost twenty years. It is designed to simultaneously detect the longitudinal development of extensive air showers in the atmosphere and measure particle densities at ground level. This hybrid technique has produced results with unprecedented precision. In this talk, I will report on the energy spectrum, mass composition and arrival directions of cosmic rays in the range of $10^{16.5}$ eV to $10^{20.0}$ eV. I will also present the upgrade of the Observatory's detection system, AugerPrime, which aims at improving the observables sensitive to mass composition at the highest energies, in order to tackle the still-open questions regarding the origin of UHECRs.
In the last few years, gamma-ray astronomy has opened a new window in the sub-PeV to PeV range, inaugurated by the Tibet AS$\gamma$ collaboration and followed by the HAWC and LHAASO collaborations. Gamma rays in this energy range are expected to be emitted by the decay of neutral pions produced in interactions between cosmic-ray particles and interstellar matter, and are therefore important for identifying the origin of cosmic rays. These three successful experiments are located in the northern hemisphere and are not able to study the southern sky, where potentially interesting objects are known to exist.
Andes Large area PArticle detector for Cosmic ray physics and Astronomy (ALPACA) is a project to cover the southern sub-PeV to PeV sky with a new air shower array on the plateau of Mount Chacaltaya, at an altitude of 4,740 m in Bolivia. An 83,000 m$^2$ surface area is covered by 400 scintillation counters of 100 cm × 100 cm × 5 cm. In addition to this conventional surface array, underground muon detectors covering a total of 3,700 m$^2$ allow a clear identification of the muon component in air showers. This makes it possible to discriminate between hadronic and electromagnetic showers and to detect weak gamma-ray signals against the dominant isotropic hadronic background. Using this array, ALPACA will explore the sub-PeV to PeV gamma-ray sky for the first time in the southern hemisphere. The prime target of ALPACA is to reveal PeV cosmic-ray accelerators presumably existing in the Galactic plane, including the Galactic center. A prototype array, ALPAQUITA, consisting of 97 surface counters and 900 m$^2$ of muon detectors, is now under construction and planned to start data taking in 2022. The extension to 200 counters and 3,700 m$^2$ of muon detectors is scheduled for 2023. In this contribution, a general introduction to ALPACA, the current status of ALPAQUITA and its infrastructure, and the extension plan beyond 2023 are presented.
The DArk Matter Particle Explorer (DAMPE) satellite mission, launched in December 2015, has been in operation for more than 6 years. Its main sub-detector, a thick BGO imaging calorimeter, is capable of measuring gamma rays and cosmic-ray electrons up to about 10 TeV and cosmic-ray ions up to about 100 TeV. This talk gives an overview of the mission and presents the latest results on the electron, proton and helium fluxes, as well as other physics results of DAMPE.
The Calorimetric Electron Telescope (CALET), in operation on the International Space Station since 2015, has collected a large sample of cosmic rays over a wide energy interval. The instrument identifies the charge of individual elements up to nickel and beyond and, thanks to a homogeneous lead-tungstate calorimeter, measures the energy of cosmic-ray nuclei, providing a direct measurement of their spectra. Iron and nickel offer a favourable opportunity for a low-background spectral measurement, thanks to the negligible contamination from spallation of higher-mass elements and to their abundance among the heavy elements. They also play a key role in understanding the acceleration and propagation mechanisms of charged particles in our Galaxy.
In this contribution, direct measurements of the iron and nickel spectra, based on more than five years of data, are presented in the energy ranges from 10 GeV/n to 2 TeV/n and from 8.8 GeV/n to 240 GeV/n, respectively. The spectra are compatible within the errors with a single power law in the energy regions from 50 GeV/n to 2 TeV/n and from 20 GeV/n to 240 GeV/n, respectively. The systematic uncertainties are also detailed, and the nickel-to-iron flux ratio is presented.
This unprecedented measurement confirms that the fluxes of the two elements are very similar in shape and energy dependence, suggesting that their origin, acceleration and propagation might be explained by invoking an identical mechanism in the energy range explored so far.
The High Energy cosmic-Radiation Detection (HERD) facility is a future space experiment designed for the direct measurement of cosmic rays (CR). The instrument will be installed aboard China's Space Station around 2027 and is based on a homogeneous, deep, 3D-segmented calorimeter. The calorimeter is surrounded by scintillating-fiber trackers, anti-coincidence scintillators, silicon charge detectors and a transition radiation detector. The HERD instrument is designed to feature a very large acceptance, about one order of magnitude larger than previous experiments. Thanks to its innovative design, the HERD experiment will extend the measurements of cosmic rays and gamma rays by about one order of magnitude in energy with respect to current results. Fundamental progress in our understanding of the propagation and acceleration of CRs inside the Galaxy will be achieved by measuring the flux of protons and nuclei above hundreds of TeV per nucleon. By exploring the electron flux in the multi-TeV region, it will be possible to search for signatures of dark matter and of nearby astrophysical sources. Finally, thanks to its large field of view, the experiment will also monitor the gamma-ray sky from a few hundred MeV up to 1 TeV. In this contribution, a review of the current status of the experiment will be presented, with particular regard to the estimated detector performance and the expected physics results.
The Tibet ASgamma experiment is located at 4,300 m above sea level in Tibet, China. The experiment is composed of a 65,700 m^2 surface air shower array and 3,400 m^2 of underground water-Cherenkov muon detectors. The surface air shower array is used for reconstructing the primary particle energy and direction, while the underground muon detectors are used for discriminating gamma-ray-induced muon-poor air showers from cosmic-ray (proton, helium, ...) induced muon-rich air showers. Recently, the Tibet ASgamma experiment successfully observed gamma rays in the 100 TeV region from some point/extended sources, as well as sub-PeV diffuse gamma rays along the Galactic disk. In this presentation, these observational results will be presented, followed by some future prospects.
MicroBooNE is an 85-tonne active mass liquid argon time projection chamber (LArTPC) at Fermilab. It has excellent calorimetric, spatial and energy resolution and is exposed to two neutrino beams, which make it a powerful detector not just for neutrino physics, but also for Beyond the Standard Model (BSM) physics. The experiment has competitive sensitivity to heavy neutral leptons possibly present in the leptonic decay modes of kaons, and also to scalar bosons that could be produced in kaon decays in association with pions. In addition, MicroBooNE serves as a platform for prototyping searches for rare events in the future Deep Underground Neutrino Experiment (DUNE). This talk will explore the capabilities of LArTPCs for BSM physics and highlight some recent results from MicroBooNE.
This talk presents a model-independent search for an additional heavy, mostly sterile, neutral lepton (HNL) capable of mixing with the Standard Model tau neutrino with a mixing strength of $|U_{\tau 4}|^{2}$, corresponding to the square of the extended Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix element. HNLs are hypothetical particles predicted by many beyond-Standard-Model theories, which can explain oscillation anomalies as well as the baryon asymmetry of the universe through leptogenesis. HNLs can also provide dark matter candidates. We search for HNL production in the decays of the tau lepton by analyzing a data set from the $BABAR$ experiment with a total integrated luminosity of 424 fb$^{-1}$. A kinematic approach is taken, and no assumptions are made regarding the model behind the origins of the HNL, its lifetime or its decay modes. A binned likelihood technique is utilized and HNLs of mass $100
The LHeC and the FCC-he offer fascinating, unique possibilities for discovering BSM physics in DIS, both due to their large centre-of-mass energies and high luminosities. In this talk we will show the prospects for observing extensions of the Higgs sector with both charged and neutral scalars, anomalous Higgs couplings and exotic decays. Then we will discuss searches for R-parity-conserving and -violating supersymmetry, both with prompt and long-lived particles, and for feebly interacting particles such as sterile neutrinos, fermion triplets, dark photons and axion-like particles. Finally we will address anomalous couplings and searches for heavy resonances such as leptoquarks and vector-like quarks, excited fermions and colour-octet leptons.
Reference: P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 110501, arXiv:2007.14491 [hep-ex].
The smallness of neutrino masses which, together with neutrino oscillations, could be pointing to physics beyond the Standard Model, can be naturally accommodated by the so-called "seesaw" mechanism, in which new heavy neutral Majorana leptons (HNL) are postulated. Several models with HNLs exist that incorporate the seesaw mechanism, sometimes also providing a dark matter candidate or a possible explanation for the baryon asymmetry. This talk presents searches for HNLs interpreted in such models, using both prompt and long-lived signatures in CMS, based on the full Run 2 dataset collected at the LHC.
Extensions of the Standard Model with right-handed (sterile) neutrinos offer viable explanations for the origin of neutrino masses and could solve a variety of open questions in physics, such as neutrino oscillation anomalies, the nature of dark matter, and the baryon asymmetry. Multiple models posit the existence of a GeV-scale sterile neutrino, also called a heavy neutral lepton (HNL), which decays to known Standard Model particles. HNL production from atmospheric neutrinos and the HNL's subsequent decay can produce a unique double-cascade signature in the IceCube detector, which can be utilized to search for GeV-scale HNLs at atmospheric neutrino energies. We investigate the ability of IceCube DeepCore to reconstruct and identify low-energy double-cascade topologies for HNLs in the mass range of 0.1-3 GeV.
Neutrinos are probably the most mysterious particles of the Standard Model. The mass hierarchy and oscillations, as well as the nature of their antiparticles, are currently being studied in experiments around the world. Moreover, in many models of New Physics, the baryon asymmetry or the dark matter density of the universe is explained by introducing new species of neutrinos. Among others, heavy neutrinos of Dirac or Majorana nature have been proposed to solve problems persistent in the Standard Model. Such neutrinos with masses above the EW scale could be produced at future linear e+e- colliders, like the Compact Linear Collider (CLIC) or the International Linear Collider (ILC).
We studied the possibility of observing the production and decays of heavy neutrinos in the qql final state at the ILC running at 500 GeV and 1 TeV and at CLIC running at 3 TeV. The analysis is based on event generation with WHIZARD and fast simulation of the detector response with DELPHES. Dirac and Majorana neutrinos with masses from 200 GeV to 3.2 TeV are considered. The estimated limits on the production cross sections and on the neutrino-lepton coupling are compared with the current limits from the LHC running at 13 TeV, as well as with the expected future limits from hadron colliders. The impact of gamma-induced backgrounds on the experimental sensitivity is also discussed. The obtained limits are stricter than other estimates published so far.
We exhibit the geometric structure of the convex cone in the linear space of the Wilson coefficients of the dimension-8 operators involving the left-handed lepton doublet $L$ and the Higgs doublet $H$ in the Standard Model effective field theory (SMEFT). The boundary of the convex cone gives rise to the positivity bounds on the Wilson coefficients, while each extremal ray corresponds to a unique particle state in the ultraviolet-complete theory. Among the three types of canonical seesaw models for neutrino masses, we discover that only the right-handed neutrinos of the type-I seesaw model show up as one of the extremal rays, whereas the heavy particles of the type-II and type-III seesaw models live inside the cone. An experimental determination of the relevant Wilson coefficients close to the extremal ray of the type-I seesaw model would unambiguously pin down, or rule out, the latter as the origin of neutrino masses. This discovery offers a novel way to distinguish the most popular seesaw model from the others, and also strengthens the SMEFT as an especially powerful tool to probe new physics beyond the Standard Model.
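In schematic terms (our notation, intended only to illustrate the geometry), any physical vector of Wilson coefficients $\vec{C}$ in such a cone can be decomposed as a positive combination of the extremal rays $\vec{e}_i$:

$$ \vec{C} \;=\; \sum_i \lambda_i\, \vec{e}_i\,, \qquad \lambda_i \ge 0\,, $$

so a measured $\vec{C}$ lying on (or arbitrarily close to) a single extremal ray would single out the corresponding ultraviolet state, here the right-handed neutrino of the type-I seesaw.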
INFN CNAF is the National Center of INFN (National Institute for Nuclear Physics) for research and development in the field of information technologies applied to high-energy physics experiments.
CNAF hosts the largest INFN data center, which also includes a WLCG Tier-1 site (one of 13 around the world), providing the resources, support and services needed for computing and data handling in the Worldwide LHC Computing Grid (WLCG) framework. The data center also represents a key data facility for many astroparticle and nuclear physics experiments.
The Data management team manages and makes available all the Tier1 storage resources to the scientific community. Currently, such resources consist of more than 50 PB of disk storage and more than 110 PB of tape storage.
We describe the adopted technologies for Data Management and Data Transfer and how our services are evolving to cope with the requirements imposed by the High-Luminosity LHC era, in the context of a worldwide transition to new protocols and authorization approaches for bulk data transfers between WLCG sites.
Also, we report on our work to provide POSIX filesystems with different technologies: along with the bulk of data center storage, which is based on GPFS deployments, we provide CephFS as well as object and block storage service for data access requirements beyond WLCG use cases.
KOTO is a dedicated experiment to search for the New Physics through the ultra-rare decay $K_L^0 \rightarrow \pi^0 \nu \bar{\nu}$. In 2023, the $K_L^0$ beam intensity will be increased to collect $K_L^0$ decays faster. An upgrade of the data-acquisition system is hence introduced, including the expansion of the data throughput and the third-level trigger decision at the PC farm. The University of Chicago designed an electronic module with numerous high-speed optical links to transfer data from analog-to-digital converters, perform the event-building, and deliver complete events to the PC farm for the sophisticated level-3 trigger evaluation. The upgraded system can be simply expanded for more channel inputs and larger data throughput if needed in the future. The entire architecture and its performance will be presented.
The recent MODE whitepaper* proposes an end-to-end differentiable pipeline for the optimisation of detector designs directly with respect to the end goal of the experiment, rather than to intermediate proxy targets. The TomOpt python package is the first concrete endeavour in attempting to realise such a pipeline, and aims to allow the optimisation of detectors for the purpose of muon tomography with respect to both imaging performance and detector budget. This modular and customisable package is capable of simulating detectors which scan unknown volumes by muon radiography, using cosmic-ray muons to infer the density of the material. The full simulation and reconstruction chain is made differentiable, and an objective function including the goal of the apparatus as well as its cost and other factors can be specified. The derivatives of this loss function can be back-propagated to each parameter of the detectors, which can be updated via gradient descent until an optimal configuration is reached. Additionally, graph neural networks are shown to be applicable to muon tomography, both to improve volume inference and to help guide detector optimisation.
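To make the back-propagation idea concrete, the following is a minimal, self-contained PyTorch sketch of optimising detector parameters by gradient descent on a combined performance-plus-cost objective. The parameters and the loss are toy stand-ins invented for illustration; TomOpt's actual API, parametrisation and objective differ.

import torch

# Toy detector parameters: z-positions of two detector panels (hypothetical stand-ins).
params = torch.tensor([0.5, 1.5], requires_grad=True)

def loss_fn(p):
    # Stand-in objective: an "inference error" term that improves as the panels
    # are spread apart, plus a "budget" term that penalises instrumented length.
    inference_error = 1.0 / (1e-3 + (p[1] - p[0]) ** 2)
    cost = 0.1 * p.abs().sum()
    return inference_error + cost

opt = torch.optim.SGD([params], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(params)
    loss.backward()  # back-propagate the objective to the detector parameters
    opt.step()       # gradient-descent update of the design

print(params.detach())

In TomOpt the same loop runs through a differentiable simulation and reconstruction chain, so the derivatives reflect the full inference task rather than a toy expression.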
Studies of physics and detector performance for a possible experiment at a Muon Collider are attracting a lot of interest in the High Energy Physics community. Projections show that high-precision measurements are possible, as well as a large discovery potential for new physics. However, the presence of a large beam-induced background (BIB), generated by the decay of the muon beams, poses new computing and software challenges, ranging from event simulation to reconstruction algorithms.
This contribution will present the strategy adopted so far to overlay the beam-induced background on the physics event, and the algorithms currently employed to reconstruct events in the presence of BIB. Special attention is dedicated to track and jet reconstruction.
The increasing number of collaborators around the world also demands an easy-to-maintain and flexible infrastructure distributed across several countries. The solutions currently adopted will also be presented.
Background modelling is one of the main challenges of particle physics analyses at hadron colliders. Commonly employed strategies are the use of simulations based on Monte Carlo event generators or the use of parametric methods. However, sufficiently accurate simulations are not always available or may be computationally costly to produce in high statistics, leading to uncertainties that can limit the sensitivity of searches. On the other hand, parametric methods rely on the use of a functional form with free parameters to fit the observed data, which may bias the extraction of a potential signal.
A novel approach for non-parametric data-driven background modelling is presented, which addresses these issues for a broad class of searches and measurements [1]. This approach relies on a relaxed version of the event selection to estimate conditional probability density functions. Two different methods are provided for its implementation. The first is based on ancestral sampling and uses the data from the relaxed selection to obtain a graph of probability density functions of the relevant variables, accounting for the most significant correlations. A background model is generated by sampling events from this graph, before the full event selection is applied. This provides a robust implementation for cut-and-count based analyses. The strategy is further expanded in the second implementation, in which a generative adversarial network is trained to estimate the joint probability density function of the variables used in the analysis, conditioned on the variable used to blind the signal region. This training proceeds in the sidebands, and the conditional probability density function is interpolated into the signal region to estimate the background. The application of each implementation is presented and their performance is discussed.
[1] https://arxiv.org/abs/2112.00650
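To illustrate the ancestral-sampling implementation in concrete terms, here is a minimal numpy sketch under simplified assumptions (two toy variables and histogram-based density estimates; the variable names and data are hypothetical and this is not the code of Ref. [1]). The parent variable is drawn from its marginal density and the dependent variable from the conditional density, mirroring the graph-based sampling described above.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "relaxed selection" data with a correlation between x and y.
x = rng.normal(0.0, 1.0, 100_000)
y = 0.5 * x + rng.normal(0.0, 1.0, 100_000)

# Estimate p(x) as a 1D histogram and p(y|x) as row-normalised slices of a 2D histogram.
x_edges = np.linspace(-4.0, 4.0, 41)
y_edges = np.linspace(-4.0, 4.0, 41)
h_xy, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
p_x = h_xy.sum(axis=1) / h_xy.sum()
p_y_given_x = h_xy / np.clip(h_xy.sum(axis=1, keepdims=True), 1.0, None)

def sample(n):
    # Ancestral sampling: draw an x bin from p(x), then a y bin from p(y|x).
    events = np.empty((n, 2))
    ix = rng.choice(len(p_x), size=n, p=p_x)
    for k, i in enumerate(ix):
        iy = rng.choice(p_y_given_x.shape[1], p=p_y_given_x[i])
        events[k, 0] = rng.uniform(x_edges[i], x_edges[i + 1])  # smear within the bin
        events[k, 1] = rng.uniform(y_edges[iy], y_edges[iy + 1])
    return events

background_model = sample(5_000)
print(background_model.mean(axis=0))

The full event selection would then be applied to the sampled events, as in the first method described above.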
The PICO-60 C$_3$F$_8$ dark matter detector is a bubble chamber consisting of 52 kg of C$_3$F$_8$ operating at 2.45-keV and 3.29-keV thermodynamic thresholds, reaching exposures of 1404-kg-day and 1167-kg-day, respectively. The detector was located at SNOLAB, 2 km underground in Sudbury, Ontario in Canada. This experiment set the most stringent direct-detection constraints to date on the WIMP-proton spin-dependent cross-section at $2.5 \times 10^{-41}$ cm$^2$ for a WIMP mass of 25 GeV/c$^2$. The physics program of PICO bubble chambers will be presented in this talk, including the latest results from the PICO-60 detector, setting leading limits on the couplings for photon-mediated interactions using non-relativistic contact operators in an effective field theory framework. Leading limits for dark matter masses between 2.7 GeV/c$^2$ and 24 GeV/c$^2$ were set for couplings to the electromagnetic current through higher multipole interactions, such as anapole moment, electric, and magnetic dipole moments, and millicharge. The current status of the PICO-40L and PICO500 detectors will also be presented.
PROSPECT is a reactor antineutrino experiment designed to search for short-baseline sterile neutrino oscillations and to perform a precise measurement of the U-235 reactor antineutrino spectrum. The PROSPECT detector collected data at the High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory, with the ~4-ton volume covering a baseline range of 7 m to 9 m. To operate in this environment with tight space constraints, limited overburden, and the possibility of reactor-correlated backgrounds, the PROSPECT antineutrino detector incorporates design features that provide excellent background rejection. These include detector segmentation and the use of Li-6 doped liquid scintillator with high light yield, world-leading energy resolution, and good pulse-shape discrimination properties. This talk will describe the operations of PROSPECT at HFIR and report on the latest results from the experiment. Additionally, a flux of upscattered, sub-GeV dark matter would induce a characteristic diurnal sidereal modulation in PROSPECT. A dedicated search for this modulation is used to set new experimental constraints on sub-GeV dark matter exhibiting large interaction cross sections.
We present a particle physics model to explain the observed enhancement in the Xenon-1T data at an electron recoil energy of 2.5 keV. The model is based on a U(1) extension of the Standard Model where the dark sector consists of two essentially mass degenerate Dirac fermions in the sub-GeV region with a small mass splitting interacting with a dark photon. The dark photon is unstable and decays before the big bang nucleosynthesis, which leads to the dark matter constituted of two essentially mass degenerate Dirac fermions. The Xenon-1T excess is computed via the inelastic exothermic scattering of the heavier dark fermion from a bound electron in xenon to the lighter dark fermion producing the observed excess events in the recoil electron energy. The model can be tested with further data from Xenon-1T and in future experiments such as SuperCDMS.
Despite the lack of experimental confirmation of the Migdal effect, several underground direct dark matter experiments are exploiting this rare atomic phenomenon to extend their sensitivity to light WIMP-like candidates. However, this effect is yet to be observed in nuclear scattering. The Migdal in Galactic Dark mAtter expLoration (MIGDAL) experiment aims to make the first unambiguous particle detector-based observation of the Migdal effect.
An Optical Time Projection Chamber (OTPC) will be used to image ionisation tracks originating from the same vertex, one belonging to a nuclear recoil and the other to an electron, which is the Migdal event signature. The nuclear recoils will be generated inside the detector's gaseous volume by the scattering of fast neutrons from intense DT and DD generators, allowing the effect to be explored across a wide range of nuclear recoil energies. The OTPC is outfitted with two glass GEMs that enable high-gain operation in a 50-Torr CF4-based gas mixture, as well as a photomultiplier tube and a fast low-noise CMOS camera to collect light from the initial ionisation and avalanche processes, respectively. A charge readout consisting of an anode with 120 ITO strips is also included in the detector for timing information.
The MIGDAL OTPC configuration enables precise three-dimensional reconstruction of electron and nuclear recoil ionisation tracks and the use of low-pressure gas allows for the reconstruction of electron tracks down to 5 keV. The design of the experiment will be presented along with the results from end-to-end detailed simulations and estimates of signal and background yields, as well as the current status of activities at the Rutherford Appleton Laboratory's Neutron Irradiation Laboratory for Electronics (NILE), where the experiment will be hosted.
The quest for Dark Matter (DM) and its nature has been puzzling scientists for nearly a century. This puzzle has engendered theories that span nearly a hundred orders of magnitude in mass scale and are widely contrasting in nature. It has also motivated decades of experimental efforts correspondingly diverse in their target masses, observables, technologies and interpretations. The last two decades have seen no fewer than twenty experiments designed to directly detect the Weakly Interacting Massive Particle (WIMP) paradigm of DM alone. Their sensitivities span five orders of magnitude, and they use ionization, scintillation, heat, sound, images and several combinations of these as their detection methods. In addition, WIMPs are also searched for in indirect-detection and collider experiments. This labyrinth of theories and experiments makes their analysis and combination a daunting task. The Dark Matter Data Center (DMDC) is an ORIGINS Excellence Cluster initiative, supported by the Max Planck Computing and Data Facility (MPCDF). It aims at bringing together the large amount of recorded data and theories in a unified platform, making them easily accessible to the DM community. It offers a repository where data, methods and code are clearly presented in a unified interface for comparison, reproduction, combination and analysis. The DMDC is a forum where experimental collaborations can directly publish their data, and phenomenologists the implementations of their models, in accordance with Open Science principles. Alongside the repositories, it also offers easy online visualization of the hosted data. It offers online simulation of signal predictions for experiments using model data supplied by the users, all in a friendly web-based GUI. The DMDC also hosts guidance tools from the collaborations illustrating the usage and analysis of their data through Binders that run online and support all popular programming platforms. It hosts a continuously growing compendium of ready-to-use, copy-pastable code examples for inference and simulations. It can also provide support and computational power for the comparison of models with experimental observations, as well as for the combination of these results using modern and robust statistical tools through similar Binders. We are already online, with more databases and features being added continuously! Find us at https://www.origins-cluster.de/odsl/dark-matter-data-center
The Beam Dump Experiment (BDX) at Jefferson Laboratory (JLab) is an electron-beam, thick-target experiment to search for Light Dark Matter (LDM) particles in the MeV-GeV mass range. BDX will exploit the high-intensity 10.6 GeV e$^-$ beam from the CEBAF accelerator impinging on the beam dump of experimental Hall A, collecting up to 10$^{22}$ electrons-on-target in a few years' time. Any LDM particles produced by the interaction of the primary e$^-$ beam with the beam dump will be detected by measuring their scattering inside a detector, to be installed in a dedicated underground facility located 20 m downstream. The space between the beam dump and the detector will be filled with heavy shielding to suppress the high-energy component of the beam-related backgrounds. The BDX detector consists of a CsI(Tl) electromagnetic calorimeter (ECAL) surrounded by a hermetic veto system. The expected signature of an LDM interaction in the ECAL is an O(100 MeV) electromagnetic shower with no activity in the surrounding active veto counters.
After an intense phase of R&D studies and simulations with on-site background measurements conducted to validate the corresponding results, the BDX proposal received full approval by the 2019 JLab Program Advisory Committee. The collaboration is actively working on the design of the new experimental facility for housing the experiment.
A small-scale version of the full experiment, BDX-MINI, has been built and operated with a lower-energy beam, where the existing soil between the beam dump and the detector provided adequate shielding from known particles produced in the beam dump. The BDX-MINI detector, installed in a well located 22 m downstream of the Hall A beam dump, consisted of a PbWO$_4$ electromagnetic calorimeter surrounded by a layer of tungsten shielding and two hermetic plastic-scintillator veto systems. Despite the small interaction volume, the large accumulated charge of 2.2$\cdot 10^{21}$ EOT allowed the BDX-MINI measurement to set competitive exclusion limits on the LDM parameter space, comparable to those reported by larger-scale efforts.
In this talk, after a brief introduction to the LDM physics case, we will present an overview of BDX, discussing the main items of the R&D and design phase of the experiment. We will then show the results obtained with the BDX-MINI experiment, focusing on a few key aspects of the associated experimental campaign and data analysis effort.
The FASER experiment is a new small and inexpensive experiment that is being placed 480 meters downstream of the ATLAS experiment at the CERN LHC. FASER is designed to discover dark photons and other light and very weakly-interacting particles that are produced in the far-forward region, outside of the ATLAS detector acceptance. The experiment has been successfully constructed and installed and will take data during Run-3 of the LHC. This talk will present the physics prospects, detector design, and commissioning status of FASER.
NUSES is a space mission project promoted by the Gran Sasso Science Institute (GSSI) in collaboration with Thales Alenia Space Italy and the Italian National Institute for Nuclear Physics (INFN), with the aim of investigating cosmic radiation, astrophysical neutrinos, the Sun-Earth environment, space weather and possible signals of magnetosphere-ionosphere-lithosphere coupling (MILC) phenomena.
Besides its wide scientific program, the NUSES mission will be a technological pathfinder for the development and testing of innovative technologies and observational strategies for future missions.
The NUSES satellite bus will host two payloads: TERZINA and ZIRÈ. The first will consist of a compact optical Cherenkov telescope based on state-of-the-art Silicon Photomultipliers (SiPMs) for the detection of astrophysical neutrinos interacting with the Earth's atmosphere and generating upgoing extensive air showers. TERZINA will also be instrumental for the characterization of the Cherenkov signals due to cosmic-ray-induced showers and of the night-sky background. The second payload, ZIRÈ, will be tailored to provide measurements of the flux intensity of electrons, protons and light cosmic-ray nuclei with energies up to several hundreds of MeV. ZIRÈ will also be equipped with an innovative X- and gamma-ray telescope prototype, designed to operate in the MeV energy range.
In this talk, the status of the NUSES project design will be discussed along with the scientific and technological objectives of the mission.
The proposed ECCE detector at the future Electron-Ion-Collider (EIC) at Brookhaven National Laboratory is a physics-driven design concept, meeting and exceeding the EIC physics program requirements.
To gain further insights on the partonic structure of the nucleon, jets in the hadron-going (forward) direction provide an excellent probe.
They provide a strong handle on parton kinematics in e-p and e-A collisions and their internal structure can further advance our understanding of the complex hadronization process as well as basic principles of QCD.
Thus, ECCE features highly granular electromagnetic and hadronic calorimetry, as well as high resolution tracking and excellent PID detectors to enable detailed studies of jets and their components.
For this, the appropriate mix of novel and established detector technologies has been selected, and its performance has been studied in detail.
In this talk, the performance of the forward detectors and the resulting physics capabilities will be presented, with particular focus on the interplay of tracking and calorimetry.
The LUXE experiment aims at studying high-field QED in electron-laser and photon-laser interactions, with the 16.5 GeV electron beam of the European XFEL and a laser beam with a power of up to 350 TW. The experiment will measure the spectra of electrons, positrons and photons in expected ranges of $10^{-3}$ to $10^9$ particles per 1 Hz bunch crossing, depending on the laser power and focus. These measurements have to be performed in the presence of a high background of low-energy radiation. To meet these challenges, for the high-rate electron and photon fluxes the experiment will use Cherenkov radiation detectors, scintillator screens and sapphire sensors, as well as lead-glass monitors for backscattering off the beam dump. A four-layer silicon-pixel tracker and a compact electromagnetic tungsten calorimeter with GaAs sensors will be used to measure the positron spectra. The layout of the experiment and the expected performance under the harsh radiation conditions will be presented. Beam tests of the Cherenkov detector and the electromagnetic calorimeter were recently performed at DESY, and results will be presented. The experiment received stage-0 critical approval (CD0) from the DESY management and is in the process of preparing its technical design report (TDR). It is expected to start running in 2024/5.
In recent years, crowdfunding platforms have gained popularity as a way to raise funds for various endeavors. This talk discusses the use of crowdfunding as a non-traditional way to finance physics outreach projects. Such tools can provide much-needed flexibility to projects and serve as a platform to spread the word about your project. The talk is based on first-hand experience using such tools and includes a discussion of important advice and common pitfalls.
Following the first "Sustainable HEP" Workshop, hosted virtually by CERN on 28-30 June 2021, members of our community have been compiling a white paper that aims to raise awareness of the environmental impact of our work, to suggest positive changes that we can make to our working practices and the infrastructure upon which they rely, and to identify the implications that these changes could have for social justice. This talk will present the first version of this document, summarise its key recommendations, and discuss the possibilities for its future development.
When and where is it convenient to start working on raising awareness of gender issues? Our answer is... early! High school is definitely a good start! And since the need is in the schools, we, the Italian component (CNR, the Italian National Research Council, and INFN, the Italian National Institute for Nuclear Physics) of the GENERA network, decided to go there.
We have been acting for a few years already as messengers of this conviction, by promoting a school competition devoted to a consideration of the role of women in science, and particularly in physics. Our idea is that outreach activities can raise awareness and knowledge through the active involvement of students, and that this is the way to change the culture and remove stereotypes. The aim is that new and more aware generations have the chance to make choices that are more appropriate and based on their real skills and aspirations, without being influenced by hidden prejudices and stereotypes that still obstruct the choice of STEM faculties by girls. Over these years we organized 3 competitions, with 226 videos, more than 100 schools and a thousand students involved.
The students have been asked to produce a video on three main subjects: working on the stereotypes and prejudices, well established in the social and cultural background, that condition the choices of new generations and weigh especially on the role and image of women in STEM subjects and scientific research; encouraging young women to undertake a career in the world of science; and getting to know women researchers, exploring aspects of their private and professional lives and pointing out the important and often distinctive contribution of women to scientific progress.
The students had the opportunity to produce videos ranging over a wide variety of disciplines and competencies, from the stories of women scientists in the history of their countries, to the use of the English language, to movie production, to the physics itself, and more.
For the organizers, all the competitions have been amazing experiences: watching the videos and evaluating them, breathing the same air as these youngsters who become actors, directors, writers and interviewers, sharing their dreams and enthusiasm, but also sometimes perceiving their fear of disappointing expectations. This experience underlines how the young generation can be the real driving force behind a cultural change in which women can freely express an interest in science, gain confidence in their scientific abilities, and ultimately decide to pursue scientific careers, just as men may decide to pursue careers in the humanities. At the same time, we as researchers have to promote this cultural change, starting from ourselves, also by using our outreach activities in a different way, as in this case. The scope of the competition is the transition to an environment for learning, teaching and research in physics that is equally attractive and supportive to all genders, at each stage of their education and career path.
In this framework, the GENERA Network (https://www.genera-network.eu/), which originated from the EU-funded GENERA project and combines physicists and sociologists, aims to coordinate and improve gender equality policies in physics research organizations in Europe and worldwide.
High Energy Physics is strongly aligned with cleverness and masculinity. "Not surprisingly, physics does an extremely good job at keeping people out" (Anna Danielsson 2022). Therefore, inviting women and minorities is not enough. We need to understand and overcome the gendered, classed and raced politics of knowledge-producing processes in STEM. In this talk research results of mainly qualitative studies will be presented. We reflect on the power of norms and exclusions in the culture, representation, and teaching of physics. We look, for example, at communications in research labs, educational settings at universities, physicists' behavior at conferences, and contents of physics textbooks. In addition, we discuss strategies to value and welcome diversity and equity in HEP.
In recent decades much attention has been given to women's underrepresentation in science and engineering (SE), particularly in positions of leadership.
In this talk dominant theories about women's underrepresentation in SE are reviewed, in light of evidence. New theories are then presented together with examples of new research questions and findings. The talk concludes with a discussion of the implications of the research evidence in terms of women's interest and participation in SE education and professions, and women's advancement to SE leadership roles.
Research institutions and organisations are paying increasing attention to diversity and inclusion.
In the talk we will discuss why diversity and inclusion targets should not be detached from actions to increase equal opportunities for all.
Panel discussion with all speakers
A significant goal of high-energy nuclear collisions is to determine the Quantum Chromodynamics (QCD) phase diagram of strongly interacting matter. The most experimentally accessible way to characterize the QCD phase diagram is to scan in temperature (T) and baryon chemical potential (\mu_B). Hadronic matter exists in a state where the fundamental constituents, quarks and gluons, are confined in composite particles. At high energy densities, QCD predicts a phase transition from a hadronic gas to a state of deconfined matter, the quark-gluon plasma (QGP). In this hot and dense state, QCD matter melts into its constituent quarks and gluons, and the strong interaction becomes dominant. In addition, a chiral phase transition is predicted. QCD-based models predict a first-order phase transition and the existence of a critical point (CP) at higher \mu_B. However, the exact locations of the first-order phase transition and of the CP are still unknown. Experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have provided compelling evidence for the formation of QGP matter close to \mu_B = 0. In order to study the QCD phase structure experimentally as a function of T and \mu_B, the Beam Energy Scan (BES) program at RHIC was proposed. Several collision energies are used to create systems described by various initial coordinates of T and \mu_B. The experimental goals of the BES program are the following: search for threshold energies for the QGP signatures, search for signatures of a first-order phase transition, search for a CP, and search for possible signatures of chiral symmetry restoration. In this talk we will present the current status of the BES program at the STAR experiment.
Heavy-ion collisions are a powerful tool to probe the phase diagram of strongly interacting matter. An issue of special interest is the transition between the hadronic gas and the quark-gluon plasma, especially the possible presence of a critical point. One of the methods of the critical point search is the analysis of fluctuations and correlations of produced particles: an increase in the fluctuation signal is expected in the presence of the critical point. Located at the CERN SPS, NA61/SHINE is a fixed-target experiment performing a two-dimensional scan by colliding different systems (p+p, Be+Be, Ar+Sc, Xe+La, Pb+Pb) at different center-of-mass energies (5.1-16.8/17.3 GeV per nucleon pair). In this contribution, the latest NA61/SHINE results on intensive quantities of multiplicity and net-charge fluctuations in p+p and ion+ion interactions, and their comparison with model predictions, will be shown.
We present the measurement of two-particle angular correlations in hadronic $e^{+}e^{-}$ collision data collected by the Belle detector at KEKB. Two high-statistics datasets, with center-of-mass energies $\sqrt{s} = 10.52$ GeV ($89.5~\mathrm{fb}^{-1}$) and $10.58$ GeV on the $\Upsilon(4S)$ resonance ($333.2~\mathrm{fb}^{-1}$), are analyzed. In various heavy-ion and proton-proton collisions, "ridgelike signals" have been reported in correlation-function analyses; the physical origin of these flow-like signals is still under debate. The enhancement is a phenomenon in which charged-track pairs tend to have small azimuthal angle differences, with the azimuthal correlation extending to large pseudorapidity differences between the two tracks. The study of the clean $e^{+}e^{-}$ collision system is an important test for theories attributing ridgelike correlations to initial-state effects. The results are compared with Monte Carlo simulations to provide a qualitative physical understanding. Moreover, the measurement constrains phenomenological models of parton fragmentation in the low-energy regime. The study of decays of the $b\bar{b}$ bound-state resonance also sheds light on the formation of particular correlation structures.
The first measurement of two-particle angular correlations of charged particles emitted in high-energy $e^+e^-$ annihilation up to $\sqrt{s}=$ 209 GeV is presented. The archived hadronic $e^+e^-$ scattering data at center-of-mass energies of 91-209 GeV were collected with the ALEPH detector at LEP between 1992 and 2000. The correlation functions are measured over a broad range of pseudorapidity and full azimuth as a function of charged-particle multiplicity, for the first time with LEP2 data. At 91 GeV, no significant long-range correlation is observed in either the lab-coordinate analysis or the thrust-coordinate analysis, where the latter is sensitive to a medium expanding transverse to the color string between the outgoing $q\bar{q}$ pair from Z boson decays. The associated yield distributions in both analyses are in better agreement with the prediction from the PYTHIA v6.1 event generator than with HERWIG v7.1.5. These results provide new insights into showering and hadronization modeling. They also serve as an important reference for the long-range correlations observed in proton-proton, proton-nucleus, and nucleus-nucleus collisions. Results with $e^+e^-$ data at collision energies above 91 GeV will also be presented.
This data set provides a higher event-multiplicity reach, up to around 50, and a chance to sample different underlying hard-scattering processes. Studies of the high-energy annihilation data will expand the search for collective phenomena in $e^+e^-$ collisions to a new phase space, with potential for discovery.
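For readers unfamiliar with the technique, a minimal sketch of the standard two-particle correlation analysis follows: the same-event pair distribution in $(\Delta\eta, \Delta\phi)$ is divided by a mixed-event background to remove acceptance effects. The function name, binning, and event format are illustrative assumptions, not the collaborations' actual code.

    import numpy as np

    def pair_hist(events, n_eta=24, n_phi=24):
        """Accumulate the (delta-eta, delta-phi) distribution over all
        distinct track pairs; `events` is a list of (eta, phi) array pairs."""
        h = np.zeros((n_eta, n_phi))
        for eta, phi in events:
            eta, phi = np.asarray(eta), np.asarray(phi)
            deta = (eta[:, None] - eta[None, :]).ravel()
            dphi = ((phi[:, None] - phi[None, :] + np.pi)
                    % (2 * np.pi) - np.pi).ravel()
            keep = ~np.eye(len(eta), dtype=bool).ravel()  # drop self-pairs
            hh, _, _ = np.histogram2d(deta[keep], dphi[keep],
                                      bins=[n_eta, n_phi],
                                      range=[[-3.0, 3.0], [-np.pi, np.pi]])
            h += hh
        return h

    # S = pair_hist(same_events)    # signal: pairs within one event
    # B = pair_hist(mixed_events)   # background: tracks mixed across events
    # R = S / B                     # acceptance-corrected correlation function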
Collective behaviour of final-state hadrons and multiparton interactions are studied in high-multiplicity ep scattering at a centre-of-mass energy $\sqrt{s}$ = 318 GeV with the ZEUS detector at HERA. Two- and four-particle azimuthal correlations, as well as multiplicity, transverse momentum, and pseudorapidity distributions for charged-particle multiplicities $N_{\rm ch} \geq 20$ are measured. The dependence of the two-particle correlations on the virtuality of the exchanged photon shows a clear transition from photoproduction to neutral current deep inelastic scattering. For the multiplicities studied, neither the measurements in photoproduction processes nor those in neutral current deep inelastic scattering indicate significant collective behaviour of the kind observed in high-multiplicity hadronic collisions at RHIC and the LHC. Comparisons of PYTHIA predictions with the measurements in photoproduction strongly indicate the presence of multiparton interactions from hadronic fluctuations of the exchanged photon.
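Such two- and four-particle azimuthal correlations are typically computed with the standard Q-vector (cumulant) formulas of Bilandzic, Snellings and Voloshin; a hedged sketch, with illustrative names and no claim to match the ZEUS implementation:

    import numpy as np

    def qvec(phis, n):
        """Flow vector Q_n = sum_k exp(i n phi_k) over one event's tracks."""
        return np.sum(np.exp(1j * n * np.asarray(phis)))

    def corr2(phis, n=2):
        """Single-event <2> = <exp(in(phi_1 - phi_2))> over distinct pairs."""
        M = len(phis)
        Qn = qvec(phis, n)
        return (abs(Qn) ** 2 - M) / (M * (M - 1))

    def corr4(phis, n=2):
        """Single-event <4> over distinct quadruplets (requires M >= 4)."""
        M = len(phis)
        Qn, Q2n = qvec(phis, n), qvec(phis, 2 * n)
        num = (abs(Qn) ** 4 + abs(Q2n) ** 2
               - 2 * (Q2n * np.conj(Qn) ** 2).real
               - 4 * (M - 2) * abs(Qn) ** 2 + 2 * M * (M - 3))
        return num / (M * (M - 1) * (M - 2) * (M - 3))

    # event-averaged cumulants: c_n{2} = <<2>>,  c_n{4} = <<4>> - 2 <<2>>^2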
A possible new flagship neutrino experiment in Europe opens up by exploiting a unique opportunity long hidden at the Chooz site, Europe's historical and most powerful reactor-neutrino science site. The "SuperChooz" project benefits from the existence of two caverns that formerly hosted the Chooz-A nuclear reactor complex, built in the 1960s. The Chooz-A caverns are becoming vacant upon completion of its dismantling. They hold a total volume of up to 50,000 m$^3$, directly comparable to the size of the SuperKamiokande detector (Japan). Their potential use for fundamental science is therefore under active discussion with EDF, thus starting the pathfinder exploration era. The SuperChooz caverns, combined with the existing ~1 km baseline to the two N4 Chooz PWR nuclear reactors, the most powerful of their kind, make this site a unique asset worldwide. Experimentally, the remaining challenge is the modest overburden (of order 100 m of rock). However, the novel LiquidO technology, born as a byproduct of the Double Chooz experiment at the same site, heralds the potential for unprecedented active background rejection of up to two orders of magnitude, thus providing grounds for considering the feasibility of a hypothetical SuperChooz experiment. The rationale of the experiment will be highlighted in this talk for the first time, as its first official release. The project aims to address some of the most fundamental symmetries behind the Standard Model (studies under completion), including a design that may open key synergies that could boost the sensitivities of other neutrino flagship experiments such as DUNE (US), JUNO (China) and HyperKamiokande (Japan).
Reduction of Tl-208 backgrounds for Zr-96 neutrinoless double beta decay experiment using topological information of Cherenkov light
ZICOS is a future experiment for neutrinoless double beta decay using $^{96}$Zr nuclei. In order to achieve a sensitivity beyond $10^{27}$ years, ZICOS will use tons of $^{96}$Zr and needs to reduce the $^{208}$Tl background, as observed by KamLAND-Zen, by one order of magnitude. For this purpose, we have developed a new technique to distinguish signal from background using the topology of Cherenkov light. We have measured this topology directly using the HUNI-ZICOS detector, and the results clearly indicate that the topology is effective even for 1 MeV electrons. We have also developed a pulse shape discrimination method to identify the PMTs that receive Cherenkov light in the liquid scintillator. To confirm the above technique, we reproduced beta-gamma events, mimicking the $^{208}$Tl beta decay scheme, using a $^{60}$Co source with the UNI-ZICOS detector.
Here we will report the current status and some results obtained from recent measurements, and will also explain a plan to measure the half-life of the two-neutrino double beta decay of $^{96}$Zr nuclei.
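As a generic illustration of the pulse-shape discrimination mentioned above (the actual ZICOS algorithm may differ): Cherenkov light is prompt, while scintillation light carries a slower decay component, so a prompt-charge fraction can separate the two populations. Gate lengths below are placeholder assumptions.

    import numpy as np

    def prompt_fraction(waveform, t, t0, prompt_ns=5.0, total_ns=100.0):
        """Charge in a short prompt gate divided by the total charge.
        Cherenkov-dominated hits (prompt light) cluster at high values;
        scintillation-dominated hits, with slower decay, at lower ones.
        Gate lengths are placeholders, not the ZICOS values."""
        gate_p = (t >= t0) & (t < t0 + prompt_ns)
        gate_t = (t >= t0) & (t < t0 + total_ns)
        total = waveform[gate_t].sum()
        return waveform[gate_p].sum() / total if total > 0 else 0.0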
A core-collapse supernova (SN) offers an excellent astrophysical laboratory to test non-zero neutrino magnetic moments. In particular, the neutronization-burst phase, which lasts for a few tens of milliseconds post-bounce, is dominated by electron neutrinos and can offer exceptional discovery potential for transition magnetic moments. We simulate the neutrino spectra from the burst phase in forthcoming neutrino experiments such as the Deep Underground Neutrino Experiment (DUNE) and Hyper-Kamiokande (HK), taking into account spin-flavour conversions of SN neutrinos caused by interactions with ambient magnetic fields. We find that the neutrino transition magnetic moments which can be explored by these experiments for a galactic SN are one to several orders of magnitude better than the current terrestrial and astrophysical limits. Additionally, we discuss how this might shed light on three important neutrino properties: (a) the Dirac/Majorana nature, (b) the neutrino mass ordering, and (c) the neutrino mass-generation mechanism.
The MoEDAL experiment, deployed at IP8 on the LHC ring, was the first dedicated search experiment to take data at the LHC, in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, and massive slowly moving charged particles in p-p and heavy-ion collisions. The MoEDAL detector will be reinstalled for the LHC's Run 3 to continue the search for electrically and magnetically charged HIPs.
An important upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), approved by CERN's Research Board in December 2021, is now the LHC's newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly charged particles with charge, or effective charge, as low as $10^{-3}e$ (where $e$ is the electron charge). In conjunction with MoEDAL's trapping detector, MAPP also gives unique sensitivity to extremely long-lived charged particles, as well as some sensitivity to long-lived neutral particles.
In this talk we will describe the design, construction and installation of the MAPP detector, and briefly touch on the physics reach of this apparatus. Additionally, we will very briefly report on the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High Luminosity LHC (HL-LHC). We envisage that this detector will be deployed in the UGC1 gallery near IP8. This phase of the experiment is designed to maximize MoEDAL-MAPP's sensitivity to very long-lived neutral messengers of physics beyond the Standard Model.
Following the resolution of the positronium lifetime puzzle in the early 2000s, a variety of detectors have been developed, in both accelerator-based and non-accelerator-based experiments, to study positronium physics beyond the Standard Model. The KNU Advanced Positronium Annihilation Experiment (KAPAE) was constructed to study rare decays of positronium, to search for C, CP and CPT violation in QED, and to search for new particles. The KAPAE detector consists of 200 finely segmented BGO scintillators in a compact arrangement that differs from previously reported detectors. Signal acquisition is triggered by a positron signal in the newly proposed trigger section and read out with 392 channels of SiPMs. We show the performance of the assembled detector and its potential for positronium physics studies. Furthermore, we report the results of auxiliary studies on performance improvements and upgrades that may offer advantages for positronium physics.
A key focus of the physics program at the LHC is the study of head-on proton-proton collisions. However, an important class of physics can be studied in cases where the protons narrowly miss one another and remain intact. In such cases, the electromagnetic fields surrounding the protons can interact, producing, for example, high-energy photon-photon collisions. Alternatively, interactions mediated by the strong force can also result in intact, forward-scattered protons, providing probes of quantum chromodynamics (QCD).
In order to aid identification and provide unique information about these rare interactions, instrumentation to detect and measure protons scattered through very small angles is installed in the beam-pipe far downstream of the interaction point. We describe the ATLAS Forward Proton `Roman Pot' Detectors (AFP and ALFA), including their performance to date and expectations for the upcoming LHC Run 3, covering Tracking and Time-of-Flight Detectors as well as the associated electronics, trigger, readout, detector control and data quality monitoring. The physics interest, beam optics and detector options for extension of the programme into the High-Luminosity LHC (HL-LHC) era are also discussed.
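For double-tagged events, the kinematics of the centrally produced system follow directly from the fractional momentum losses $\xi_1$, $\xi_2$ measured for the two forward protons:

\[
M_X = \sqrt{\xi_1 \xi_2 s}, \qquad y_X = \frac{1}{2} \ln\frac{\xi_1}{\xi_2},
\]

where $\sqrt{s}$ is the pp centre-of-mass energy; matching $M_X$ and $y_X$ to the system seen in the central detector is the basic handle these proton taggers provide.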
Fast timing detectors have become increasingly important for high-energy physics and other technological applications, and their development is crucial for several aspects of the High Luminosity LHC program. The CMS Precision Proton Spectrometer (PPS), operating at the LHC, makes use of 3D silicon tracking stations to measure the kinematics of protons scattered in the very forward region, as well as timing detectors based on planar single-crystal CVD diamond to measure the proton time-of-flight with high precision. The time information is used to reconstruct the longitudinal position of the proton interaction vertex and to suppress pile-up background. Special movable vacuum chambers placed in the LHC beam pipe, the Roman Pots, allow the PPS detectors hosted inside to be moved close to the circulating beams. A novel architecture with two diamond sensors read out in parallel by the same electronic channel has been used to enhance the timing performance of the detector. A dedicated amplification and readout chain has been developed to sustain particle rates of ~1 MHz/channel. The PPS timing detector has demonstrated its capability to reconstruct the interaction vertex and to suppress pile-up background. In Run 2 the detectors were exposed to a highly non-uniform irradiation, with local peaks above $10^{16}\,\mathrm{n_{eq}/cm^2}$; similar values are expected in Run 3. LHC data and subsequent test-beam results show that the observed radiation damage led only to a moderate decrease of the detector timing performance. After a description of the PPS detector, the Run 2 performance will be reported, including recent studies of radiation effects. The timing system has been upgraded and new detector packages are currently being installed, with the goal of reaching an ultimate timing resolution of better than 30 ps on protons in the TeV energy range.
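To illustrate the vertex reconstruction from proton time-of-flight (indicative arithmetic only): with arrival times $t_A$ and $t_B$ measured on opposite sides of the interaction point, $z_{\rm vtx} = c\,(t_A - t_B)/2$, so a per-proton resolution $\sigma_t$ translates into $\sigma_z = c\,\sigma_t/\sqrt{2}$.

    # indicative arithmetic only; 30 ps is the target resolution quoted above
    c = 299_792_458.0        # speed of light, m/s
    sigma_t = 30e-12         # per-proton timing resolution, s

    # z_vtx = c (t_A - t_B) / 2 for protons timed on opposite sides of the IP;
    # with independent arms, sigma(t_A - t_B) = sqrt(2) * sigma_t
    sigma_z = c * (2 ** 0.5) * sigma_t / 2
    print(f"vertex-z resolution ~ {sigma_z * 1e3:.1f} mm")   # ~6.4 mm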
The Precision Proton Spectrometer (PPS) started operating in 2016 and has collected more than 110 fb$^{-1}$ of data over the course of LHC Run 2, now fully available for physics analysis. The talk will discuss the key features of the PPS alignment and optics calibrations, developed from scratch. The reconstructed proton distributions, the performance of the PPS simulation and, finally, the validation of the full reconstruction chain with physics data (dilepton events) will be shown. The upcoming Run 3 will bring new opportunities for measurements with PPS, which will also be discussed.
Flavor-changing neutral current (FCNC) processes such as $b \to s\ell\ell$ do not occur at tree level in the Standard Model (SM), which makes them sensitive probes of physics beyond the Standard Model (BSM). Intriguing BSM hints in $b \to s$ transitions have been observed by multiple flavor physics experiments: LHCb, Belle, BaBar, and Belle II. We have upgraded the EvtGen event generator to model $B\to K^* \ell^+ \ell^-$ with improved SM decay amplitudes and with amplitudes for possible BSM contributions, implemented in the operator product expansion in terms of Wilson coefficients. The upgraded generator can be used to investigate the experimental sensitivity to the most general BSM signal resulting from dimension-six operators, with properly simulated BSM scenarios, interference between SM and BSM amplitudes, resonance effects, correlations between different BSM observables, and acceptance bias. We demonstrate the prospects for improved measurements with $B\to K^* \ell \ell$ decays from the expected 50 ab$^{-1}$ dataset of the Belle II experiment using a four-dimensional unbinned maximum likelihood fit. We show that $\Delta$-observables mitigate uncertainties due to QCD effects and appear ideally suited for Belle II with the large data sets expected in the next decade. Belle II also has excellent sensitivity to New Physics (NP) in the Wilson coefficients $C_7$ and $C_7'$, which contribute at low $q^2$ in the di-electron channel.
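For reference, the operator-product-expansion framework referred to here describes $b \to s\ell\ell$ transitions via the standard effective Hamiltonian

\[
\mathcal{H}_{\rm eff} = -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*} \sum_i \left( C_i \mathcal{O}_i + C_i' \mathcal{O}_i' \right),
\]

with, e.g.,

\[
\mathcal{O}_9 = \frac{\alpha_e}{4\pi} (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \ell), \qquad
\mathcal{O}_7 = \frac{e}{16\pi^2}\, m_b (\bar{s}\sigma_{\mu\nu} P_R b) F^{\mu\nu},
\]

the primed operators obtained by $P_L \leftrightarrow P_R$; BSM physics then enters as shifts in the Wilson coefficients $C_i$.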
Flavour physics represents a unique test bench for the Standard Model (SM). New analyses performed at the LHC experiments are now providing unprecedented insights into CKM metrology and new results for rare decays. The CKM picture can provide very precise SM predictions through global analyses. In addition, the Unitarity Triangle analysis can be exploited to constrain the parameter space of possible New Physics (NP) scenarios, using a model-independent parametrisation to combine all of the available experimental and theoretical information on $\Delta F = 2$ processes. We present here the results of the latest global SM and NP analyses performed by the UTfit collaboration, including the most up-to-date inputs from experiments, lattice QCD and phenomenological calculations.
The study of high-$p_T$ tails at the LHC can be a complementary probe to low-energy observables when investigating the flavour structure of the Standard Model and its extensions.
Motivated by the $B$ anomalies, we study the interplay between low-energy observables and both charged and neutral current Drell-Yan measurements, and their implications for semileptonic interactions.
The Mathematica package "HighPT" allows one to do so within a unified and consistent framework, yielding a likelihood function that includes not only high-$p_T$ and flavour observables but also the EW pole and Higgs observables, thus making combined fits straightforward.
We discuss such combined analyses in the Effective Field Theory approach.
The Drell-Yan processes $pp\to\ell\nu$ and $pp\to\ell\ell$ at high transverse momentum can provide important probes of semileptonic transitions relevant to flavor physics, complementary to the commonly used low-energy observables. We parametrize generic New Physics (NP) contributions to these processes and derive the corresponding bounds by recasting the latest ATLAS and CMS (Run 2) searches for mono- and di-lepton resonances. We focus in particular on the validity of the Effective Field Theory (EFT) approach in this regime by comparing the limits obtained for specific tree-level mediators with those for their EFT equivalents. The analyses presented in this talk are performed using {\tt HighPT}, a new {\tt Mathematica} package for the automatic extraction of high-$p_T$ bounds.
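The logic of such recasts can be sketched as follows (a toy illustration, not the {\tt HighPT} API; all numbers are hypothetical): for a dimension-six coefficient $C$, the expected yield in each high-mass bin is a quadratic polynomial $N(C) = N_{\rm SM} + C\,N_{\rm int} + C^2 N_{\rm quad}$, from which a $\chi^2$ against the observed counts yields the bound.

    import numpy as np

    # toy per-bin yields (hypothetical numbers, not a real recast)
    n_sm   = np.array([120.0, 45.0, 12.0])   # SM prediction per mass bin
    n_int  = np.array([ 30.0, 18.0,  8.0])   # SM-NP interference (linear in C)
    n_quad = np.array([ 25.0, 20.0, 15.0])   # |NP|^2 term (quadratic in C)
    n_obs  = np.array([118.0, 47.0, 11.0])   # observed counts

    def chi2(c):
        pred = n_sm + c * n_int + c**2 * n_quad
        return np.sum((n_obs - pred) ** 2 / pred)  # Gaussian approx., stat. only

    # 95% CL interval: scan for delta(chi2) < 3.84 relative to the minimum
    cs = np.linspace(-1, 1, 2001)
    vals = np.array([chi2(c) for c in cs])
    allowed = cs[vals - vals.min() < 3.84]
    print(f"C in [{allowed.min():.3f}, {allowed.max():.3f}] at 95% CL")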
The abundant production of beauty and charm hadrons in the $5\cdot 10^{12}$ Z boson decays expected at FCC-ee offers outstanding opportunities in flavour physics that exceed those available at Belle II by a factor of twenty and are complementary to the LHC heavy-flavour programme. A wide range of measurements will be possible in heavy-flavour spectroscopy, rare decays of heavy-flavoured particles and CP-violation studies, which will benefit from the low-background experimental environment, the high Lorentz boost, and the availability of the full spectrum of hadron species. The huge data samples of the Tera-Z phase also open the possibility of much improved determinations of tau-lepton properties -- lifetime, leptonic and hadronic widths, and mass -- allowing for important tests of lepton universality. In addition, via the measurement of the tau polarisation, FCC-ee can access a precise determination of the neutral-current couplings of electrons and taus. These measurements present strong experimental challenges to match, as far as possible, the statistical uncertainties, $O(10^{-5})$, raising strict detector requirements. This contribution will present an overview of the broad potential of the FCC-ee flavour physics programme, together with some preliminary results from recent analyses.
Most analyses of rare meson decays in the literature assume that neutrinos are Dirac particles and consequently do not consider the possibility of lepton-number-violating interactions. I will present efficient strategies that would allow experimental collaborations in the future to give us insights into whether footprints of Majorana neutrinos might be present in their data.
We discuss a strategy to study QCD non-perturbatively up to very high temperatures by Monte Carlo simulations on the lattice. It allows one to investigate not only the thermodynamic properties of the theory but also other interesting thermal features. As a first concrete application, we compute the flavour non-singlet meson screening masses, and we present the results of Monte Carlo simulations at 12 temperatures covering the range from $T \sim 1$ GeV up to $\sim 160$ GeV in the theory with three massless quarks. On the one hand, chiral symmetry restoration manifests itself in our results through the degeneracy of the vector and axial-vector channels and of the scalar and pseudoscalar ones; on the other hand, we observe a clear splitting between the vector and pseudoscalar screening masses up to the highest investigated temperature. A comparison with the high-temperature effective theory shows that the known 1-loop order in the perturbative expansion does not provide a satisfactory description of the non-perturbative data up to the highest temperature considered.
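For context, screening masses are extracted from the exponential fall-off of spatial meson correlators, and in the free (infinite-temperature) limit the fermionic boundary conditions fix their value to twice the lowest Matsubara frequency:

\[
C_\Gamma(x_3) \xrightarrow{\;x_3 \to \infty\;} A\, e^{-m_\Gamma x_3}, \qquad
m_{\rm free} = 2\pi T,
\]

with interaction effects entering as corrections of $O(g^2)$; the vector-pseudoscalar splitting discussed above is a direct measure of these non-perturbative corrections.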
The $_{\Lambda}^{3}\rm H$ is a bound state of a proton (p), a neutron (n) and a $\Lambda$. Studying its characteristics provides insights into the strong interaction between the $\Lambda$ and ordinary nucleons. In particular, the $_{\Lambda}^{3}\rm H$ is an extremely loosely bound object with a large wavefunction. As a consequence, the measured (anti-)$_{\Lambda}^{3}\rm H$ production yields in pp and p-Pb collisions are extremely sensitive to nucleosynthesis models. Thanks to the very large samples of pp, p-Pb and Pb-Pb collisions collected during Run 2 of the LHC, the ALICE collaboration has performed systematic studies of the $_{\Lambda}^{3}\rm H$ lifetime, binding energy and production across different collision systems. The new ALICE results on the hypertriton properties have a precision comparable with the current world averages, and they can be used to constrain the state-of-the-art calculations that describe the $_{\Lambda}^{3}\rm H$ internal structure. Furthermore, with the precision of the presented production measurements, some configurations of the Statistical Hadronisation and Coalescence models can be excluded, leading to tighter constraints on the available theoretical models.
In the journey to explore the strong interaction among hadrons, ALICE has for the first time extended its femtoscopic studies to nuclei. The large data sample of high-multiplicity pp collisions at $\sqrt{s}$ = 13 TeV allows the measurement of the proton-deuteron (p-d) and hyperon-deuteron ($\Lambda$-d) momentum correlations. The femtoscopic study of these systems opens the door to investigating the formation mechanism of light nuclei in hadron-hadron collisions.
In this contribution, the measured correlation functions for p-d and $\Lambda$-d are presented and compared to theoretical predictions. In the case of p-d correlations, the data show a shallow depletion at low relative momenta, while full-fledged model calculations that include all relevant interactions predict a strong repulsive signal. Possible explanations include a late formation of the deuterons, leading to the suppression of strong interactions between protons and deuterons. In addition, the measured $\Lambda$-d correlation is in agreement with the hypothesis of no strong interaction due to the late formation of deuterons, supporting the findings in p-d. In general, we demonstrate how correlation functions can be exploited to study the production mechanism of light nuclei at the LHC.
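For reference, such femtoscopic correlation functions are usually interpreted through the Koonin-Pratt relation,

\[
C(k^*) = \int d^3 r \; S(r)\, \left| \psi(k^*, r) \right|^2 ,
\]

where $S(r)$ is the source function describing the distribution of emission distances and $\psi$ is the two-particle relative wave function; a late-formed deuteron modifies the effective source and thereby suppresses the strong-interaction signal, as discussed above.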
Correlations between charged particles provide important insights into hadronization processes. We present recent results on Bose-Einstein two-particle correlations using ATLAS data at a center-of-mass energy of 13 TeV. In addition, if available, an analysis of the momentum difference between charged hadrons in pp, p-Pb, and Pb-Pb collisions at various energies will be presented, in order to study the dynamics of hadron formation. The spectra of correlated hadron chains are explored and compared to predictions based on the quantized fragmentation of a three-dimensional QCD string (helix).
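Bose-Einstein correlations of this kind are commonly quantified with a two-particle correlation function in the Lorentz-invariant momentum difference $Q$, e.g. the exponential (Goldhaber-type) parametrization

\[
C_2(Q) = 1 + \lambda\, e^{-RQ}, \qquad Q^2 = -(p_1 - p_2)^2,
\]

where $R$ characterizes the effective source size and $\lambda$ the correlation strength (a Gaussian variant, $e^{-R^2 Q^2}$, is also used).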
BESIII has the world's largest samples of $J/\psi$ and $\psi(3686)$ events from $e^+e^-$ annihilations, which offer an ideal and clean laboratory to study light-meson spectroscopy, in particular to search for QCD exotics. Recent important achievements in this field will be highlighted, including the observation of a $1^{-+}$ state, the $\eta_1(1855)$, in $J/\psi \to \gamma\eta\eta'$, the observation of the $X(2600)$ in $J/\psi \to \gamma\eta'\pi^+\pi^-$, and the partial-wave analysis (PWA) of $J/\psi \to \gamma\eta'\eta'$.
Hybrid mesons form part of the exotic spectrum of the Standard Model. The recent observation of the isoscalar hybrid called the $\eta_1(1855)$ provides an important step towards the completion of the $1^{-+}$ nonet. In the present work, we analyze the masses and two-body decays of the members of this nonet using a model Lagrangian. The isovector $\pi_1(1600)$ has been studied extensively, both experimentally and on the lattice. We use the available experimental and lattice data to extract the coupling constants. Using these parameters, we analyze the possible decay channels of the hybrid kaons and the isoscalars. We find that the hybrid kaons have to be at least as broad as the $\pi_1(1600)$. We expect the isoscalars to mix only to a small extent. The mass and total width of the heavy isoscalar can be identified with those of the $\eta_1(1855)$ state reported by the BESIII collaboration if the mixing angle is taken to be small but non-zero. The light isoscalar, on the other hand, can be marginally lighter than the $\pi_1(1600)$ and significantly narrower.
The study of exotic mesons such as gluonic hybrids gives greater insight into how quarks and gluons bind to form such states and hence increases our understanding of the fundamental strong force. Furthermore, double-pion photoproduction is known as an ideal tool for the investigation of nucleon resonances, and especially of exotic meson states. To study the interference of meson resonance production and meson-baryon rescattering effects, we focus on the reaction $\gamma p \rightarrow \pi^+ \pi^- p$. Aiming at a description of the latest data collected at the CLAS12 and GlueX experiments, we use the Deck model with virtual pion exchange to generalize the moment-extraction formalism to linearly polarized photons. We compute the moments of the $\pi^+\pi^-$ angular distribution at $E_\gamma = 8.5$ GeV for $L=0,1,2,3,4$ in the helicity frame, i.e., the rest frame of the $\pi\pi$ system with the direction opposite to the recoil nucleon defining the $z$ axis. The importance of these moments is that one can use them to calculate the beam asymmetry.
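For clarity, the moments referred to are projections of the measured $\pi\pi$ angular intensity onto spherical harmonics in the helicity frame,

\[
\langle Y_{LM} \rangle = \int d\Omega \; I(\theta, \phi)\, \mathrm{Re}\, Y_{LM}(\theta, \phi),
\]

truncated at $L = 4$ as above; interference between the contributing partial waves populates these moments, and suitable combinations of them yield the beam asymmetry.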
https://indico.cern.ch/event/1161312/
Gala dinner