ICHEP is a series of international conferences organized by the C11 commission of the International Union of Pure and Applied Physics (IUPAP). It has been held every two years for more than 50 years and is the reference conference of particle physics, where the most relevant results are presented. At ICHEP, physicists from around the world gather to share the latest advances in particle physics, astrophysics/cosmology, and accelerator science, and to discuss plans for major future facilities.
Future high-precision studies of fundamental interactions at lepton colliders require high-intensity, low-emittance positron sources. Such sources are needed for e+e- facilities and also for μ+μ- facilities in which the muons are generated with positrons. The availability of powerful positron sources is, therefore, very important. In this context, positron sources providing higher yields, better emittance and better reliability than the SLC source are needed. Improvements in conventional positron sources, which use high-intensity electrons incident on thick metallic targets, are reaching limits due to the large energy deposited in the target and the high energy-deposition density associated with the required small beam sizes. Innovative solutions using the channeling radiation of electrons in axially aligned crystals provide high photon yields which, in turn, can provide a high positron production rate in an associated amorphous target. Such a system, composed of a crystal radiator and an amorphous converter, is known as a hybrid positron source. For linear colliders, which involve high incident electron intensities, a sweeping magnet placed between the two targets to deflect the charged particles created in the crystal mitigates the energy deposited in the converter. For circular e+e- colliders, which use more moderate intensities, the sweeping magnet in the hybrid source can be omitted. Both options will be presented together with simulations of the photon and positron production. In this framework, a study of the radiation emitted by a high-quality tungsten crystal at the DESY test-beam facility T21 will be presented and discussed.
Fermilab is considering several concepts for a future 2.4~MW upgrade for DUNE/LBNF, featuring extensions of the PIP-II linac and the construction of a new rapid-cycling synchrotron and/or accumulation rings. This talk will summarize the relationship between these scenarios, emphasizing their commonalities and tracing their differences back to the original design questions. In addition to a high-level summary of the two 2.4~MW upgrade scenarios, there is a brief discussion of staging, beamline capabilities, subsequent upgrades, and needed R&D.
The Muon g-2 Experiment at Fermilab has recently measured the muon magnetic anomaly with 460 parts-per-billion precision. This result is consistent with the measurement from the previous BNL experiment, and the combined Fermilab-BNL value deviates from the most recent Standard Model calculation provided by the Muon g-2 Theory Initiative at the level of 4.2 standard deviations. The muon anomaly is determined by measuring the muon spin precession, relative to the muon momentum, inside a storage ring of ~7 m radius with a very uniform and precisely measured magnetic field. To achieve the experimental precision required for probing the Standard Model, the effects of storage-ring beam and spin dynamics must be quantified and corrected for. This talk will give an overview of the beam dynamics corrections that are required for the Fermilab measurement.
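For orientation, the underlying spin-precession relation is the standard one (textbook form, not specific to the Fermilab analysis):
\[
\vec{\omega}_a \;=\; -\frac{q}{m}\left[a_\mu \vec{B} - \left(a_\mu - \frac{1}{\gamma^2-1}\right)\frac{\vec{\beta}\times\vec{E}}{c}\right],
\]
so that at the "magic" momentum, $\gamma = \sqrt{1+1/a_\mu} \approx 29.3$ ($p \approx 3.09$ GeV/$c$), the electric-field term vanishes at leading order; the beam-dynamics corrections discussed in this talk quantify the residual effects.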
Recently the Muon g-2 collaboration published the most precise measurement of the anomalous magnetic moment of the muon, $a_\mu$, with a 460 ppb uncertainty based on the Run 1 data. The measurement principle is based on a clock comparison between the anomalous spin precession frequency of spin-polarized muons and a high-precision measurement of the magnetic field environment using nuclear magnetic resonance (NMR) techniques, expressed by the (free) proton spin precession frequency. To achieve the ultimate goal of a 140 ppb uncertainty on $a_\mu$, the magnetic field in the storage region of the muons needs to be known with a total uncertainty of less than 70 ppb. Three devices are used to measure and calibrate the magnetic field in the Muon g-2 storage ring: (a) an absolute calibrated NMR probe, (b) a movable array of NMR probes that can be pulled through the storage region of the muons and (c) a set of NMR probes in the vicinity of the storage region. In this talk, we present the measurement and tracking principle of the magnetic field and point out improvements implemented for the analysis of the data recorded in Run 2 and Run 3.
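For reference, $a_\mu$ is then obtained from the two measured frequencies through the standard ratio (schematic form; the full expression includes further external factors):
\[
a_\mu \;=\; \frac{\mathcal{R}}{\lambda - \mathcal{R}}, \qquad \mathcal{R} \equiv \frac{\omega_a}{\tilde{\omega}'_p}, \qquad \lambda \equiv \frac{\mu_\mu}{\mu'_p},
\]
which makes explicit why the field, expressed as a shielded-proton precession frequency, must be known to better than 70 ppb for a 140 ppb goal on $a_\mu$.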
The reduction of random motion in particle beams, known as beam cooling, has dramatically extended the science reach of many accelerator facilities, with applications ranging from high-energy colliders to the accumulation of antimatter for tests of CPT symmetry and gravity. One of the primary research frontiers in beam cooling is the realization of advanced cooling concepts that have system bandwidths of tens to hundreds of terahertz and achievable cooling rates that exceed the state of the art by up to four orders of magnitude. Here we describe the successful experimental validation of Optical Stochastic Cooling (OSC), which constitutes the first demonstration of any advanced cooling concept. This demonstration is part of a broader advanced beam-cooling research program at Fermilab that also includes high-energy electron cooling and future efforts in laser cooling of ions. The OSC method, first proposed nearly three decades ago, derives from S. van der Meer's stochastic cooling (SC), which was instrumental in the discovery of the W and Z bosons at CERN and the top quark at Fermilab. In SC, a circulating beam is sampled and corrected (cooled) using microwave pickups and kickers with a bandwidth of a few GHz. OSC replaces these microwave elements with optical-frequency analogs, such as magnetic undulators and optical amplifiers, and uses each particle's radiation to sense and correct its phase-space errors. The OSC experiment, which was carried out at Fermilab's Integrable Optics Test Accelerator (IOTA), used 100-MeV electrons and a radiation wavelength of 950 nm, and achieved a total damping rate approximately 8 times greater than the natural longitudinal damping rate due to synchrotron radiation. Coupling of the longitudinal and transverse planes enabled simultaneous cooling in all degrees of freedom. The integrated system demonstrated sub-femtosecond stability and a bandwidth of ~20 THz, a factor of ~2000 higher than conventional microwave SC systems. Additionally, detailed experiments were performed demonstrating and characterizing OSC with a single particle in IOTA. This first demonstration of SC at optical frequencies serves as a foundation for more advanced experiments with high-gain optical amplification and advances opportunities for future operational OSC systems at colliders and other accelerator facilities.
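A rough way to see why bandwidth matters is the standard stochastic-cooling scaling (idealized, up to factors of order unity depending on mixing and noise): for $N$ particles sampled by a system of bandwidth $W$, the optimal cooling time is
\[
\tau_{\rm cool} \;\sim\; \frac{N}{2W},
\]
so moving from a few GHz to $\sim$20 THz of bandwidth shortens the achievable cooling time by roughly the quoted three to four orders of magnitude.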
The main goal of the Mu2e experiment at Fermilab is to search for indications of charged lepton flavor violation [1]. To achieve this goal, experimenters will search for the coherent neutrinoless conversion of a negative muon into an electron in the field of a nucleus, by measuring the ~105-MeV electrons emitted in conversions of negative muons in an Al target. This will allow Mu2e to probe effective new-physics mass scales up to the $10^{3}$–$10^{4}$ TeV range. One of the central elements of the Mu2e experimental facility is its target station, where negative pions are generated in interactions of the 8 GeV primary proton beam with a rod-shaped tungsten target; the station will be capable of producing around $3.6\cdot 10^{20}$ stopped negative muons in three years of running [2]. The Mu2e experiment is planned to be extended to a next-generation experiment, Mu2e-II, with a single-event sensitivity improved by a factor of 10 or more. Mu2e-II will probe new-physics mass scales up to $10^{5}$ TeV by utilizing an 800-MeV, 100-kW proton beam. This greater sensitivity is within reach thanks to the PIP-II accelerator upgrade, a 250-meter-long linac capable of accelerating a 2-mA proton beam to a kinetic energy of 800 MeV, corresponding to 1.6 MW (the power not used by Mu2e-II will be directed to a neutrino experiment). The higher beam intensity requires a substantially more advanced target design. We are studying a novel conveyor target with tungsten or carbon spherical target elements moved through the beam path. The elements can be moved either purely mechanically or mechanically with the assistance of a He-gas flow. In this talk, we will discuss our recent advances in conceptual design R&D for a Mu2e-II target station based on energy-deposition and radiation-damage simulations. Our study involves Monte Carlo codes (MARS15 [3], G4beamline [4], and FLUKA [5]) and thermal and mechanical ANSYS analyses to estimate the stability of the system. The concurrent use of these simulation codes is intended to allow us to determine and minimize the systematic uncertainty of the simulations. Our simulations allowed us to rule out some other designs (rotating and fixed granular targets) as less practical and supported our assessment of the new target station's required working parameters and constraints. The thermal and mechanical analyses we performed determined the choice of cooling scheme and prospective materials for the conveyor's spherical elements. We will discuss the first prototype of the Mu2e-II target and the mechanical tests performed at Fermilab, which indicated both the feasibility of the proposed design and its weaknesses, and we will suggest directions for further improvement.
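For context, the quoted signal energy follows from simple kinematics (standard numbers for aluminium, given here for illustration):
\[
E_{ce} \;=\; m_\mu c^2 - E_{\rm bind}(1s) - E_{\rm recoil} \;\approx\; (105.66 - 0.48 - 0.21)\ \text{MeV} \;\approx\; 104.97\ \text{MeV},
\]
which places the conversion electrons just above the endpoint region of the muon decay-in-orbit background.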
References
[1] Bartoszek L., Barnes L., Miller J.P., Mott A., Palladino A., Quirk J., et al., Mu2e Technical Design Report, FERMILAB-TM-2594, FERMILAB-DESIGN-2014-01, arXiv:1501.05241 (2014).
[2] Bernstein R., The Mu2e Experiment, Front. Phys. 7 (2019) 1.
[3] Mokhov N.V., James C.C., The MARS Code System User's Guide, Version 15 (2016), Fermilab-FN-1058-APC (2017). Available from: https://mars.fnal.gov
[4] Roberts T., G4beamline User's Guide 3.06 (2018). Available from: http://www.muonsinternal.com/muons3/g4beamline/G4beamlineUsersGuide.pdf
[5] Böhlen T.T., Cerutti F., Chin M.P.W., et al., The FLUKA Code: Developments and Challenges for High Energy and Medical Applications, Nucl. Data Sheets 120 (2014) 211–214.
Beam extraction and collimation in particle accelerators using bent crystals, compact elements capable of efficiently steering particle beams, have been investigated at several high-energy hadron accelerators, such as the SPS and LHC (CERN, Geneva), the Tevatron (Batavia, USA), and U-70 (Protvino, Russia). Owing to technological limitations and an insufficiently deep understanding of the physics underlying the interaction between charged-particle beams and crystals, this technique has never been applied to electron beams.
Recent innovative experiments carried out at SLAC (Stanford, USA) and MAMI (Mainz, Germany) have raised the technological readiness level and the understanding of the interaction between crystals and electron beams, highlighting the possibility of using bent crystals to extract electron beams from synchrotrons worldwide.
In this contribution we report the first design of a proof-of-principle experiment aiming to use bent crystals to extract the 6 GeV electrons circulating in the DESY II Booster Synchrotron. This is made possible by the phenomenon of channeling: particles channeled in a crystal are forced to travel along its atomic planes or axes, so that mechanically bending the crystal steers the beam, with an effect equivalent to that of a magnetic field of a few hundred tesla.
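The "few hundred tesla" figure can be checked with the standard magnetic-rigidity relation (illustrative numbers, not the actual crystal parameters of the experiment):
\[
B_{\rm eq}\,[\mathrm{T}] \;\simeq\; \frac{p\,[\mathrm{GeV}/c]}{0.3\,R\,[\mathrm{m}]},
\]
so a 6 GeV/$c$ electron bent with an effective radius of, say, $R = 5$ cm corresponds to $B_{\rm eq} \approx 400$ T, far beyond what conventional magnets can deliver.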
We investigated the experimental setup in detail; in this report we focus on its main aspects: the beam dynamics during the extraction process, the manufacturing and characterization of the bent crystals, and the detection of the extracted beam.
We conclude that, following a successful proof-of-principle experiment, this technique could be applied at many existing lepton accelerators for nuclear- and particle-physics experiments and generic detector R&D, as well as in many high-energy-physics projects requiring fixed-target experiments, including projects related to lepton colliders.
`Preheating' refers to non-perturbative particle production at the end of cosmic inflation. In many modern inflationary models, this process is predominantly or partly tachyonic, that is, it proceeds through a tachyonic instability in which the effective mass-squared of the inflaton field is negative. An example of such a model is Higgs inflation formulated in Palatini gravity, where the Standard Model Higgs field is the inflaton. The violent dynamics of such a strong instability can lead to copious production of gravitational waves and supermassive dark matter. I discuss the phenomenology of such models and the related CMB predictions.
According to current experimental data, the SM Higgs vacuum appears to be metastable due to the development of a second, lower ground state in the Higgs potential. Consequently, vacuum decay would induce the nucleation of true-vacuum bubbles with catastrophic consequences for our false-vacuum Universe. Since such an event would render our Universe incompatible with measurements, we are motivated to study possible stabilising mechanisms in the early universe. In our current investigation, we study the experimentally motivated metastability of the electroweak vacuum in the context of the observationally favoured model of Starobinsky inflation. Following the motivation and techniques of our first study (2011.037633), we wish to obtain similar constraints on the Higgs-curvature coupling $\xi$, while treating Starobinsky inflation more rigorously. Thus, we embed the SM in the modified-gravity scenario $R+R^2$, which introduces Starobinsky inflation naturally, with significant repercussions for the effective Higgs potential in the form of additional negative terms that destabilize the vacuum. Another important aspect lies in the definition of the end of inflation, as bubble nucleation is prominent during its very last moments. Our results dictate stronger lower bounds on $\xi$ that are very sensitive to the final moments of inflation.
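Schematically, the curvature coupling enters the effective potential as (generic form; the detailed potential used in the analysis contains further corrections):
\[
V_{\rm eff}(h) \;\supset\; \frac{\lambda(h)}{4}h^4 + \frac{1}{2}\,\xi R\, h^2,
\]
where, in the convention with $R \simeq 12H^2 > 0$ during inflation, a sufficiently large positive $\xi$ lifts the effective potential and suppresses vacuum decay; this is the origin of the lower bounds on $\xi$ quoted above.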
In this talk, I will present a short overview of the connection between particle physics and phase transitions in the early and very early universe. I will then focus on phase transitions during inflation and present recent results on how to use the stochastic spectral expansion to perform phenomenology calculations. I will also talk about the interplay between the electroweak phase transition, new physics at the TeV-scale and experimental constraints.
Bubble nucleation is a key ingredient in a cosmological first-order phase transition. The non-equilibrium bubble dynamics and the properties of the transition are controlled by the density perturbations in the hot plasma. We present, for the first time, the full solution of the linearized Boltzmann equation. Our approach, unlike the traditional one based on the fluid approximation, does not rely on any ansatz. We focus on the contributions arising from the top quark species coupled to the Higgs field during a first-order electroweak phase transition. Our results differ significantly from those obtained in the fluid approximation, with sizeable differences in the friction acting on the bubble wall.
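The starting point is the Boltzmann equation for the top-quark distribution in the wall frame, in its standard schematic form:
\[
\left(\frac{p_z}{E}\,\partial_z - \frac{\partial_z m_t^2(z)}{2E}\,\partial_{p_z}\right) f \;=\; -\,\mathcal{C}[f], \qquad f = f_{\rm eq} + \delta f,
\]
where the fluid approximation truncates $\delta f$ to a few momentum moments, whereas here the linearized equation is solved in full.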
Extensions of the Higgs sector of the Standard Model allow for a rich cosmological history around the electroweak scale. We show that besides the possibility of strong first-order phase transitions, which have been thoroughly studied in the literature, also other important phenomena can occur, like the non-restoration of the electroweak symmetry or the existence of vacua in which the Universe becomes trapped, preventing a transition to the electroweak minimum. Focusing on the next-to-minimal two-Higgs-doublet model (N2HDM) of type II and taking into account the existing theoretical and experimental constraints, we identify the scenarios of electroweak symmetry non-restoration, vacuum trapping and first-order phase transition in the thermal history of the Universe. We analyze these phenomena and in particular their relation to each other, and discuss their connection to the predicted phenomenology of the N2HDM at the LHC. Our analysis demonstrates that the presence of a global electroweak minimum of the scalar potential at zero temperature does not guarantee that the corresponding N2HDM parameter space will be physically viable: the existence of a critical temperature at which the electroweak phase becomes the deepest minimum is not sufficient for a transition to take place, necessitating an analysis of the tunnelling probability to the electroweak minimum for a reliable prediction of the thermal history of the Universe.
We present a simple extension of the Standard Model with three right-handed neutrinos and an additional $U(1)_\text{F}$ abelian flavor symmetry, with a non-standard leptonic charge $L_e-L_\mu-L_\tau$ for the lepton doublets and arbitrary right-handed charges. We present a see-saw realization of this scenario. The baryon asymmetry of the Universe is generated via thermal leptogenesis through CP-violating decays of the heavy sterile neutrinos. We present a detailed numerical solution of the relevant Boltzmann equations in two different scenarios: three quasi-degenerate heavy Majorana neutrino masses and a hierarchical mass spectrum.
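The relevant system takes the standard schematic form (written for a single decaying species $N_1$; the analysis tracks all heavy neutrinos):
\[
\frac{dN_{N_1}}{dz} = -(D+S)\left(N_{N_1}-N_{N_1}^{\rm eq}\right), \qquad
\frac{dN_{B-L}}{dz} = -\varepsilon_1 D\left(N_{N_1}-N_{N_1}^{\rm eq}\right) - W\,N_{B-L},
\]
with $z=M_1/T$, where $D$, $S$ and $W$ denote the decay, scattering and washout terms and $\varepsilon_1$ is the CP asymmetry in $N_1$ decays.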
We study single-field slow-roll inflation in the presence of $F(R)$ gravity in the Palatini formulation. In contrast to metric $F(R)$, when rewritten in terms of an auxiliary field and moved to the Einstein frame, Palatini $F(R)$ does not develop a new dynamical degree of freedom. However, it is not possible to solve the constraint equation of the auxiliary field analytically for a general $F(R)$. We propose a method that allows us to circumvent this issue and compute the inflationary observables. We apply this method to test scenarios of the form $F(R) = R + \alpha R^n$ and find that, as in the previously known $n=2$ case, a large $\alpha$ suppresses the tensor-to-scalar ratio $r$. We also find that models with $F(R)$ increasing faster than $R^2$ for large $R$ suffer from numerous problems, with possible implications for the theoretically allowed UV behaviour of such Palatini models. The talk is based on arXiv:2112.12149.
The $(g-2)_{\mu}$ anomaly is a longstanding problem in particle physics, and many models have been proposed to explain it. Leptoquark (LQ) models can solve this anomaly because of their chiral enhancements. In this talk, we consider models extended simultaneously by a LQ and a vector-like quark (VLQ). In the minimal LQ models, only the $R_2$ and $S_1$ representations can lead to chiral enhancements. Here, we find one new $S_3$ solution to the anomaly in the presence of an $(X,T,B)_{L,R}$ triplet. We also consider models extended by one LQ and two VLQs. We then propose new LQ search channels under the constraints of $(g-2)_{\mu}$. Besides the traditional $t\mu$ decay channel, the LQ can also decay into $T\mu$ final states, which leads to characteristic multi-top and multi-muon signals at hadron colliders.
In recent times, several hints of lepton flavour universality violation have been observed in semileptonic B decays, pointing towards the existence of New Physics beyond the Standard Model. In this context, we consider a new variant of the $U(1)_{L_{\mu}-L_{\tau}}$ gauge extension of the Standard Model, containing three additional neutral fermions $N_{e}, N_{\mu}, N_{\tau}$, along with a $(\bar{3},1,1/3)$ scalar leptoquark (SLQ) and an inert scalar doublet, to study the phenomenology of light dark matter, neutrino mass generation and flavour anomalies on a single platform. The lightest mass eigenstate of the $N_{\mu}, N_{\tau}$ neutral fermions plays the role of dark matter. The light gauge boson associated with the $U(1)_{L_\mu-L_\tau}$ gauge group mediates between the dark and visible sectors and helps to obtain the correct relic density. The spin-dependent WIMP-nucleon cross section is obtained in the leptoquark portal and is checked for consistency with the CDMSlite bound. Further, we constrain the new model parameters using the branching ratios of various $b \to s\ell\ell$ and $b \to s \gamma$ decay processes as well as the lepton flavour non-universality observables $R_{K^{(*)}}$, and then show the implications for the branching ratios of some rare semileptonic $B \to (K^{(*)}, \phi)+$ missing energy processes. The light neutrino masses in this model framework can be generated at one-loop level through a radiative mechanism.
I will briefly discuss the signatures and discovery prospects of several new-physics models containing dark matter candidates at future lepton colliders. In particular, I will discuss the IDM as well as the THDMa. Based on https://arxiv.org/abs/2203.07913
The Inert Doublet Model (IDM) is a simple extension of the Standard Model, introducing an additional Higgs doublet that brings in four new scalar particles. The lightest of the IDM scalars is stable and is a good candidate for a dark matter particle. The potential of discovering the IDM scalars in the experiment at the Compact Linear Collider (CLIC), an e$^+$e$^-$ collider proposed as the next generation infrastructure at CERN, has been tested for two high-energy running stages, at 1.5 TeV and 3 TeV centre-of-mass energy. The CLIC sensitivity to pair-production of the charged IDM scalars was studied using the full detector simulation for selected high-mass IDM benchmark scenarios and the semi-leptonic final state. To extrapolate the results to a wider range of IDM benchmark scenarios, the CLIC detector model in DELPHES was modified to take into account the $\gamma\gamma\to$ had. beam-induced background. Results of the study indicate that heavy charged IDM scalars can be discovered at CLIC for most of the considered benchmark scenarios, up to masses of the order of 1 TeV.
Many scenarios of physics beyond the Standard Model predict dark sectors containing new particles interacting only feebly with ordinary matter. Collider searches for these scenarios have largely focused on identifying signatures of new mediators, leaving much of the dark sector structure unexplored. We investigate the existence of a light dark-matter bound state, the darkonium $\Upsilon_D$, predicted in minimal dark sector models, which can be produced through the reaction $e^+e^-\to \gamma\Upsilon_D$, with $\Upsilon_D\to A'A'A'$ and the dark photons $A'$ decaying to pairs of leptons or pions. This search explores new dark sector parameter space, illustrating the importance of $B$-factories in fully probing low-mass new physics. The results are based on the full data set of about 500 $\text{fb}^{-1}$ collected at the $\Upsilon(4S)$ resonance by the $BABAR$ detector at the PEP-II collider.
A resonant structure has been observed at ATOMKI in the invariant mass of electron-positron pairs produced after the excitation of nuclei such as $^8$Be and $^4$He by means of proton beams. Such a resonant structure can be interpreted as the production of a hypothetical particle (X17) with a mass of around 17 MeV.
The MEG-II experiment at the Paul Scherrer Institut, whose primary physics goal is the search for the charged-lepton-flavour-violating process $\mu \rightarrow e \gamma$, is in a position to confirm and study this observation. MEG-II employs a proton source able to accelerate protons up to a kinetic energy of about 1 MeV. These protons are absorbed in a thin target, where they excite nuclear transitions that produce photons for the calibration of the Xenon calorimeter of the MEG-II detector.
Using a new, thinner target containing Li atoms, the $^7$Li(p,e$^+$e$^-$)$^8$Be process is being studied with a magnetic spectrometer comprising a cylindrical drift chamber and a system of fast scintillators. The aim is to reach a better invariant-mass resolution than previous experiments and to study the production of the X17.
A first dedicated data-taking period was conducted in 2022, during which the first internal pair-creation events were observed. We report the first results of the study of the X17 particle.
In this talk, I will present results from a global fit of Dirac fermion dark matter (DM) effective field theory using the GAMBIT software. We include operators up to dimension 7 that describe the interactions between a gauge-singlet Dirac fermion and Standard Model quarks, gluons, and the photon. Our fit includes the latest constraints from the Planck satellite, direct and indirect detection experiments, and the LHC. For DM masses below 100 GeV, we find that it is impossible to simultaneously satisfy all constraints while maintaining EFT validity at high energies. For higher masses, large regions of parameter space exist where the EFT remains valid and reproduces the observed DM abundance.
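Representative operators of the type entering such a fit are (illustrative normalizations only; the basis and conventions are those of the analysis itself):
\[
\frac{1}{\Lambda^2}\,(\bar{\chi}\gamma^\mu\chi)(\bar{q}\gamma_\mu q), \qquad
\frac{m_q}{\Lambda^3}\,(\bar{\chi}\chi)(\bar{q}q), \qquad
\frac{\alpha_s}{\Lambda^3}\,(\bar{\chi}\chi)\,G^{a\,\mu\nu}G^a_{\mu\nu},
\]
where $\Lambda$ is the EFT cutoff whose validity is tested against the typical energy transfer of each experiment.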
The stability of particles in the cosmic soup is an important property, as it governs their evolution in the cosmos at both the perturbation and the background level. In this work, we update the constraints on the decay rate of decaying cold dark matter (DCDM), particularly for the case where the decay products are dark and massless, or well within the relativistic limit. As a base case, we assume that all dark matter is decayable. We then extend the analysis to the scenario where only a fraction of the dark matter can decay. We use the latest Planck temperature and polarization measurements with lensing, together with BAO measurements from SDSS, to place significantly tighter constraints on the decay rate than previous work in this direction.
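At the background level the DCDM scenario is described by the standard coupled continuity equations for decay into dark radiation:
\[
\dot{\rho}_{\rm dcdm} + 3H\rho_{\rm dcdm} = -\Gamma\,\rho_{\rm dcdm}, \qquad
\dot{\rho}_{\rm dr} + 4H\rho_{\rm dr} = +\Gamma\,\rho_{\rm dcdm},
\]
with $\Gamma$ the decay rate being constrained; in the fractional scenario only a fraction $f_{\rm dcdm}$ of the dark matter obeys the first equation.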
Dark matter interactions with Standard Model particles can inject energy at early times, altering the standard evolution of the early universe. In particular, this energy injection can perturb the spectrum of the cosmic microwave background (CMB) away from that of a perfect blackbody. In this talk, I will discuss recent work to update the DarkHistory code package to more carefully track interactions among low-energy electrons, hydrogen atoms, and radiation, in order to accurately compute the evolution of the CMB spectral distortion in the presence of dark matter energy injection. I will show results for the contribution to the spectral distortions from redshifts z < 3000 for arbitrary energy injection scenarios.
Relativistic protons and electrons in the extremely powerful jets of blazars may boost, via elastic collisions, the dark matter particles in the surroundings of the source to high energies. The blazar-boosted dark matter flux at Earth may be sizeable, larger than the flux associated with the analogous process of DM boosted by galactic cosmic rays, and relevant for direct detection of dark matter particles lighter than 1 GeV with both target nuclei and electrons. From the null detection of a signal by XENON1T, MiniBooNE, and Borexino with nuclei (and by Super-K with electrons), we have derived limits on the dark matter-nucleus spin-independent and spin-dependent (and dark matter-electron) scattering cross sections which, depending on the modelling of the source, can improve on other currently available bounds for light DM candidates by one to five orders of magnitude.
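The boost relies on standard two-body elastic kinematics: a jet particle of mass $m_i$ and kinetic energy $T_i$ can transfer to a dark matter particle of mass $m_\chi$ at most
\[
T_\chi^{\rm max} \;=\; \frac{T_i^2 + 2m_i T_i}{T_i + (m_i+m_\chi)^2/(2m_\chi)},
\]
so relativistic jet constituents can push even sub-GeV dark matter well above the $\sim$keV energy thresholds of direct-detection experiments.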
We consider the well-motivated scenario of dark matter annihilation with a velocity-dependent cross section. At higher speeds, dark matter annihilation may be either enhanced or suppressed, which affects the relative importance of targets like galactic subhalos, the Galactic Center, or extragalactic halos. We consider a variety of new strategies for determining the associated J-factors, and for extracting information about the velocity-dependence of the cross section from gamma-ray data, including the study of non-Poisson fluctuations in the photon count, and the use of likelihood-free inference.
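For a cross section parametrized as $\sigma v \propto v^n$, the usual J-factor generalizes to a velocity-weighted form (schematic, under standard assumptions on the phase-space distribution $f$, normalized so that $\int f\,d^3v = \rho$):
\[
J_{\rm eff}(\theta) \;=\; \int d\ell \int d^3v_1\, d^3v_2\; f(\vec{r},\vec{v}_1)\,f(\vec{r},\vec{v}_2)\left(\frac{|\vec{v}_1-\vec{v}_2|}{c}\right)^{\!n},
\]
which reduces to the familiar $J=\int \rho^2\,d\ell$ for $n=0$ and makes explicit why different targets weight the velocity dependence differently.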
The large gap between a galactic dark matter subhalo's velocity and its own gravitational binding velocity means that dark matter soft-scattering on baryons can evaporate the subhalo, provided the kinetic energy transfer via low-momentum exchange is efficient. Small subhalos can evaporate before the dark matter thermalizes with baryons, owing to their low binding velocity. If dark matter acquires an electromagnetic dipole moment, the survival of low-mass subhalos places stringent limits on photon-mediated soft scattering. We calculate the subhalo evaporation rate via soft collisions with ionized gas and accelerated cosmic rays, and place an upper limit on the DM's electromagnetic form factor by requiring the survival of subhalos in the ionized Galactic interior. We also show that subhalos lighter than $10^{-5}M_{\odot}$ in the gaseous inner Galactic region are subject to evaporation via dark matter's effective electric and magnetic dipole moments below current direct detection limits.
We go beyond the state of the art by combining first-principles lattice results with an effective-field-theory approach, the Polyakov-loop model, to explore the non-perturbative dark deconfinement-confinement phase transition and the generation of gravitational waves in a pure-gluon dark Yang-Mills theory. We further include fermions in different representations in the dark sector. Employing the Polyakov-Nambu-Jona-Lasinio (PNJL) model, we find that the relevant gravitational-wave signatures depend strongly on the choice of representation. We also find a remarkable interplay between the deconfinement-confinement and chiral phase transitions. In both scenarios, the future Big Bang Observer experiment has a higher chance of detecting the gravitational-wave signals.
Physics in (canonical) quantum gravity needs to be manifestly diffeomorphism-invariant. Consequently, physical observables need to be formulated in terms of manifestly diffeomorphism-invariant operators, which are necessarily composite. This makes their evaluation involved in general, even if the concrete implementation of quantum gravity should be treatable (semi-)perturbatively.
A similar problem exists also in flat-space gauge theories, even at arbitrarily weak coupling. In such cases a mechanism developed by Fröhlich, Morchio and Strocchi turns out to be highly successful in giving analytical access to the bound state properties. As will be shown, the conditions under which it can be applied are also satisfied by many quantum gravity theories. Its application will be illustrated by applying it to a canonical quantum gravity theory to determine the leading properties of curvature excitations and particles with and without spin.
The all-order structure of scattering amplitudes is greatly simplified by the use of (generalized) Wilson line operators, describing (subleading) soft emissions from straight lines extending to infinity. In this talk I will review how these techniques (originally developed for QCD phenomenology) can be naturally applied to gravitational scattering. At the quantum level, we find a convenient way to derive the exponentiation of the (subleading) graviton Reggeization. At the classical level, the formalism provides a powerful tool for the computation of observables relevant in the gravitational wave program.
The radion equilibrium in the Randall-Sundrum model is guaranteed by the backreaction of a bulk scalar field. We studied the radion dynamics in an extended scenario in which an intermediate brane exists between the UV and IR branes. We conducted an analysis in terms of the Einstein equations and the effective Lagrangian after applying the Goldberger-Wise mechanism. Our result shows that in the multibrane RS model a unique radion field is identified as the legitimate perturbation of the RS metric.
We present a method to obtain a scalar potential at tree level from a pure gauge theory on nilmanifolds, a class of negatively-curved compact spaces, and discuss the spontaneous symmetry breaking mechanism induced in the residual Minkowski space after compactification at low energy. We show that the scalar potential is completely determined by the gauge symmetries and the geometry of the compact manifold. In order to allow for simple analytic calculations we consider three extra space dimensions as the minimal example of a nilmanifold, therefore considering a pure Yang-Mills theory in seven dimensions. We further investigate the effective potential at one-loop and the spectrum when fermions are included.
While CP violation has not been observed so far in processes mediated by the strong force, the QCD Lagrangian admits a CP-odd topological term proportional to the so-called theta angle, which weighs the contributions to the partition function from different topological sectors. The observational bounds are usually interpreted as demanding a severe tuning of theta against the phases of the quark masses, which constitutes the strong CP problem. In this talk, we challenge this view and argue that when taking the correct 4d infinite volume limit the theta angle drops out of correlation functions, so that it becomes unobservable and the CP symmetry is preserved. We arrive at this result by either using instanton computations or by relying on general arguments based on the cluster decomposition principle and the index theorem.
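In standard notation, the term in question and the physical combination usually considered observable are
\[
\mathcal{L}_\theta \;=\; \theta\,\frac{g_s^2}{32\pi^2}\,G^a_{\mu\nu}\tilde{G}^{a\,\mu\nu}, \qquad
\bar{\theta} \;=\; \theta + \arg\det M_q,
\]
with $\tilde{G}^{a\,\mu\nu}=\tfrac{1}{2}\epsilon^{\mu\nu\rho\sigma}G^a_{\rho\sigma}$; the claim of this talk is that, in the correct 4d infinite-volume limit, correlation functions do not in fact depend on this combination.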
ALICE is the LHC experiment specifically designed to study the properties of the quark-gluon plasma (QGP), a deconfined state of matter created in ultrarelativistic heavy-ion collisions. In this context, light-flavour particle production measurements play a key role, as they can probe statistical hadronization and partonic collectivity. Recent measurements in small collision systems (pp and p-Pb) highlighted a progressive onset of collective phenomena in which charged-particle multiplicity is the driving quantity for all the considered observables. This evidence raised the question: what is the smallest hadronizing system that features collective-like phenomena? For this reason, small collision systems play a key role in the study of particle production in fine-grained multiplicity intervals, from low centre-of-mass energies to higher ones. In this contribution, final results on the production of light-flavour hadrons in pp collisions at $\sqrt{s}$ = 5.02 TeV will be presented, extending to low multiplicity the observations reported in pp, p-Pb and A-A interactions. Final considerations will be discussed concerning the system-size dependence of charged-particle distributions in ultra-thin multiplicity intervals. Finally, a first look at the newest 900 GeV pp data sample, collected in October 2021, will also be presented, reaching the lowest multiplicity ever probed at the LHC.
One of the main goals of the STAR experiment is to map the QCD phase diagram. The flow harmonics of azimuthal anisotropy ($v_{2}$ and $v_{3}$) of particles are sensitive to the initial dynamics of the medium. The RHIC Beam Energy Scan Phase-I (BES-I) program demands precision measurements of $v_{2}$ and $v_{3}$, specifically for $\phi$ mesons and multi-strange hadrons, in the low-energy regime.
STAR has recently finished the data taking for Beam Energy Scan Phase-II (BES-II) program with higher statistics, improved detector condition, and wider pseudorapidity coverage compared to what was available during BES-I program. In this talk, we will present the measurements of $v_{2}$ and $v_{3}$ of strange and multi-strange hadrons ($K_{S}^{0}$, $\Lambda (\bar{\Lambda})$, $\phi$, $\Xi^{-} (\bar{\Xi}^{+})$, and $\Omega^{-} (\bar{\Omega}^{+})$) at $\sqrt{s_{NN}}$ = 14.6 and 19.6 GeV. The centrality dependence, the number of constituent quark (NCQ) scaling, and baryon to anti-baryon difference in $v_{2}$ and $v_{3}$ will be presented. Finally, the physics implications of our measurements in the context of partonic collectivity will be discussed.
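The harmonics are defined through the standard Fourier expansion of the azimuthal particle distribution:
\[
\frac{dN}{d\phi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[n(\phi - \Psi_n)\right],
\]
where $\Psi_n$ is the $n$-th order symmetry-plane angle and $v_2$ and $v_3$ are the elliptic and triangular flow coefficients measured here.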
Strange and multi-strange hadrons have small hadronic cross-sections compared to light hadrons, making them an excellent probe for understanding the initial stages of relativistic heavy-ion collisions and the dynamics of QCD matter. Isobar collisions, $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr, at $\sqrt{s_{\mathrm {NN}}}$ = 200 GeV have been performed at RHIC. These collisions are considered an effective way to minimize the flow-driven background contribution in the search for a possibly small CME signal. The deformation parameters differ between the two species, and flow measurements are highly sensitive to them. Elliptic flow measurements in these collisions also give direct information about the initial-state spatial anisotropies. The collected datasets include approximately two billion events for each of the isobar species and provide a unique opportunity for statistics-hungry measurements such as flow coefficients of multi-strange hadrons.
In this talk, we will present the elliptic flow ($v_{2}$) of $K_{s}^{0}$, $\Lambda$, $\bar{\Lambda}$, $\phi$, $\Xi^{-}$, $\bar{\Xi}^{+}$, $\Omega^{-}$, and $\bar{\Omega}^{+}$ at mid-rapidity ($|y|$ $<$ 1.0) for Ru+Ru and Zr+Zr collisions at $\sqrt{s_{\mathrm {NN}}}$ = 200 GeV. The dependence of $v_{2}$ on centrality and transverse momentum ($p_{T}$) will be shown. The results will be compared with data from other collision systems like Cu+Cu, Au+Au, and U+U. The physics implications of such measurements in the context of nuclear deformation in isobars will be also discussed.
One of the key challenges of hadron physics today is understanding the origin of strangeness enhancement in high-energy hadronic collisions, i.e. the increase of (multi)strange hadron yields relative to non-strange hadron yields with increasing charged-particle multiplicity. In particular, what remains unclear is the relative contribution to this phenomenon from hard and soft QCD processes and the role of initial-state effects such as effective energy. The latter is the difference between the total centre-of-mass energy and the energy of leading baryons emitted at forward/backward rapidities. The superior tracking and particle-identification capabilities of ALICE make this detector unique in measuring (multi)strange hadrons via the reconstruction of their weak decays over a wide momentum range. The effective energy is measured using zero-degree hadronic calorimeters (ZDC).
In this talk, recent results on K$^0_S$ and $\Xi$ production in- and out-of-jets in pp collisions at $\sqrt{s}$=13 TeV using the two-particle correlation method are presented. To address the role of initial and final state effects, a double differential measurement of (multi)strange hadron production as a function of multiplicity and effective energy is also presented. The results of these measurements are compared to expectations from state-of-the-art phenomenological models implemented in commonly used Monte Carlo event generators.
The LHCb spectrometer has the unique capability to function as a fixed-target experiment by injecting gas into the LHC beampipe while proton or ion beams are circulating. The resulting beam+gas collisions cover an unexplored energy range that is above previous fixed-target experiments, but below the top RHIC energy for AA collisions. Here we present new results on antiproton and charm production from pHe, pNe, and PbNe fixed-target collisions at LHCb. Comparisons with various theoretical models of particle production and transport through the nucleus will be discussed.
The MoEDAL experiment, deployed at IP8 on the LHC ring, was the first dedicated search experiment to take data at the LHC, in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, and massive slowly moving charged particles in p-p and heavy-ion collisions. We will report on our most recent result, published in Nature, of our search for magnetic monopole production via the Schwinger mechanism.
Schwinger showed that electrically charged particles can be produced in a strong electric field by quantum tunnelling through the Coulomb barrier. By electromagnetic duality, if magnetic monopoles (MMs) exist, they would be produced by the same mechanism in a sufficiently strong magnetic field. Unique advantages of the Schwinger mechanism are that its rate can be calculated using semiclassical techniques without relying on perturbation theory, unlike magnetic-monopole production via the Drell-Yan mechanism, and, importantly, that the production of non-pointlike magnetic monopoles is not exponentially suppressed by the finite size of the monopole.
Pb-Pb heavy-ion collisions at the LHC produce the strongest known magnetic fields in the current Universe. This result is arguably the first at the LHC that relies directly on the unprecedented magnetic fields produced in heavy-ion collisions.
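For orientation, the original Schwinger pair-production rate per unit volume in a constant electric field, and its naive electromagnetic dual (the actual monopole calculation includes finite-size and strong-coupling effects), reads
\[
\frac{\Gamma}{V} \;=\; \frac{(eE)^2}{4\pi^3}\sum_{k=1}^{\infty}\frac{1}{k^2}\,e^{-k\pi m^2/(eE)}
\;\;\longrightarrow\;\;
\frac{\Gamma}{V} \;\sim\; \frac{(gB)^2}{4\pi^3}\,e^{-\pi m^2/(gB)} \quad (eE \to gB),
\]
where the Dirac quantization condition $g = 2\pi/e$ makes the magnetic coupling large, which is precisely why the non-perturbative, semiclassical treatment is essential.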
Very detailed measurements of Higgs boson properties and its interactions can be performed with the full Run 2 pp collision dataset collected at 13 TeV by using its decays into bosons, shedding light on the electroweak symmetry breaking mechanism. This talk presents the latest measurements of the Higgs boson coupling properties by the ATLAS experiment in various bosonic decay channels. Results on production mode cross sections, Simplified Template Cross Sections, and their interpretations are presented. Specific scenarios of physics beyond the Standard Model are tested, as well as a generic extension in the framework of the Standard Model Effective Field Theory.
Thanks to the statistics of pp collisions collected by the ATLAS experiment at 13 TeV in LHC Run 2, detailed measurements of Higgs boson properties and its interactions can be performed using its decays into fermions, shedding light on the properties of the Yukawa interactions. This talk presents the latest measurements of the Higgs boson properties by the ATLAS experiment in various fermionic decay channels, including Higgs production in association with top quarks, Simplified Template Cross Sections, and their interpretations. Specific scenarios of physics beyond the Standard Model are tested, as well as a generic extension in the framework of the Standard Model Effective Field Theory.
This talk will cover measurements of Higgs boson differential cross sections in fermionic decay channels and in ttH production, including fiducial differential cross-sections and STXS results.
This talk will cover measurements of Higgs boson differential cross sections in bosonic decay channels, including fiducial differential cross-sections and STXS results.
With the pp collision dataset collected at 13 TeV, detailed measurements of Higgs boson properties can be performed. The Higgs kinematic properties can be measured with increasing granularity, and interpreted to constrain beyond-the-Standard-Model phenomena. This talk presents the measurements of the Higgs boson fiducial and differential cross sections exploiting the Higgs decays into bosons, as well as their combination and interpretations.
The discovery of the Higgs boson ten years ago and the successful measurement of the Higgs boson couplings to third-generation fermions by ATLAS and CMS mark great milestones for HEP. The much weaker coupling to second-generation quarks predicted by the SM makes the measurement of the Higgs-charm coupling much more challenging. With the full Run 2 data collected by the CMS experiment, much progress has been made in constraining this coupling. In this talk, we present the latest results of direct and indirect measurements of the Higgs-charm coupling by the CMS experiment. Prospects for future improvements are also given.
With the full Run 2 pp collision dataset collected at 13 TeV, the interactions of the Higgs boson with third-generation fermions have been established. To understand the Yukawa interaction mechanism, it is crucial to establish the interactions with second-generation fermions. This talk presents the latest searches for Higgs boson decays into second-generation fermions, as well as for other rare Higgs decay modes, including decays into quarkonia plus a photon.
T2K is a long-baseline neutrino oscillation experiment which studies the oscillations of neutrinos from a beam produced using the J-PARC accelerator. The beam neutrinos propagate over 295 km before reaching the Super-Kamiokande detector, where they can be detected after having oscillated. The ability of the experiment to run with either a neutrino or an anti-neutrino beam makes it well suited to study the differences between the oscillations of neutrinos and antineutrinos, in particular to look for a possible violation of CP symmetry in the lepton sector.
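In the two-flavour approximation, the disappearance channel driving the measurement takes the standard form
\[
P(\nu_\mu \to \nu_\mu) \;\simeq\; 1 - \sin^2 2\theta_{23}\,
\sin^2\!\left(1.27\,\frac{\Delta m^2_{32}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]
with $L=295$ km and a beam energy of $\sim$0.6 GeV sitting near the first oscillation maximum; a CP-violating phase $\delta_{CP}$ would manifest itself as a difference between the $\nu_e$ and $\bar{\nu}_e$ appearance probabilities.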
T2K has produced a new analysis of its first 10 years of data, with improved models to describe neutrino interactions and fluxes as well as additional samples for its near and far detector analyses. We will present the results on the measurement of the parameters describing neutrino oscillations obtained with this new analysis.
T2K is undergoing major upgrades, with increased beam power, an upgraded near detector and the loading of Super-Kamiokande with gadolinium. The status of these upgrades and the prospects for future T2K measurements will be discussed. In parallel, T2K has been working on joint analyses with other experiments, and we will give an update on the two joint analyses in preparation, with the Super-Kamiokande and NOvA collaborations respectively.
Neutrino oscillation physics has now entered the precision era. In parallel with needing larger detectors to collect more data, future experiments require a significant reduction of systematic uncertainties with respect to what is currently available. In the neutrino oscillation measurements of the T2K experiment, the systematic uncertainties related to neutrino interaction cross sections are currently dominant. Reducing this uncertainty requires a much improved understanding of neutrino-nucleus interactions. In particular, it is crucial to better understand the nuclear effects that can alter the final-state topology and kinematics of neutrino interactions in ways that can bias neutrino energy reconstruction and therefore bias measurements of neutrino oscillations.
The upgraded ND280 detector, which will consist of a fully active Super Fine-Grained Detector (SuperFGD), two High-Angle TPCs (HA-TPCs) and six TOF planes, will directly confront our limited understanding of neutrino interactions thanks to its full polar-angle acceptance and a much lower proton tracking threshold. Furthermore, neutron-tagging capabilities, in addition to precision timing information, will allow the upgraded detector to estimate neutron kinematics from neutrino interactions. These improvements give access to a much larger kinematic phase space, which in turn allows techniques such as the analysis of transverse kinematic imbalances to place remarkable constraints on the nuclear physics pertinent to T2K analyses.
The SuperFGD, a highly segmented scintillator detector acting as a fully active target for neutrino interactions, is a novel device with dimensions of ~2 x 1.8 x 0.6 m$^3$ and a total mass of about 2 tons. It consists of about 2 million small scintillator cubes, each of 1 cm$^3$. The signal readout from each cube is provided by wavelength-shifting fibres connected to MPPCs. The total number of channels will be ~60,000; the cubes have already been produced and assembled in x-y layers.
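As a simple consistency check of the quoted numbers (arithmetic on the figures above, not an official geometry):
\[
\frac{200 \times 180 \times 60\ \text{cm}^3}{1\ \text{cm}^3\ \text{per cube}} \;\approx\; 2.2\times10^{6}\ \text{cubes}, \qquad
2.2\times10^{6}\ \text{cm}^3 \times 1\ \text{g/cm}^3 \;\approx\; 2.2\ \text{t},
\]
consistent with "about 2 million cubes" and "about 2 tons" once the slightly smaller actual active volume is taken into account.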
The HA-TPCs will be used for 3D track reconstruction, momentum measurement and particle identification. These TPCs, with overall dimensions of 2 x 2 x 0.8 m$^3$, will be equipped with 32 resistive Micromegas modules (ERAM). The thin field cage (3 cm thick, 4% of a radiation length) will be realized with laminated panels of aramid and honeycomb covered with a kapton foil with copper strips. The 34 x 42 cm$^2$ resistive bulk Micromegas will use a 500 k$\Omega$/square DLC foil to spread the charge over the pad plane, each pad being ~1 cm$^2$. The electronics are based on the AFTER chips.
The time-of-flight (TOF) detector will consist of six planes with about 5 m$^2$ total surface area surrounding the SuperFGD and the TPCs. Each plane has been assembled from 2.2 m long cast plastic scintillator bars, with the light collected by arrays of large-area MPPCs at both ends.
In this talk we will present the status of the construction of the different subdetectors towards their installation at J-PARC, expected in the first half of 2023, and we will describe the expected performance of the new detector.
NOvA is a long-baseline neutrino oscillation experiment with a beam and near detector at Fermilab and a far detector 810 km away in northern Minnesota. It features two functionally identical scintillator detectors. By measuring muon neutrino disappearance and electron neutrino appearance as a function of energy in both neutrinos and antineutrinos, NOvA can measure the parameters of the PMNS matrix which describe the known 3-flavor oscillations as well as constrain potential new physics which impacts neutrino oscillations. In this talk, we will present recent results from NOvA on both standard and non-standard oscillations.
The NOvA experiment is a long-baseline accelerator neutrino oscillation experiment. NOvA uses the upgraded NuMI beam from Fermilab and measures electron-neutrino appearance and muon-neutrino disappearance at its Far Detector in Ash River, Minnesota. NOvA pioneered within the neutrino community the use of classification and regression convolutional neural networks with direct pixel-map inputs for particle identification and energy reconstruction. NOvA is also developing new deep-learning techniques to improve interpretability, robustness, and performance for the next generation of analyses. In this talk, I will discuss the development of deep-learning-based reconstruction methods at NOvA.
T2K is a long-baseline neutrino experiment producing a beam of muon neutrinos and antineutrinos at the Japan Proton Accelerator Research Complex (J-PARC) and measuring their oscillation by comparing the measured neutrino rate and spectrum at a near detector complex, located at J-PARC, and at the water-Cherenkov detector Super-Kamiokande, located 295 km away.
Such an intense neutrino beam and the set of near and far detectors offer a unique opportunity to measure neutrino cross-sections for interactions on different nuclei (C and O primarily), for different neutrino energies and flavours. In particular, the combination of near detectors at different off-axis angles enables improved control of the energy dependence of the neutrino cross-section. T2K is also pioneering new analysis techniques which target the exclusive measurement of the neutrino-interaction final state, including the kinematics of its hadronic part. An overview of the most recent T2K cross-section analyses will be presented, including a new measurement of coherent pion production in neutrino and antineutrino scattering on carbon nuclei.
The scintillator-based near detector of the NOvA oscillation experiment sits in the NuMI neutrino beam, and thus has access to unprecedented neutrino scattering datasets. Thanks to the reversible focusing horns, large samples of both neutrino and antineutrino interactions have been recorded. Leveraging these datasets, NOvA can make a variety of double-differential cross-section measurements with world-leading statistical precision to constrain neutrino interaction models and inform oscillation experiments. In this talk we will present recent cross section results for both neutrinos and antineutrinos.
MINER$\nu$A is a neutrino-nucleus interaction experiment in the Neutrinos at the Main Injector (NuMI) beam at Fermilab. With the $\langle E_{\nu}\rangle = 6\,\, \text{GeV}$ Medium Energy run complete and $12 \times 10^{20}$ protons on target delivered in neutrino and antineutrino mode, MINER$\nu$A combines a high-statistics reach with the ability to make precise cross-section measurements in more than one dimension. Analyses of plastic scintillator and nuclear-target data constrain interaction models, providing feedback to neutrino event generators and driving down systematic uncertainties for future oscillation experiments. Specifically, MINER$\nu$A probes both the intrinsic neutrino scattering and the extrinsic nuclear effects which complicate the interactions. Generally, nuclear effects can be separated into initial- and final-state interactions, neither of which is known a priori to the precision needed for oscillation experiments. By fully exploiting the precisely measured final-state particles emerging from the different target materials in the MINER$\nu$A detector, these effects can be accurately probed. In this talk, the newest MINER$\nu$A analyses since the last ICHEP, which encompass a broad physics range, will be presented: inclusive cross-section measurements in the tracker and in situ measurements of the delivered flux, allowing detailed comparisons with generator predictions and control of systematic flux uncertainties, respectively. Moreover, by exploiting the significant statistical reach offered by the large exposure, MINER$\nu$A measures rare processes.
With proton-proton collisions about to restart at the Large Hadron Collider (LHC), the ATLAS detector will double the integrated luminosity the LHC accumulated in the ten previous years of operation. After this data-taking period, the LHC will undergo an ambitious upgrade program to deliver an instantaneous luminosity of $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, allowing more than 3 ab$^{-1}$ of data to be collected at $\sqrt{s}=$14 TeV. This unprecedented data sample will allow ATLAS to perform several precision measurements to constrain the Standard Model (SM) in yet unexplored phase space, in particular in the Higgs sector, which is only accessible at the LHC. The price to pay for such a rich data sample is upgrading the detector to cope with challenging experimental conditions, including huge levels of radiation and pile-up events about a factor of 5 higher than at present. The ATLAS upgrade comprises a completely new all-silicon tracker with extended rapidity coverage that will replace the current inner tracker detector, and a redesigned trigger and data acquisition system for the calorimeters and muon systems allowing the implementation of a free-running readout system. In addition, a new subsystem, the High Granularity Timing Detector, will aid the track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. A final ingredient, relevant to almost all measurements, is a precise determination of the delivered luminosity with systematic uncertainties below the percent level. This challenging task will be achieved by combining information from several detector systems using different and complementary techniques.
This presentation, starting from the HL-LHC physics goals, will describe the status of the ATLAS detector upgrade and the main results obtained with prototypes, giving a synthetic yet global view of the whole upgrade project.
The increase of the particle flux at the HL-LHC, with instantaneous luminosities up to $L \simeq 7.5 \times 10^{34}$ cm$^{-2}$ s$^{-1}$, will have a severe impact on the ATLAS detector performance. The forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) will be installed in front of the LAr end-cap calorimeters for pile-up mitigation and luminosity measurement. The HGTD is a novel detector introduced to augment the new all-silicon Inner Tracker in the pseudo-rapidity range from 2.4 to 4.0, adding the capability to measure charged-particle trajectories in time as well as space. Two double-sided layers of silicon sensors will provide precision timing information for minimum-ionizing particles, with a resolution of 30 ps per track, in order to assign each particle to the correct vertex. Readout cells have a size of 1.3 x 1.3 mm$^2$, leading to a highly granular detector with 3.7 million channels. Low Gain Avalanche Detector technology has been chosen as it provides suitable gain to reach the large signal-to-noise ratio needed. The requirements and overall specifications of the HGTD will be presented, as well as the technical design and the project status. The ongoing R&D effort to study the sensors, the readout ASIC, and the other components, supported by laboratory and test-beam results, will also be presented.
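The per-track figure follows from combining independent per-hit measurements along the track (idealized scaling; actual performance depends on radiation damage over the detector lifetime):
\[
\sigma_{\rm track} \;\simeq\; \frac{\sigma_{\rm hit}}{\sqrt{N_{\rm hit}}},
\]
so, for illustration, $\sigma_{\rm hit}\approx 45$ ps with $N_{\rm hit}=2$ hits from the two double-sided layers yields $\sigma_{\rm track}\approx 30$ ps.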
The Upgrade II of the LHCb experiment is proposed for the long shutdown 4 of the LHC. The upgraded detector will operate at a maximum luminosity of $1.5x10^{34}$cm$^{-2}$s$^{-1}$, with the aim of integrating ~300 fb$^{-1}$ through the lifetime of the high-luminosity LHC (HL-LHC). The collected data will allow to fully exploit the flavour-physics opportunities of the HL-LHC, probing a wide range of physics observables with unprecedented accuracy. The accomplishment of this ambitious programme will require that the current detector performance is maintained at the maximum expected pile-up of ~40, and even improved in certain specific domains. To meet this challenge, it is foreseen to replace all of the existing spectrometer components to increase the granularity, reduce the amount of material in the detector and to exploit the use of new technologies including precision timing of the order of a few tens of picoseconds. In this talk the physics goals of the project will be reviewed, as well as the detector design and technology options which will allow to meet the desired specifications.
The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive upgrade program to prepare for the challenging conditions of the High-Luminosity LHC (HL-LHC). A new timing detector in CMS will measure minimum ionizing particles (MIPs) with a time resolution of ~40-50 ps per hit and coverage up to $|\eta|=3$. The precision time information from this MIP Timing Detector (MTD) will reduce the effects of the high levels of pileup expected at the HL-LHC and will bring new and unique capabilities to the CMS detector. The endcap region of the MTD, called the endcap timing layer (ETL), must endure high fluences, motivating the use of thin, radiation-tolerant silicon sensors with fast charge collection. As such, the ETL will be instrumented with silicon low-gain avalanche diodes (LGADs), covering the high-radiation pseudo-rapidity region $1.6 < |\eta| < 3.0$. The LGADs will be read out with the ETROC readout chip, which is being designed for precision timing measurements. We will present the extensive developments and progress made for the ETL detector, from sensors to readout electronics, mechanical design, and plans for system testing. In addition, we will present test beam results, which demonstrate the desired time resolution.
The MIP Timing Detector (MTD) is a new sub-detector planned for the Compact Muon Solenoid (CMS) experiment at CERN, aimed at maintaining the excellent particle identification and reconstruction efficiency of the CMS detector during the High Luminosity LHC (HL-LHC) era. The MTD will provide new and unique capabilities to CMS by measuring the time-of-arrival of minimum ionizing particles with a resolution of 30-40 ps for MIP signals at a rate of 2.5 Mhit/s per channel at the beginning of HL-LHC operation. The precision time information provided by the MTD will reduce the effects of the high levels of pileup expected at the HL-LHC by enabling the use of 4D reconstruction algorithms. The central barrel timing layer (BTL) of the MTD uses a sensor technology consisting of LYSO:Ce scintillating crystal bars coupled to SiPMs, one at each end of each bar, read out with TOFHIR front-end ASICs. We present an overview of the MTD BTL design and show test-beam results demonstrating the achievement of the target time resolution of about 30 ps.
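As a side note on why dual-ended readout is advantageous (a back-of-the-envelope illustration, not part of the abstract): for a hit at position $x$ along a bar of length $L$ with effective light propagation speed $v_{\rm eff}$, the two SiPM times are $t_1 = t_0 + x/v_{\rm eff}$ and $t_2 = t_0 + (L-x)/v_{\rm eff}$, so their average
\[
t_{\rm avg} = \frac{t_1+t_2}{2} = t_0 + \frac{L}{2\,v_{\rm eff}}, \qquad \sigma_{t_{\rm avg}} \simeq \frac{\sigma_{\rm single}}{\sqrt{2}},
\]
is independent of the (unknown) hit position and, for uncorrelated single-end resolutions, improves on them by a factor of $\sqrt{2}$.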
The intriguing phenomena emerging in high-density QCD matter are being widely studied in the heavy-ion program at the LHC and will be understood more deeply during the high-luminosity LHC (HL-LHC) era. The CMS experiment is undergoing its Phase II upgrade towards the HL-LHC era. A new timing detector is proposed with a timing resolution of 30 ps for minimum-ionizing particles (MIPs). The MIP timing detector (MTD) will provide particle identification (PID) capability via time-of-flight (TOF) over a large acceptance, covering up to $|\eta|<3$. Combining the MTD with the other new sub-detectors, a tracker with acceptance $|\eta|<4$ and high-granularity calorimeters with acceptance covering $|\eta|<5$, will enable deep studies of high-density QCD matter in ultra-relativistic heavy-ion collisions. In this presentation, the performance of a broad range of measurements in the heavy-ion program using TOF-PID will be discussed. These include the (3+1)D evolution of heavy-flavor quarks, the QGP medium response to high-$p_\mathrm{T}$ parton energy loss at wide jet-cone angles, collectivity in small systems, fluctuations and transport of initially conserved charges, and light-nuclei physics.
We report measurements of the branching fractions $\mathcal{B}(\bar{B}^0\to D^{*+}\pi^{-})$ and $\mathcal{B}(\bar{B}^0\to D^{*+}K^{-})$ using $772\times 10^{6}$ $B$-meson pairs recorded by the Belle experiment at the KEKB asymmetric-energy $e^{+}e^{-}$ collider. The measurements provide a precise test of QCD factorization. We also report studies of the branching fractions $B^+ \to D_s^{(*)}(\eta,K_S) / D^+(\eta,K_S)$ and of the time-dependent CP asymmetry in $B \to \eta_c K_S$. The latter measurement provides information about $\sin{2\phi_1}$.
The latest studies of beauty meson decays to open charm final states from LHCb are presented. Several first observations and branching fraction measurements using Run 1 and Run 2 data samples are shown. These decay modes will provide important spectroscopy information and inputs to other analyses.
The tree-level determination of the CKM angle $\gamma$ is a standard-candle measurement of CP violation in the Standard Model. The latest LHCb results from measurements of CP violation using beauty to open charm decays are presented. These include measurements using the full LHCb Run 1+2 data sample and the latest LHCb $\gamma$ and charm-mixing combination.
The investigation of $B$-meson decays to charmed and charmless hadronic final states is a keystone of the Belle II physics program. It allows for theoretically reliable and experimentally precise constraints on the CKM Unitarity Triangle fit, and is sensitive to effects from non-SM physics. Results on branching ratios, direct CP-violating asymmetries, and polarization of various charmless $B$ decays are presented, with particular emphasis on those for which Belle II will have unique sensitivity. New results from combined analyses of Belle and Belle II data to determine the CKM angle $\phi_3$ (or $\gamma$) are also presented. Perspectives on the precision achievable on the CKM angles and on the so-called “$K\pi$ puzzle” are also discussed.
The ATLAS experiment has performed measurements of B-meson rare decays proceeding via suppressed electroweak flavour changing neutral currents, and of mixing and CP violation in the neutral $B_s$ meson system. This talk will focus on the latest results from the ATLAS collaboration, such as rare processes $B^0_s \to \mu \mu$ and $B^0_d \to \mu \mu$, and $CP$ violation in $B^0_s \to J/\psi\ \phi$ decays. In the latter, the Standard Model predicts the $CP$ violating mixing phase, $\phi_s$, to be very small and its SM value is very well constrained, while in many new physics models large $\phi_s$ values are expected. The latest measurements of $\phi_s$ and several other parameters describing $B^0_s \to J/\psi\ \phi$ decays will be reported.
The presence of charmonium in the final state of $B$ decays is a very clean experimental signature that allows the efficient collection of large samples of these decays. In addition, decays of beauty hadrons to final states with charmonium resonances proceed mainly through $b\to c\bar{c}q$ tree-level transitions. The negligible penguin pollution makes these decays excellent probes of Standard Model quantities such as the $B^0$ and $B^0_s$ mixing phases. In this work we present the most recent LHCb results on these decays.
The HERAPDF2.0 ensemble of parton distribution functions (PDFs) was introduced in 2015. The final stage is presented: a next-to-next-to-leading-order (NNLO) analysis of the HERA data on inclusive deep inelastic $ep$ scattering together with jet data as published by the H1 and ZEUS collaborations. A perturbative QCD fit, simultaneously determining $\alpha_s(M_Z^2)$ and the PDFs, was performed with the result $\alpha_s(M_Z^2) = 0.1156 \pm 0.0011~{\rm (exp)}~^{+0.0001}_{-0.0002}~{\rm (model+parameterisation)}~\pm 0.0029~{\rm (scale)}$. The PDF sets of HERAPDF2.0Jets NNLO were determined with separate fits using two fixed values of $\alpha_s(M_Z^2)$, $\alpha_s(M_Z^2)=0.1155$ and $0.118$, the latter value having already been chosen for the published HERAPDF2.0 NNLO analysis based on HERA inclusive DIS data only. The different sets of PDFs are presented, evaluated and compared. The consistency of the PDFs determined with and without the jet data demonstrates the consistency of HERA inclusive and jet-production cross-section data. The inclusion of the jet data reduced the uncertainty on the gluon PDF. Predictions based on the PDFs of HERAPDF2.0Jets NNLO give an excellent description of the jet-production data used as input.
We discuss recent developments related to the latest release of the NNPDF family of global analyses of parton distribution functions: NNPDF4.0. This PDF set expands the NNPDF3.1 determination with 44 new datasets, mostly from the LHC. We derive a novel methodology through hyperparameter optimisation, leading to an efficient fitting algorithm built upon stochastic gradient descent. Theoretical improvements in the PDF description include a systematic implementation of positivity and integrability constraints required by sum rules. We validate our methodology by means of closure tests and “future tests” (i.e. tests of backward and forward data compatibility), and assess its stability, specifically upon changes of the PDF parametrization basis. We compare NNPDF4.0 with its predecessor as well as with other recent global fits, and study its phenomenological implications for representative collider observables. We discuss recent results of related studies building upon the open-source NNPDF framework.
We present fits to determine parton distribution functions (PDFs) using a diverse set of measurements from the ATLAS experiment at the LHC, including inclusive W and Z boson production, ttbar production, W+jets and Z+jets production, inclusive jet production and direct photon production. These ATLAS measurements are used in combination with deep-inelastic scattering data from HERA. Particular attention is paid to the correlation of systematic uncertainties within and between the various ATLAS data sets and to the impact of model, theoretical and parameterisation uncertainties.
With a detector instrumented in the forward region, the Z boson events collected in the LHCb acceptance can be used to probe the proton structure in a phase-space region not accessible to the other LHC experiments. In this talk, the latest Z boson production measurements will be presented, as well as the measurement of $Z+c$-jet events for probing intrinsic charm. The potential contributions of LHCb data to global Parton Distribution Function fits will be demonstrated via these analyses, including the sea quarks at larger $x$, the transverse-momentum-dependent Parton Distribution Functions, and the intrinsic charm in the proton.
The QCD strong coupling (alpha_s) and the parton distribution functions (PDFs) of the proton are fundamental ingredients for phenomenology at high-energy facilities such as the Large Hadron Collider (LHC).
It is therefore of crucial importance to estimate any theoretical uncertainties associated with them.
Both alpha_s and PDFs obey their own renormalisation-group equations (RGEs) whose solution determines their scale evolution.
Although the kernels that govern these RGEs have been computed to very high perturbative precision, they are not exactly known.
In this contribution, we present a procedure that allows us to assess the uncertainty on the evolution of alpha_s and PDFs due to our imperfect knowledge of their respective evolution kernels.
Inspired by transverse-momentum and threshold resummation, we introduce additional scales, which we dub resummation scales, that can be varied to estimate the uncertainty on the evolution of alpha_s and PDFs at any scale.
As a test case, we consider inclusive deep-inelastic-scattering structure functions in a region relevant for the extraction of PDFs.
We study the effect of varying these resummation scales and compare it to the usual renormalisation and factorisation scale variations.
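As a much simplified illustration of the kind of effect being probed (a toy proxy for kernel-truncation uncertainty, not the resummation-scale procedure of this contribution), one can compare the evolution of alpha_s with the RGE kernel truncated at different perturbative orders:
\begin{verbatim}
# Toy proxy for the uncertainty on alpha_s evolution coming from the
# truncation of the RGE kernel (beta function). Illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

NF = 5                                  # active flavours (assumed)
B0 = (33 - 2 * NF) / (12 * np.pi)       # 1-loop beta coefficient
B1 = (153 - 19 * NF) / (24 * np.pi**2)  # 2-loop beta coefficient

def rge(log_mu2, alpha, order):
    """d alpha_s / d ln(mu^2), truncated at the given loop order."""
    beta = -B0 * alpha**2
    if order >= 2:
        beta -= B1 * alpha**3
    return beta

MZ2, ALPHA_MZ = 91.1876**2, 0.118       # reference point

for q in (10.0, 100.0, 1000.0):         # target scales in GeV
    a1, a2 = (
        solve_ivp(rge, (np.log(MZ2), np.log(q**2)), [ALPHA_MZ],
                  args=(order,), rtol=1e-8).y[0, -1]
        for order in (1, 2)
    )
    print(f"Q = {q:6.0f} GeV: 1-loop {a1:.5f}  2-loop {a2:.5f}  "
          f"shift {abs(a1 - a2):.5f}")
\end{verbatim}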
We present EKO and yadism, a new DGLAP evolution code and a new DIS code respectively, both able to provide PDF-independent operators for the fast evaluation of predictions.
They support a wide range of physics and computational features, with a Python API giving access to the individual ingredients (e.g. strong-coupling evolution) and file-based output for language-agnostic consumption of the results. Both are interfaced with a third library for grid storage, PineAPPL.
Both projects have been developed as open, modular, and extensible frameworks, encouraging community contributions and inspection.
A first application of the evolution code will be presented, unveiling the intrinsic charm content of the proton.
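To illustrate the operator-based workflow in the simplest terms (a toy sketch of the concept only: this is not the EKO API, and the kernel below is a placeholder, not a genuine DGLAP splitting function), the costly evolution computation is performed once and stored as an operator that acts on any input PDF by a matrix product:
\begin{verbatim}
# Toy sketch of a PDF-independent evolution operator: compute the
# operator once, then evolve any PDF by a matrix product.
# Not the EKO API; the "kernel" below is a stand-in, not real DGLAP.
import numpy as np

XGRID = np.geomspace(1e-4, 1.0, 50)     # interpolation grid in x

def compute_operator(q0sq, qsq):
    """Expensive step, done once: matrix E with
    f(x_i, Q^2) = sum_j E[i, j] * f(x_j, Q0^2)."""
    t = np.log(qsq / q0sq)
    dist = np.subtract.outer(np.log(XGRID), np.log(XGRID))
    op = np.exp(-dist**2 / (0.1 * t + 1e-6))   # placeholder kernel
    return op / op.sum(axis=1, keepdims=True)

E = compute_operator(q0sq=1.65**2, qsq=1e4)   # storable on disk

def toy_pdf(x):                               # any input PDF, chosen later
    return x**-0.2 * (1 - x)**3

f_evolved = E @ toy_pdf(XGRID)                # evolution is now cheap
print(f_evolved[:5])
\end{verbatim}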
The LHeC and the FCC-he are the cleanest, highest-resolution microscopes that the world can build in the near future. Through a combination of neutral and charged currents and heavy-quark tagging, they will unfold the parton structure of the proton with full flavour decomposition and unprecedented precision. In this talk we will present the most recent studies on the determination of proton parton densities. We will also present results on the determination of the strong coupling constant through the measurement of total and jet cross sections. Finally, we will comment on diffraction, both inclusive and exclusive, as a tool to obtain more differential information on the proton.
Reference: P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Precision measurements of the production cross-sections of W/Z bosons at the LHC provide important tests of perturbative QCD and information about the parton distribution functions of quarks within the proton. This talk will present recent differential Z+jets results in extreme phase-space regions with high-$p_\mathrm{T}$ jets. The measurement is compared to state-of-the-art NNLO theoretical predictions. If available, we will also present measurements of Z decays to a pair of leptons and a photon, which are a sensitive test of the kinematics of final-state QED radiation.
The large amount of data collected by the CMS experiment at the CERN LHC provides unprecedented opportunities to perform precision measurements of the standard model, which allow an accurate validation of the theory and might potentially reveal hints of new physics. Thanks to their leptonic decays, W and Z bosons guarantee a clean final state, and their relatively high production cross section permits the measurement of their properties with low systematic uncertainties and usually negligible statistical uncertainty. This talk presents an overview of recent precision measurements of electroweak bosons’ properties and cross sections, carried out by CMS using Run 2 data. In addition, prospects for future physics results expected from the High-Luminosity phase of the LHC, fostered by the planned detector upgrade, are also discussed.
The LHC produces a vast sample of top-quark pairs and single top quarks. Measurements of the inclusive top-quark production rates at the LHC have reached a precision of several percent and test advanced next-to-next-to-leading-order predictions in QCD. Differential measurements in several observables are important to test SM predictions and to improve Monte Carlo generator predictions. In this contribution, comprehensive measurements of top-quark-antiquark pair and single-top-quark production are presented that use data recorded by the ATLAS experiment in the years 2015-2018 during Run 2 of the LHC. A recent result from the 5 TeV operation of the LHC is also included.
Recent measurements of inclusive and differential top-quark pair production cross sections are presented, using data collected by the CMS detector. The differential cross sections are measured multi-differentially as a function of various kinematic observables of the top quarks, jets, and leptons in the event final state. Results are compared to precise theory calculations, including, for the first time, MiNNLO+PS.
The LHCb experiment covers the forward region of proton-proton collisions, and it can improve the current electroweak landscape by studying W and Z bosons in this phase space, complementary to ATLAS and CMS. Thanks to the excellent detector performance, fundamental parameters of the Standard Model can be precisely measured by studying the properties of the electroweak bosons. In this talk an overview of the wide LHCb electroweak measurement program will be presented. This includes the measurement of the W boson mass and the measurement of the $Z \rightarrow \mu^+ \mu^-$ angular coefficients.
The Precision Proton Spectrometer (PPS) is a subdetector of CMS introduced for the LHC Run 2, which provides a powerful tool for advancing BSM searches. The talk will discuss the key features of proton reconstruction (PPS alignment and optics calibrations) and the validation chain with physics data (using exclusive dilepton events); finally, new results on exclusive diphoton, $t\bar{t}$, Z+X, and diboson production explored with PPS will be presented, illustrating the unique sensitivity which can be achieved using proton tagging.
The Compact Linear Collider (CLIC) collaboration has presented a project implementation plan for construction of a 380 GeV e+e- linear collider 'Higgs and top factory' for the era beyond HL-LHC, that is also upgradable in stages to 3 TeV. The CLIC concept is based on high-gradient normal-conducting accelerating structures operating at X-band (12 GHz) frequency. Towards the next European Strategy Update a Project Readiness Report will be prepared, and the main studies towards this report will be presented.
We present the CLIC accelerator concept and the latest status of the project design and performance goals. Updated studies of the luminosity performance have made it possible to consider an increased luminosity for the 380 GeV stage. Studies are ongoing for further improvements.
We report on high-power tests of X-band structures using test facilities across the collaboration, as well as CLIC system verification studies and the technical development of key components of the accelerator. Key elements are the X-band components, and accelerator components important for nano beam performances.
We also present developments applying X-band technology to more compact accelerators for numerous applications, e.g. X-ray FELs and medicine. A rapidly increasing number of installations are adopting the technology, providing important design, testing and verification opportunities and motivating industrial developments.
Finally, the many efforts to make CLIC a sustainable accelerator with minimal power and energy consumption will be described. Design optimisation, RF power-efficiency improvements and low-power component development will allow a 380 GeV installation to operate at around 50% of CERN's energy consumption today.
In the Superconducting rf Test Facility (STF) at the High Energy Accelerator Research Organization (KEK), cool-down tests of the STF-2 cryomodules and beam operations have been carried out since 2019.
STF-2 cryomodules are the same type as those for the International Linear Collider (ILC). As a result of beam operation so far, the averaged acceleration gradient of 9 cavities reached 33 MV/m, which satisfies the specification of the ILC (31.5 MV/m). This is an important milestone in demonstrating technology to realize the ILC.
Since anomalous emittance growth downstream of the accelerating cavities was seen in the previous beam operation in April 2021, we visually inspected the inside of the cavities and confirmed that there was no obstacle that could be the source of this emittance growth. After checking that the performance of the accelerating cavities was almost the same as in the previous beam operation, we investigated various candidates that could cause this anomalous emittance growth in the beam operation.
Towards the long-pulse (740 us), high-current (5.8 mA) beam of the ILC specification, beam operation with a pulse length of about 100 us was demonstrated without loss. By implementing feedforward control to suppress the acceleration-gradient drop due to beam loading, we could perform successful beam operation without loss. This is an important step towards beam operation with a pulse length equivalent to that of the ILC specification at STF-2.
We will present the outline of the cool-down test and the beam operation at STF.
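As a schematic of the beam-loading compensation mentioned above (a toy model with made-up numbers, not the actual STF-2 low-level-RF implementation): without compensation the cavity gradient droops in proportion to the beam current during the pulse, while a feedforward term that ramps the drive by the expected loading keeps it flat:
\begin{verbatim}
# Toy model of feedforward compensation of beam loading in an RF cavity.
# All numbers are illustrative, not actual STF-2/ILC LLRF parameters.
import numpy as np

DT = 1e-6                # 1 us time step
N = 740                  # 740 us beam pulse, ILC-like length
G_TARGET = 31.5          # target gradient [MV/m]
I_BEAM = 5.8e-3          # beam current [A]
K_LOAD = 7.0e5           # toy loading coefficient [(MV/m)/(A s)]

t = np.arange(N) * DT
droop_rate = K_LOAD * I_BEAM        # gradient loss rate while beam is on

g_no_ff = G_TARGET - droop_rate * t # constant drive: gradient droops

ff_drive = droop_rate * t           # ramp drive by the *expected* loading
g_ff = g_no_ff + ff_drive           # compensated gradient stays flat

print(f"droop without FF: {droop_rate * t[-1]:.2f} MV/m")
print(f"residual with FF: {abs(g_ff - G_TARGET).max():.2e} MV/m")
\end{verbatim}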
The machine-detector interface (MDI) issues are among the most complicated and challenging topics at the Circular Electron Positron Collider (CEPC). A comprehensive understanding of the MDI issues is decisive for achieving the optimal overall performance of the accelerator and detector. The CEPC machine will operate at different beam energies, from 45.5 GeV up to 120 GeV, with an instantaneous luminosity increasing from $3 \times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ at the highest energy to $3.2\times 10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ or even higher at the lowest energy.
A flexible interaction-region design is required to accommodate the large beam-energy range. The design has to provide the high luminosity desirable for physics studies while keeping the radiation backgrounds tolerable for the detectors. This requires a careful balance of the requirements from the accelerator and detector sides.
In this talk, the latest design of the CEPC MDI based on the current CEPC accelerator and detector design and parameters will be presented:
1. The design of the beam pipe will be presented, which must satisfy several constraints. In the central region (z = ±10 cm), the pipe should be placed as close as possible to the interaction point and with minimal material budget, to allow the precise determination of track impact parameters, while still staying far enough away to avoid interference from the beam backgrounds. The material and coolants must be carefully chosen based on heat-load calculations. In the forward region, the beam pipe must be made of suitable materials to conduct away the heat deposited in the interaction region and to shield the detectors from the beam backgrounds.
2. The estimation and mitigation of beam-induced backgrounds will be presented. A detailed simulation covering the main contributions from synchrotron radiation, pair production, and off-momentum beam particles has been performed. The suppression and mitigation schemes have also been studied.
3. The flexible layout of the CEPC IR and the engineering efforts for several key components, such as the position of the LumiCal, the design of the Final Focusing system, and the cryostat chamber, will be presented.
We will also discuss our future plans towards the CEPC TDR.
The Future Circular electron-positron Collider, FCC-ee, is designed to provide unprecedented precision for particle-physics experiments from the Z-pole to above the top-pair threshold. This demands a precise knowledge of the center-of-mass energy (ECM) and collision boosts at all four interaction points and all operation energies. The average beam energies are foreseen to be determined using resonant depolarization, with a precision better than 100 keV. This requires transversely polarized non-colliding pilot bunches. While wigglers are foreseen to improve the polarization time, misalignment and field errors can limit the achievable polarization level, and might alter the relationship between the resonant depolarization frequency and the beam energies. Strong synchrotron-radiation losses, from 40 MeV per turn at the Z-pole up to 10 GeV per turn at the highest beam energy of 182.5 GeV, lead to different ECM values and boosts at each interaction point, and beamstrahlung enhances this asymmetry further. Other sources of energy shifts stem from collision offsets and must be controlled. A first evaluation was made in 2019 for the European Strategy. Further studies are ongoing in the framework of the Feasibility Study to be delivered in 2025. First promising results on energy calibration and polarization are presented here.
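For context, resonant depolarization exploits the standard relation between the spin tune and the beam energy (a textbook relation, quoted here for orientation):
\[
\nu_s = a_e \gamma = \frac{a_e E_b}{m_e c^2} \quad\Longrightarrow\quad E_b = \frac{\nu_s\, m_e c^2}{a_e}, \qquad a_e \simeq 1.15965\times 10^{-3},
\]
so at the Z-pole ($E_b \approx 45.6$ GeV) the spin tune is $\nu_s \approx 103.5$, and a 100 keV beam-energy precision corresponds to determining $\nu_s$ to about $2\times 10^{-4}$.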
In the context of the FCC IS European study, which investigates the feasibility of a 100 km circular $e^{+}e^{-}$ collider for future high-energy physics research, we present the status of the High Energy Booster (HEB) ring of the proposed $e^{+}e^{-}$ option. The HEB is the ring accelerating the electrons and positrons up to the nominal energy before injection into the collider. In order to perform precision measurements of the Z, W and H bosons, as well as of the top quark, unprecedented luminosities are required. To reach this goal and to fill the collider, it is mandatory to continually top-up inject the beams with an emittance comparable to that of the collider and with a bunch-charge variation below a few percent.
The main challenges of the HEB are the rapid cycling time, needed to reach the collider equilibrium emittance, and the minimum beam energy injected into the booster that still allows stable operation.
From the ring-optics point of view, one of the issues is that the final energy in the booster depends on the collider physics case: the optimum optics for one energy may differ from that for another. For the low final energies (Z, W), the characteristic time to reach the equilibrium emittance may be greater than the cycling time.
The other challenge is the injection energy. At injection, the dipole magnetic field is so low that the field quality is hardly reproducible from one cycle to another.
We present the status of the optics design of the HEB, and the impact of the magnetic field imperfections on the dynamic aperture at injection.
The high luminosity foreseen in the future electron-positron circular collider (FCC-ee) necessitates very intense multi-bunch colliding beams with very small transverse beam sizes at the collision points. This requires emittances comparable to those of modern synchrotron light sources, while at the same time the stored beam currents should be close to the best values achieved in the last generation of particle factories. This combination of opposing requirements represents a major challenge: a high beam quality must be preserved while avoiding degradation of the machine performance. As a consequence, careful studies of the collective effects and of solutions for mitigating the foreseen instabilities are required. In this contribution we discuss the current status of these studies.
The LiteBIRD satellite (Lite satellite for the study of B-mode polarization and Inflation from cosmic background Radiation Detection) will perform a definitive measurement of the Cosmic Microwave Background polarization anisotropies on large and intermediate angular scales. Its sensitivity and wide frequency coverage in 15 bands will allow unprecedented accuracy in the measurement and foreground cleaning of the B-mode polarization signal, and a cosmic-variance-limited measurement of the E-mode polarization. Such measurements will have deep implications for cosmology and fundamental physics. The determination of the energy scale of inflation and the constraints on its dynamics from the B-mode polarization will shed light on one of the most important phases in the history of the Universe and the fundamental physics it implies. LiteBIRD measurements will deepen our knowledge of reionization, reducing the largest uncertainty in post-Planck standard cosmology, and will allow the exploration of some of the main targets of cosmology, such as large-scale anomalies, parity-violating phenomena such as cosmic birefringence, and magnetism in the early Universe. I will describe the LiteBIRD mission and detail its expected scientific outcomes.
We discuss the imprints of a cosmological redshift-dependent pseudoscalar field on the rotation of Cosmic Microwave Background (CMB) linear polarization generated by a coupling $ g_\phi \phi F^{\mu\nu} \tilde F_{\mu \nu}$.
We show how either phenomenological or theoretically motivated redshift dependences of the pseudoscalar field, such as those in models of Early Dark Energy, Quintessence or axion-like dark matter, lead to CMB polarization and temperature-polarization power spectra with a multipole dependence that goes beyond the widely adopted approximation in which the redshift dependence of the linear polarization angle is neglected. Because of this multipole dependence, the isotropic birefringence effect due to a general coupling $\phi F^{\mu\nu} \tilde F_{\mu \nu}$ is not degenerate with a multipole-independent polarization rotation angle, which could instead be connected to a systematic miscalibration angle. By taking the multipole dependence into account, we calculate the parameters of these phenomenological and theoretical redshift dependences of the pseudoscalar field that can be detected by future CMB polarization experiments, on the basis of a $\chi^2$ analysis for a Wishart likelihood.
As a final example of our approach, we use MCMC to compute the minimal coupling $g_\phi$ in Early Dark Energy that could be detected by future experiments, with or without marginalizing over a constant rotation angle.
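For orientation, the widely adopted constant-angle approximation referred to above mixes the primordial spectra as (standard textbook relations, quoted for reference):
\[
\begin{aligned}
C_\ell^{EE,\mathrm{obs}} &= C_\ell^{EE}\cos^2(2\alpha) + C_\ell^{BB}\sin^2(2\alpha),\\
C_\ell^{BB,\mathrm{obs}} &= C_\ell^{EE}\sin^2(2\alpha) + C_\ell^{BB}\cos^2(2\alpha),\\
C_\ell^{EB,\mathrm{obs}} &= \tfrac{1}{2}\big(C_\ell^{EE} - C_\ell^{BB}\big)\sin(4\alpha),\\
C_\ell^{TE,\mathrm{obs}} &= C_\ell^{TE}\cos(2\alpha), \qquad C_\ell^{TB,\mathrm{obs}} = C_\ell^{TE}\sin(2\alpha);
\end{aligned}
\]
a redshift-dependent field effectively promotes $\alpha$ to a multipole-dependent angle, which is what breaks the degeneracy with a constant miscalibration angle.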
Parity-violating extensions of Maxwell electromagnetism induce a rotation of the linear polarization plane of photons during propagation. This effect, known as cosmic birefringence, impacts Cosmic Microwave Background (CMB) observations by producing a mixing of $E$ and $B$ polarization modes which is otherwise null in the standard scenario. Such an effect is naturally parametrized by a rotation angle which can be written as the sum of an isotropic component $\alpha_0$ and an anisotropic one $\delta\alpha(\hat{\mathbf{n}})$. We have computed angular power spectra and bispectra involving $\delta\alpha$ and the CMB temperature and polarization maps. In particular, contrary to what happens for the cross-spectra, we have shown that even in the absence of primordial cross-correlations between the anisotropic birefringence angle and the CMB maps, there exist non-vanishing three-point correlation functions carrying signatures of parity-breaking physics. Furthermore, we find that such angular bispectra survive even in a regime of purely anisotropic cosmic birefringence. These bispectra represent an additional observable for studying cosmic birefringence and its parity-violating nature beyond power-spectrum analyses. Moreover, we have estimated that among all the possible birefringent bispectra, $\langle\delta\alpha\, TB\rangle$ and $\langle\delta\alpha\,EB\rangle$ are the ones containing the largest signal-to-noise ratio. Taking the cosmic birefringence signal to be at the level of current constraints, we show that these bispectra are within reach of future CMB experiments, such as LiteBIRD.
In this talk, I will present a Neural Network-improved version of DarkHistory, a code package that self-consistently computes the early universe temperature, ionization levels, and photon spectral distortion due to exotic energy injections. We use simple multilayer perceptron networks to store and interpolate complicated photon and electron transfer functions, previously stored as large tables. This improvement allows DarkHistory to run on small computers without heavy memory and storage usage while preserving the physical predictions to high accuracy. It also enables us to explore adding more parametric dependence in the future to include additional physical processes and spatial resolution.
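The core idea of replacing tabulated transfer functions with a compact neural interpolator can be sketched as follows (an illustrative stand-in only: the architecture, inputs, and bin counts below are assumptions, not DarkHistory's actual design):
\begin{verbatim}
# Sketch: replace a large tabulated transfer function T(E_in, z, x_e)
# with a small MLP interpolator. Illustrative only -- not DarkHistory's
# actual network architecture, inputs, or training pipeline.
import torch
import torch.nn as nn

N_OUT = 500   # number of output spectral bins (assumed)

mlp = nn.Sequential(              # simple multilayer perceptron
    nn.Linear(3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_OUT),        # one output per spectral bin
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(inputs, table_rows):
    """inputs: (batch, 3) = (log E_in, log(1+z), x_e);
    table_rows: (batch, N_OUT) rows of the original table."""
    opt.zero_grad()
    loss = loss_fn(mlp(inputs), table_rows)
    loss.backward()
    opt.step()
    return loss.item()

# After training, the multi-gigabyte table is replaced by megabytes of
# weights, and the transfer function can be queried at arbitrary points:
spectrum = mlp(torch.tensor([[2.0, 1.3, 0.1]]))  # hypothetical query
\end{verbatim}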
We study the production of relativistic relics, also known as dark radiation, in the early Universe and precisely compute their current contribution to the effective number of extra neutrino species. One dark radiation candidate is the QCD axion produced from the primordial bath in the early Universe. We consider the KSVZ and DFSZ axion models and investigate axion production at different scales. The dark radiation from the QCD axion leaves an imprint on the observed cosmic microwave background that can be measured by the CMB-S4 experiment.
Electric charge quantization is a long-standing question in particle physics. While fractionally charged particles (millicharged particles hereafter) have typically been thought to preclude the possibility of Grand Unified Theories (GUTs), well-motivated dark-sector models have been proposed that predict the existence of millicharged particles while preserving the possibility of unification. Such models can contain a rich internal structure, providing candidate particles for dark matter. A number of experiments have searched for millicharged particles ($\chi$s), but in the parameter space of charge ($Q$) and mass ($m_\chi$), the region $m_\chi > 0.1$ GeV/$\rm{c}^2$ and $Q < 10^{-3}e$ is largely unexplored.
The SUB-Millicharge ExperimenT (SUBMET) has been proposed to search for sub-millicharged particles using 30 GeV proton fixed-target collisions at J-PARC. The detector is composed of two layers of stacked scintillator bars and PMTs, and is proposed to be installed 280 m from the target. The main background is expected to be random coincidences between the two layers due to dark counts in the PMTs, which can be reduced significantly using the timing of the proton beam. With $\rm{N}_{\rm POT} = 5 \times 10^{21}$, the experiment provides sensitivity to $\chi$s with charges down to $7\times10^{-5}e$ for $m_\chi < 0.2$ GeV/$\rm{c}^2$ and $10^{-3}e$ for $m_\chi < 1.6$ GeV/$\rm{c}^2$. This is a regime largely uncovered by previous experiments.
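The dominant background quoted above follows the textbook accidental-coincidence rate, so a quick estimate is easy to set up (the rates, window, and duty factor below are illustrative assumptions, not SUBMET parameters):
\begin{verbatim}
# Back-of-the-envelope accidental coincidence rate between two layers.
# All numbers are illustrative assumptions, NOT actual SUBMET values.
R1 = 1.0e4      # dark-count rate of layer 1 [Hz] (assumed)
R2 = 1.0e4      # dark-count rate of layer 2 [Hz] (assumed)
TAU = 10e-9     # coincidence window [s] (assumed)

r_acc = 2 * R1 * R2 * TAU          # standard accidental-rate formula
print(f"accidental rate: {r_acc:.3f} Hz")

# Requiring hits to fall inside the beam-bunch timing windows
# suppresses the accidentals by the duty factor:
DUTY = 1e-3                        # in-time fraction (assumed)
print(f"in-time accidental rate: {r_acc * DUTY:.2e} Hz")
\end{verbatim}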
The Heavy Photon Search (HPS) experiment was conceived to search for a light new vector boson A' that is kinetically mixed with the photon, with a kinetic mixing parameter $\epsilon^2 > 10^{-10}$. A vector boson with a mass in the 20-220 MeV/c$^2$ range could also mediate interactions between the Standard Model and light thermal dark matter. HPS searches for visible signatures of heavy photons in electroproduction reactions on a fixed tungsten target, exploiting the electron beam provided by the JLab CEBAF machine, which can reach a maximum energy of 12 GeV. These studies of the low-mass region complement the exploration of weakly coupled (and possibly new) physics presently performed at the LHC and other high-energy machines.
The HPS search is based on a two-fold approach. First, due to their small coupling to the electric charge, heavy photons should be produced in bremsstrahlung-like processes and could therefore be observed by HPS in their e+e- decay channel, over a huge QED background. Second, HPS can also perform precise decay-length measurements, which provide information on long-lived bosons featuring small couplings.
After the completion of two engineering runs in 2015 and 2016, HPS is now in full swing, with the analysis of the datasets collected in 2016, 2019 and 2021 presently ongoing.
In this talk, an overview of the results achieved so far by HPS will be presented.
Today the investigation of the nature of dark matter, its origin, and the way it interacts with ordinary matter plays a crucial role in fundamental science. Several particle-physics experiments at accelerators are searching for hidden-particle signals, contributing to setting more stringent limits on the characteristics of dark matter.
The Positron Annihilation into Dark Matter Experiment (PADME), ongoing at the Laboratori Nazionali di Frascati of INFN, is looking for hidden-particle signals by studying the missing-mass spectrum of single-photon final states resulting from positron annihilation on the electrons of a fixed target. PADME is expected to reach a sensitivity of up to 10$^{-6}$ on $\epsilon^2$, the kinetic mixing coefficient representing the coupling of a low-mass dark photon ($m < 23.7$ MeV) to ordinary photons.
By measuring the cross-section of the process e$^+$e$^-$ $\rightarrow \gamma \gamma$ at $\sqrt{s} = 21$ MeV and comparing it with the SM expectation, it is also possible to set limits on hidden-particle decays to photon pairs. In this talk, details of the PADME measurement of the two-photon annihilation cross-section will be illustrated, together with its implications for dark matter studies.
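The missing-mass technique mentioned above rests on the standard invariant built from the beam positron, the (at-rest) target electron, and the detected photon,
\[
M_{\rm miss}^2 = \left(p_{e^+} + p_{e^-} - p_{\gamma}\right)^2 ,
\]
so that a process such as $e^+e^- \to \gamma A'$ would appear as a peak at $M_{\rm miss}^2 = m_{A'}^2$ on top of the smooth background.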
We report on the search for visible decays of exotic mediators in data taken in "beam-dump" mode with the NA62 experiment.
The NA62 experiment can be run as a "beam-dump experiment" by removing the kaon production target and moving the upstream collimators into a "closed" position.
In 2021, more than $10^{17}$ protons on target were collected in this way during a week-long data-taking campaign by the NA62 experiment.
Based on past experience, the upstream beam-line magnets were configured to sizeably reduce the background induced by 'halo' muons.
We report on the analysis of these data, with a particular emphasis on dark photon models.
The search for Dark Matter (DM) is one of the hottest topics of modern physics. Despite the various astrophysical and cosmological observations proving its existence, its elementary properties remain unknown to date. In addition to gravity, DM could interact with ordinary matter through a new force, mediated by a new vector boson (dark photon, heavy photon or A'), kinetically mixed with the Standard Model photon. The NA64 experiment at CERN fits in this scenario, aiming to produce DM particles using the 100 GeV SPS electron beam impinging on a thick active target (an electromagnetic calorimeter). In this setup the DM production signature consists of a large observed missing energy, defined as the difference between the energy of the incoming electron and the energy measured in the calorimeter, coupled with null activity in the downstream veto systems. Recently, following the growing interest in positron-annihilation mechanisms for DM production, the NA64 collaboration has performed preliminary studies with the aim of running the experiment with a positron beam, as planned within the POKER (POsitron resonant annihilation into darK mattER) project.
This talk will present the latest NA64 results and future prospects, reporting on the progress towards the positron-beam run and discussing the sensitivity of the experiment to DM models alternative to the dark photon.
BESIII has collected 2.5 billion $\psi(2S)$ events and 10 billion $J/\psi$ events. This huge data sample provides an excellent opportunity to search for new physics. We report the search for the decay $J/\psi\to\gamma + \text{invisible}$, which is predicted by the next-to-minimal supersymmetric standard model. We also report the first search for the invisible decay of the $\Lambda$, which is predicted by the mirror-matter model and could explain the $4\sigma$ discrepancy between the beam-method and bottle-method measurements of the neutron lifetime. A light Higgs boson $A^0$ is also searched for in radiative $J/\psi$ decays.
Hidden particles can help explain many important hints for new physics, but the large variety of viable hidden-sector models poses a challenge for the model-independent interpretation of hidden-particle searches. We present techniques, published in 2105.06477 and 2203.02229, that can be used to compute model-independent rates for hidden-sector-induced transitions. Adopting an effective field theory (EFT) approach, we develop a framework for constructing portal effective theories (PETs) that couple Standard Model (SM) fields to generic hidden particles. We also propose a method to streamline the computation of hidden-particle production rates by factorizing them into i) a model-independent SM contribution, and ii) an observable-independent hidden-sector contribution. Showcasing these techniques, we compute a model-independent transition rate for charged-kaon decays into a charged lepton and an arbitrary number of hidden particles. By factorizing the rate, a single form factor is found to parametrize the impact of general hidden sectors. This is used to re-interpret an existing search for HNLs in NA62 data, which yields model-independent constraints on the rate of producing arbitrary hidden particles.
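Schematically, the factorization described above takes the form (our schematic paraphrase of the structure, not the exact expressions of the cited papers)
\[
\Gamma\big(K^+ \to \ell^+ + X_{\rm hid}\big) \;=\; \int \mathrm{d}s \;\; \hat\Gamma_{\rm SM}(s)\; \rho_{\rm hid}(s),
\]
where $s$ is the invariant mass squared carried away by the hidden states, $\hat\Gamma_{\rm SM}$ encodes the model-independent SM part, and the hidden-sector information enters only through the single spectral function $\rho_{\rm hid}$.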
The nature of Dark Matter (DM) is one of the greatest puzzles of modern particle physics and cosmology. Dark Matter characterisation requires a systematic and consistent approach to the DM theory space. We propose a first complete classification of minimal consistent Dark Matter models, which provides the missing link between experiments and top-down models. Consistency is achieved by imposing renormalisability and invariance under the full Standard Model symmetries. We apply this paradigm to fermionic dark multiplets with up to one mediator. Our work highlights the presence of unexplored viable models, and paves the way for the ultimate systematic hunt for the Dark Matter particle. Based on e-Print: 2203.03660.
Dark sectors containing light vectors or scalars may feature sizeable self-interactions between dark matter (DM) particles and are therefore of high phenomenological interest. Self-interacting dark matter appears to reproduce the observed galactic structure better than collisionless DM and may offer a dynamical explanation for the scaling relations governing galactic halos all the way up to clusters of galaxies. On top of being desirable from the phenomenological and observational points of view, the possibility of a richer dark sector, comprising more than one particle, is fairly common in many DM models.
Furthermore, the existence of light mediators, i.e. mediators with masses much smaller than that of the actual DM particles, may affect the DM dynamics in multiple ways.
Most notably, whenever DM particles are slowly moving with non-relativistic velocities, light mediators can induce bound states in the dark sector in the early universe and/or in the dense environment of present-day haloes. As for the above-threshold states, the effect of repeated mediator exchange manifests itself in the so-called Sommerfeld enhancement for an attractive potential.
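For reference, in the simplest case of an attractive Coulomb-like potential with dark coupling $\alpha_D$, the enhancement takes the familiar closed form (a standard result, quoted for orientation)
\[
S(v) \;=\; \frac{\pi\alpha_D/v}{1 - e^{-\pi\alpha_D/v}},
\]
which reduces to $S \simeq \pi\alpha_D/v$ for $v \ll \alpha_D$, so annihilation rates of slow-moving DM can be strongly enhanced.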
In this talk we review state-of-the-art effective field theory techniques, both at zero and at finite temperature, that allow for the determination of the rates crucial for an accurate determination of the DM energy density: bound-state formation and dissociation, pair annihilation and bound-state decays. Depending on the model, bound-state effects can be substantial, and rather different combinations of DM masses and couplings are then found to reproduce the observed energy density. This calls for a reassessment of DM phenomenology due to the interplay between the model parameters that fix the relic density and guide the experimental strategies.
We address and discuss various DM models, comprising the case of QCD-colored co-annihilating partners and fermionic and scalar DM with self-interactions induced by different mediators (scalar, pseudoscalar, vector and axial vector). We explore and report on the present reach of complementary experimental searches, including the LHC and XENON, and on future prospects for the DARWIN experiment and the Cherenkov Telescope Array (CTA).
Using the example of the currently widely studied t-channel simplified model with a colored mediator, I will demonstrate the importance of considering non-perturbative effects such as the Sommerfeld effect and bound-state formation for accurately predicting the relic abundance and hence correctly inferring the viable model parameters. For instance, I will highlight that parameter space thought to be excluded by direct-detection experiments and LHC searches in fact remains viable, and illustrate that long-lived particle searches and bound-state searches at the LHC can play a crucial role in probing such a model. Finally, I will demonstrate how future direct-detection experiments will be able to close almost all of the remaining windows for freeze-out production, making it a highly testable scenario.
New ''dark'' fermionic fields charged under a confining dark gauge group ($\text{SU}(N)$ or $\text{SO}(N)$) can come as embeddings in SU(5) multiplets to explain dark matter (DM). These fermions would form bound states due to the confining nature of the dark gauge group, and such dark baryons could be a good neutral DM candidate, stable thanks to a conserved dark baryon number. The DM relic abundance sets the dark confinement scale to be of $\mathcal{O}(100)~\text{TeV}$. Previous works require the mass $m$ of the light fields forming the baryonic DM (where $m$ is below the dark confinement scale $\Lambda_{\text{DC}}$) to be far below the unification (GUT) scale, assuming that their GUT partners in $\text{SU}(5)$ representations have GUT-scale masses. In our work, focusing on the role of these heavy GUT states, we find that the dark fermions cannot come in almost degenerate GUT multiplets.
We further find that cosmological constraints from Big Bang Nucleosynthesis, in addition to unification requirements, allow only certain values for the masses of these GUT fermions.
However, these mass values give too large a contribution to the DM relic abundance.
To evade this, the mass of the GUT states must lie below the reheating temperature.
In general, we find that the heavy dark GUT states impact both the cosmological evolution and grand unification. Our study clarifies under which conditions both aspects of the theory are realistic.
We examine the dynamics of quarks and gauge fields in QCD and QED interactions in the lowest-energy states with approximate cylindrical symmetry, in a flux-tube model. Using the action integral, we separate the (3+1)D dynamics into transverse and longitudinal degrees of freedom and solve the resulting equations of motion. We find that there may be localized and stable states of QCD and QED collective $q \bar q$ excitations, showing up as particles whose masses depend on the QCD and QED coupling constants and the flux-tube radius [1]. Along with the known stable collective QCD excitations of the quark-QCD-QED system, there may be stable QED collective $q\bar q$ excitations, which are good candidates for the X17 particle [2], the E38 particle [3], and the anomalous soft photons [4,5] observed recently in the region of many tens of MeV, as discussed in [6].
[1] A. Koshelkin and C. Y. Wong, {\it Dynamics of quarks and gauge fields in the lowest-energy states in QCD and QED}, arXiv:2111.14933.
[2] A. J. Krasznahorkay et al., {\it Observation of anomalous internal pair creation in $^8$Be: a possible indication of a light, neutral boson}, Phys. Rev. Lett. 116, 042501 (2016), arXiv:1504.01527.
[3] K. Abraamyan et al., {\it Check of the structure in photon pairs spectra at the invariant mass of about 38 MeV}, EPJ Web of Conferences 204, 08004 (2019).
[4] A. Belogianni et al. (WA102 Collaboration), {\it Observation of a soft photon signal in excess of QED expectations in $pp$ interactions}, Phys. Lett. B548, 129 (2002).
[5] J. Abdallah et al. (DELPHI Collaboration), {\it Evidence for an excess of soft photons in hadronic decays of Z$^0$}, Eur. Phys. J. C47, 273 (2006), arXiv:hep-ex/0604038.
[6] C. Y. Wong, {\it Open string QED meson description of the X17 particle and dark matter}, JHEP 08 (2020) 165, arXiv:2001.04864.
We suggest a new class of models, Fermionic Portal Vector Dark Matter (FPVDM), which extends the Standard Model (SM) with an $SU(2)_D$ dark gauge sector. FPVDM requires neither kinetic mixing nor a Higgs portal; it is based instead on a vector-like (VL) fermionic doublet which couples the dark sector to the SM sector through a Yukawa interaction. The FPVDM model provides a vector Dark Matter (DM) candidate whose stability is ensured by a $Z_2$ odd parity. Multiple realisations are allowed depending on the VL partner and the scalar potential. In this talk, we discuss an example of a minimal FPVDM realisation with only a VL top partner and no mixing between the SM and new scalar sectors. We also present the model's implications for DM direct- and indirect-detection experiments, the relic density, and collider searches.
The Standard Model effective field theory (SMEFT) is one of the preferred approaches for studying particle physics in the present scenario. The dimension-six SMEFT operators are the most relevant ones and have been studied in various works. The renormalization-group evolution equations of these operators are available in the literature and make it possible to confront the SMEFT with combined experimental information gathered across different energy scales. However, the dimension-six operators are not the dominant contribution for all observables, and some of these operators are only loop-generated when UV theories are matched to the SMEFT. Moreover, for relatively low values of the cut-off scale of the SMEFT, contributions from dimension-eight operators cannot be neglected.
In this work, we present the renormalization of the bosonic sector of the dimension-eight operators by the dimension-eight operators generated at tree level in the matching of weakly coupled UV theories to the SMEFT. These operators appear in the positivity constraints, which determine the signs of certain combinations of Wilson coefficients based on the unitarity and analyticity of the S-matrix. These constraints are remarkably significant, as any experimental evidence of their violation would indicate the invalidity of the EFT approach, signalling, for example, the existence of lighter degrees of freedom below the cut-off scale of the EFT. These restrictions can also be taken into account when defining priors for fits aiming at constraining the SMEFT parameter space.
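Schematically, such positivity constraints descend from a twice-subtracted dispersion relation for the forward $2\to2$ amplitude (a textbook form, quoted here for orientation rather than taken from this contribution):
\[
\left.\frac{\mathrm{d}^2 \mathcal{A}(s)}{\mathrm{d}s^2}\right|_{s\to 0} \;=\; \frac{2}{\pi}\int_{s_0}^{\infty} \mathrm{d}s'\, \frac{\mathrm{Im}\,\mathcal{A}(s')}{s'^3} \;>\; 0,
\]
where positivity of $\mathrm{Im}\,\mathcal{A}$ follows from unitarity via the optical theorem; the combinations of dimension-eight Wilson coefficients that match onto this second derivative inherit the positive sign.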
Due to large scale separations, matching is an essential and laborious computational step in the comparison of high-energy new physics models to experimental data.
Matchete is a Mathematica package that automates the one-loop matching from any generic ultraviolet (UV) model to a low-energy effective field theory (EFT) including, but not limited to, SMEFT. The program takes a UV Lagrangian as input, integrates out heavy degrees of freedom using functional methods, and returns the EFT Lagrangian. The output is further reduced to a minimal basis using Fierz identities, integration by parts, simplification of Dirac and group structures, and field redefinitions.
After reviewing the theory of functional matching, I will demonstrate the capabilities of the package with a concrete example.
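To give the flavour of such a matching computation (a textbook tree-level example of our choosing, not necessarily the one shown in the talk): integrating out a heavy real scalar singlet $S$ with $\mathcal{L} \supset -\tfrac{1}{2}M^2 S^2 - A\,S\,|H|^2$ via its equation of motion, $S \simeq -A|H|^2/M^2$, leaves
\[
\mathcal{L}_{\rm EFT} \;\supset\; \frac{A^2}{2M^2}\,|H|^4 \;+\; \mathcal{O}\!\left(\frac{1}{M^4}\right).
\]
Tools like Matchete automate this step, including the far more involved one-loop contributions and the subsequent reduction to a minimal operator basis.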
I would like to present an intriguing new perspective on such fundamental questions as 1) the origin of the gauge interactions in the Standard Model (SM), and 2) the origin of the replication of the quark, lepton and neutrino families and of their fundamental properties observed in Nature. These questions can be addressed by tying together, in a common framework, flavour physics and Grand Unification, which are typically treated on different footings. Furthermore, I will elaborate on New Physics scenarios that are expected to emerge at phenomenologically relevant energy scales as by-products of the trinification-based flavoured GUT, which naturally explains neutrino masses and the observed hierarchies in the fermion sectors of the SM, as well as the emergence of the observed flavour anomalies.
In this talk, we present the construction of Effective Field Theories (EFTs) in which a chiral fermion, charged under both gauge and global symmetries, is integrated out. These symmetries can be spontaneously broken, and the global ones might also be anomalous. This setting typically serves to study the structure of low-energy axion EFTs, where the anomalous global symmetry can be $U(1)_{PQ}$ and the local symmetries can be the SM electroweak chiral gauge symmetries. Spontaneous symmetry breaking generates Goldstone bosons while the chiral fermions become massive. In this setup, we emphasise that the derivative couplings of the Goldstone bosons to fermions lead to severe divergences and ambiguities in one-loop computations.
We first present the path-integral formalism for building the EFTs resulting from integrating out massive chiral fermions. Secondly, within this functional formalism, we show how to resolve the ambiguities by adapting the anomalous Ward identities to the EFT context, thus enforcing gauge invariance of the results. Our methodology provides a generic, consistent and clean determination of the Wilson coefficients of EFT operators involving the axion and gauge bosons. Finally, we present the application of our technique to axion models and compute non-intuitive couplings between the axion and the massive SM gauge fields that arise when decoupling massive chiral fermions.
Reference: arXiv:2112.00553 (https://inspirehep.net/literature/1981947).
We present a novel benchmark application of a quantum algorithm to Feynman loop integrals. The two on-shell states of a Feynman propagator are identified with the two states of a qubit and a quantum algorithm is used to unfold the causal singular configurations of multiloop Feynman diagrams. To identify such configurations, we exploit Grover's algorithm for querying multiple solutions over unstructured datasets, introducing a suitable modification to deal with topologies in which the number of causal states to be identified is nearly half of the total number of states. The output of the quantum algorithm in IBM Quantum and QUTE Testbed simulators is used to bootstrap the causal representation in the loop-tree duality of representative multiloop topologies.
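A minimal statevector sketch of the Grover iteration underlying such a search (generic amplitude amplification in plain Python; the oracle that actually encodes causal singular configurations, and the modification needed when nearly half the states are marked, are beyond this illustration):
\begin{verbatim}
# Minimal Grover amplitude-amplification demo on a statevector.
# Toy example: a few basis states of a 4-qubit register are "marked"
# (standing in for causal configurations) and get amplified. The real
# causal oracle and the half-marked modification are not reproduced.
import numpy as np

N_QUBITS = 4
DIM = 2**N_QUBITS
MARKED = {3, 5, 12}                    # toy marked states (assumed)

oracle = np.ones(DIM)
oracle[list(MARKED)] = -1.0            # oracle: phase-flip marked states

psi = np.full(DIM, 1.0 / np.sqrt(DIM)) # uniform superposition

theta = np.arcsin(np.sqrt(len(MARKED) / DIM))
n_iter = max(1, int(np.floor(np.pi / (4 * theta))))  # optimal count

for _ in range(n_iter):
    psi = oracle * psi                 # oracle call
    psi = 2 * psi.mean() - psi         # diffusion: invert about the mean

p_marked = sum(abs(psi[k])**2 for k in MARKED)
print(f"{n_iter} Grover iterations -> P(marked) = {p_marked:.3f}")
\end{verbatim}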
We consider Nielsen-Olesen vortices (abelian Higgs model in $2+1$ dimensions) under Einstein gravity in an AdS$_3$ background. We find numerically non-singular solutions characterized by three parameters: the cosmological constant $\Lambda$, the winding number $n$ and the vacuum expectation value (VEV) labeled by $v$. The mass (ADM mass) of the vortex is expressed in two ways: one involves subtracting the value of two metrics asymptotically and the other is expressed as an integral over matter fields. The latter shows that the mass has an approximately $n^2 v^2$ dependence and our numerical results corroborate this. We observe that as the magnitude of the cosmological constant increases the core of the vortex becomes slightly smaller and the mass increases. We then embed the vortex under gravity in a Minkowski background and obtain numerical solutions for different values of Newton's constant. There is a smooth transition from the non-singular origin to an asymptotic conical spacetime with angular deficit that increases as Newton's constant increases. We end by stating that the well-known logarithmic divergence in the energy of the vortex in the absence of gauge fields can be seen in a new light with gravity: it shows up in the metric as a $2+1$ Newtonian logarithmic potential leading to a divergent ADM mass.
In heavy-ion collisions, the quark-gluon plasma, a new state of matter in which quarks and gluons are no longer confined within hadrons, is created. High-energy partons created in the initial collision are observed to lose energy through interactions with the plasma. The details of how the energy is transported away from the partons are not fully understood and are of great interest. Jet spectra measured with different resolution parameters are among the simplest observables and yet provide highly non-trivial insight. In this talk, we report new results on the inclusive jet spectra from CMS with the latest high-statistics data, including results on anti-$k_\mathrm{T}$ jet spectra spanning the widest range of resolution parameters ever employed in heavy-ion collisions. The accuracy of the result is greatly improved compared to the previous publication on large-area jets up to R = 1.0. These results shed light on the different mechanisms of parton interactions with the medium.
Several new features have recently been observed in high-multiplicity small collision systems that are reminiscent of observations attributed to the creation of a quark-gluon plasma (QGP) in Pb-Pb collisions. These include long-range angular correlations on the near and away side of two-particle correlations, non-vanishing second-order Fourier coefficients in multiparticle cumulant studies, and the baryon-to-meson ratio enhancement in high-multiplicity pp and p-Pb collisions. However, jet-quenching effects in small systems have not yet been observed, and quantifying or setting limits on the magnitude of jet quenching in small systems is a key element in understanding the limits of QGP formation. In this talk we present a search for jet-quenching effects in pp collisions as a function of event multiplicity, based on two jet observables: the inclusive $p_\mathrm{T}$-differential jet cross section, and the semi-inclusive yield of jets recoiling from a high-$p_\mathrm{T}$ hadron. Both measurements are carried out differentially in event multiplicity, which varies the size of the collision system. Jets are reconstructed from charged particles using the anti-$k_\mathrm{T}$ algorithm, and the $R$-dependent inclusive jet cross section is compared to pQCD calculations. To search for jet-quenching effects, the shape of the inclusive jet yield in different multiplicity intervals is compared to the one obtained in minimum-bias (MB) events. The jet yield increases as a function of charged-particle multiplicity, similarly to what is observed in the soft sector based on transverse spherocity. In the semi-inclusive analysis, the recoil-jet acoplanarity distributions are measured in high-multiplicity (HM) and MB events. The acoplanarity distributions in HM events exhibit a marked suppression and broadening compared to the corresponding distributions obtained from MB events. Their origin is elucidated by comparison to model calculations, with potential implications for the larger LHC small-systems program.
In this work, we introduce both gluon and quark degrees of freedom for describing partonic cascades inside the medium. We present numerical solutions of the set of coupled evolution equations, with splitting kernels calculated for static, exponential and Bjorken-expanding media, to arrive at medium-modified parton spectra for quark- and gluon-initiated jets, respectively. We discuss novel scaling features of the partonic spectra between different types of media. Next, we study the inclusive jet $R_{AA}$ by including phenomenologically driven combinations of quark and gluon fractions inside a jet. In addition, we study the effect of nPDFs as well as of vacuum-like emissions on the jet $R_{AA}$. Differences among the estimated values of the quenching parameter for different types of medium expansion are noted. Next, the impact of the expansion of the medium on the rapidity dependence of the jet $R_{AA}$ as well as on the jet $v_2$ is studied in detail. Finally, we present qualitative results on the sensitivity of these observables to the time of the onset of quenching for the Bjorken profile. All calculated quantities are compared with recent ATLAS data.
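For orientation, a minimal sketch of the type of evolution equation involved, in the commonly used gluon-only simplification (our conventions; the coupled quark-gluon system of this talk generalizes the kernel to the full set of $q\to qg$, $g\to gg$, $g\to q\bar{q}$ splittings and medium profiles):
$\frac{\partial D(x,\tau)}{\partial \tau} = \int_0^1 dz\, \mathcal{K}(z)\left[\sqrt{\tfrac{z}{x}}\, D\!\left(\tfrac{x}{z},\tau\right)\theta(z-x) - \tfrac{z}{\sqrt{x}}\, D(x,\tau)\right], \qquad \mathcal{K}(z) = \frac{[1-z(1-z)]^{5/2}}{[z(1-z)]^{3/2}},$
where $D(x,\tau) = x\, dN/dx$ and $\tau$ is the rescaled evolution time.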
The sPHENIX detector at the BNL Relativistic Heavy Ion Collider (RHIC) is currently under construction and on schedule for first data in early 2023. Built around the BaBar superconducting solenoid, the central detector consists of a silicon pixel vertexer, a silicon strip detector with single-event timing resolution, a compact TPC, novel EM calorimetry, and two layers of hadronic calorimetry. The plan is to use the combination of electromagnetic calorimetry, hermetic hadronic calorimetry, precision tracking, and the ability to record data at high rates without trigger bias to make precision measurements of heavy flavor, Upsilon and jet production to probe the Quark Gluon Plasma (QGP) formed in heavy-ion collisions. These measurements will have a kinematic reach that not only overlaps those performed at the LHC, but extends them into a new, low-pT regime. sPHENIX will significantly expand the observables and kinematic reach of these measurements at RHIC and provide a comparison with LHC measurements in the overlapping kinematic region. The physics program, its potential impact, and recent detector developments will be discussed in this talk.
The LHeC and the FCC-he will measure DIS cross sections and the partonic structure of protons and nuclei in an unprecedented range of small $x$. In this kinematic region the non-linear dynamics expected in the high-energy regime of QCD should become relevant at small coupling. In this talk we will demonstrate the unique capability of these high-energy colliders for unravelling dynamics beyond fixed-order perturbation theory, proving (or disproving) the existence of the non-linear, saturation regime of QCD. This is enabled through simultaneous measurements, of similarly high precision and range, of $ep$ and $eA$ collisions, which will eventually disentangle non-linear parton-parton interactions from nuclear environment effects.
Reference: P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Automated perturbative computations of cross sections for hard processes in asymmetric hadronic/nuclear $A+B$ collisions at next-to-leading order (NLO) in $\alpha_s$ will offer a wide range of applications, such as more robust predictions for new experimental programs, the phenomenology of heavy-ion collisions, and the interpretation of LHC and RHIC data. Such a goal can be achieved using MadGraph5_aMC@NLO [1], a well-established tool for the automatic generation of matrix elements and event generation for high-energy-physics processes in elementary collisions, such as decays and ${2\rightarrow n}$ scatterings.
We have extended the capabilities of MadGraph5_aMC@NLO by implementing computations for asymmetric collisions, for example $p+Pb$, $\pi+Al$ or $Pb+W$ reactions. These new capabilities will soon be made available via the EU Virtual Access NLOAccess (https://nloaccess.in2p3.fr).
In my talk, I will present the objectives of the NLOAccess initiative, the implementation of asymmetric computations in MadGraph5_aMC@NLO along with the evaluation of nuclear PDF and scale uncertainties, our cross checks against previous results and codes (e.g. Helac-Onia [2], FEWZ [3,4]), and predictions for $p+Pb$ collisions at the LHC for charm, bottom and top quark production, as well as more differential observables now made predictable with these new capabilities.
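Schematically (our notation), the asymmetric mode amounts to convolving two different parton densities in the standard collinear factorization formula,
$\sigma_{AB} = \sum_{a,b} \int dx_1\, dx_2\; f_{a/A}(x_1,\mu_F)\, f_{b/B}(x_2,\mu_F)\; \hat{\sigma}_{ab}(x_1 x_2 s; \mu_R, \mu_F),$
with, e.g., a proton PDF for $A$ and a nuclear PDF for $B$; the nPDF and scale uncertainties then follow from the error sets and from the variation of $\mu_R$ and $\mu_F$.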
References:
[1] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, “The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations,” JHEP 07 (2014) 079, arXiv:1405.0301 [hep-ph].
[2] H.-S. Shao, HELAC-Onia 2.0: an upgraded matrix-element and event generator for heavy quarkonium physics, Comput. Phys. Commun. 198 (2016) 238, doi:10.1016/j.cpc.2015.09.011, arXiv:1507.03435.
[3] R. Gavin, Y. Li, F. Petriello and S. Quackenbush, FEWZ 2.0: A code for hadronic Z production at next-to-next-to-leading order, Comput. Phys. Commun. 182 (2011) 2388 [1011.3540].
[4] S. Quackenbush, R. Gavin, Y. Li and F. Petriello, W physics at the LHC with FEWZ 2.1, Comput. Phys. Commun. 184 (2013) 209, doi:10.1016/j.cpc.2012.09.005.
We discuss measurements of the CP properties of the Higgs boson with the CMS detector, exploiting both Higgs boson production and decay, as well as searches for non-standard-model CP contributions and anomalous couplings in general.
Studies of the CP properties of the Higgs boson in various production modes and decay channels are presented. Limits on the mixing of CP-even and CP-odd Higgs states are set by exploiting the properties of diverse final states.
This talk presents the most recent measurements of Higgs boson mass and width by the ATLAS experiment exploiting the Higgs boson decays into two photons or four leptons, and using the full Run 2 dataset of pp collisions collected at 13 TeV at the LHC.
With the data collected in Run-2, the Higgs boson can be studied in several production processes using a wide range of decay modes. Combining data in these different channels provides a broad picture of the Higgs boson coupling strengths to SM particles. This talk will cover the latest combination of Higgs boson production and decay modes at CMS to measure the Higgs boson couplings.
With the full Run 2 pp collision dataset collected at 13 TeV, very detailed measurements of Higgs boson coupling properties can be performed using a variety of final states, identifying several production modes and its decays into bosons and fermions and probing different regions of phase space with increasing precision. These measurements can then be combined to exploit the strengths of each channel, thus providing the most stringent global measurement of the Higgs coupling properties. This talk presents the latest combination of Higgs boson coupling measurements by the ATLAS experiment, discussing results in terms of production modes, branching fractions and Simplified Template Cross Sections, as well as their interpretations in the framework of kappa modifiers to the strengths of the various coupling and decay properties.
With the full Run 2 pp collision dataset collected at 13 TeV by the ATLAS experiment, it is now possible to perform detailed measurements of Higgs boson properties in many production and decay modes. In many cases, novel experimental techniques were developed to allow for these measurements. This talk presents a review of a representative selection of such novel techniques, including: embedding of simulated objects in data; special object weighting techniques to maximize statistical precision; developing special trigger, reconstruction, and identification algorithms for non-standard objects; special treatments of sources of two-point theory systematic uncertainties; special developments in likelihood-based fitting techniques; various innovative machine-learning approaches.
We assess the performance of different jet-clustering algorithms, in the presence of different resolution parameters and reconstruction procedures, in resolving fully hadronic final states emerging from the chain decay of the discovered Higgs boson into pairs of new identical Higgs states, the latter in turn decaying into bottom-antibottom quark pairs. We show that, at the Large Hadron Collider (LHC), both the efficiency of selecting the multi-jet final state and the ability to reconstruct from it the masses of the Higgs bosons (potentially) present in an event sample depend strongly on the choice of acceptance cuts, jet-clustering algorithm as well as its settings. Hence, we indicate the optimal choice of the latter for the purpose of establishing such a benchmark Beyond the SM (BSM) signal. We then repeat the exercise for a heavy Higgs boson cascading into two SM-like Higgs states, obtaining similar results.
Three mysteries stand after the discovery of the Higgs boson: (i) the origin of the masses of the neutrinos; (ii) the origin of the baryon asymmetry in the universe; and (iii) the nature of dark matter. The FCC-ee provides an exciting opportunity to address these mysteries with the discovery of heavy neutral leptons (HNLs, or N), in particular using the large sample ($5\cdot 10^{12}$) of Z bosons produced in early running at the Z resonance, via the production process $e^+e^- \to Z \to \nu N$. The expected very small mixing between light and heavy neutrinos results in very long lifetimes for the HNL and in a spectacular signal topology. Although the final state in this reaction appears to be charge-insensitive, it is nevertheless possible to distinguish the Dirac vs Majorana nature of the neutrinos by a variety of methods that will be discussed. A Majorana nature could have considerable implications for the generation of the baryon asymmetry of the Universe.
Accelerator-based neutrino experiments require precise understanding of their neutrino flux, which originates from meson decays in flight. These mesons are produced in hadron-nucleus interactions in extended targets. The cross-sections of the primary and secondary hadronic processes involved are generally poorly measured, and as a result hadron production is the leading systematic uncertainty source on neutrino flux prediction at all major experimental neutrino facilities. The NA61/SHINE multi-particle spectrometer at the CERN SPS has a dedicated program to make precise measurements of hadron production processes for neutrino beams, and has taken data on processes important for both T2K and the Fermilab long-baseline neutrino program. This talk will present the newest measurements of hadron production cross-sections at multiple energies and targets, as well as more specialized measurements using replicas of neutrino beam production targets. NA61/SHINE is completing a major detector upgrade, and physics measurements will begin in June 2022; over the next four years NA61/SHINE will perform a new program of measurements dedicated to neutrino physics including the production of mesons from a replica of the LBNF/DUNE target. Finally, a possible new low-energy beam facility for NA61/SHINE and its physics program will be discussed.
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a Gadolinium-loaded water Cherenkov detector located in the Booster Neutrino Beam at Fermilab. One of its primary physics goals is to measure the final state neutron multiplicity of neutrino-nucleus interactions. This measurement of the neutron yield as a function of the outgoing lepton kinematics will be useful to constrain systematic uncertainties and reduce biases in future long-baseline oscillation and cross-section experiments. ANNIE is also a testbed for innovative new detection technologies. It will make use of pioneering photodetectors called Large Area Picosecond Photodetectors (LAPPDs) with better than 100 picosecond time resolution, which will enhance its reconstruction capabilities and demonstrate the feasibility of this technology as a new tool in high energy physics. This talk will present the status of the experiment in terms of the overall progress, the deployment of the first LAPPD and an overview of recently taken beam and calibration data. Additional future R&D efforts and analysis opportunities involving the use of the novel detection medium of Water-based Liquid Scintillators will be briefly highlighted.
The main source of systematic uncertainty on neutrino cross-section measurements at the GeV scale originates from the poor knowledge of the initial flux. The goal of cutting this uncertainty down to 1% can be achieved through the monitoring of charged leptons produced in association with neutrinos, by properly instrumenting the decay region of a conventional narrow-band neutrino beam. Large-angle muons and positrons from kaon decays are measured by a sampling calorimeter on the decay tunnel walls (tagger), while muon stations after the hadron dump can be used to monitor the neutrino component from pion decays. This instrumentation can provide full control of both the muon and electron neutrino fluxes at all energies. Furthermore, the narrow momentum width (<10%) of the beam provides an $\mathcal{O}(10\%)$ measurement of the neutrino energy on an event-by-event basis, thanks to its correlation with the radial position of the interaction at the neutrino detector. The ENUBET project was funded by the ERC in 2016 to prove the feasibility of such a monitored neutrino beam and, since 2019, ENUBET has been a CERN Neutrino Platform experiment (NP06/ENUBET).
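The energy-radius correlation follows from two-body decay kinematics: for the dominant $\pi^+ \to \mu^+\nu_\mu$ component, a neutrino emitted at angle $\theta$ from a parent pion of energy $E_\pi$ and boost $\gamma$ has approximately
$E_\nu \simeq \frac{(1 - m_\mu^2/m_\pi^2)\, E_\pi}{1 + \gamma^2\theta^2} \approx \frac{0.43\, E_\pi}{1 + \gamma^2\theta^2},$
so in a narrow-band beam the radial position of the interaction vertex at the detector, which fixes $\theta$, determines $E_\nu$ up to the beam momentum spread.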
ENUBET will present the final results of the ERC project at ICHEP, together with the complete assessment of the feasibility of its concept. The breakthrough achieved by the project is the design of a horn-less beamline that allows for a 1% measurement of the $\nu_e$ and $\nu_{\mu}$ cross sections in about 3 years of data taking at the CERN-SPS using ProtoDUNE as the far detector. Thanks to the replacement of the horn with a static focusing system (2 s proton extraction), the pile-up is reduced by two orders of magnitude, and positrons from kaon decays, plus muons from pion and kaon decays, can be monitored with a signal/background ratio >2.
A full Geant4 simulation of the facility is employed to assess the final systematics budget on the neutrino fluxes with an extended likelihood fit of a model where the hadro-production, beamline geometry and detector-related uncertainties are parametrized by nuisance parameters. In parallel the collaboration is building a section of the decay tunnel instrumentation ("demonstrator", 1.65m in length, 7 ton mass) that will be exposed to the T9 particle beam at CERN-PS in autumn 2022, for a final validation of the detector performance.
The ENUBET design is such that the same sensitivity can be achieved with the proton accelerators available at FNAL, using ICARUS as the neutrino detector. The technology of a monitored neutrino beam has been proven to be feasible and cost-effective (the instrumentation contributes about 10% of the cost of a conventional neutrino beam), and its complexity does not significantly exceed that of standard short-baseline beams. ENUBET will thus play an important role in the systematics-reduction programme of future long-baseline experiments, enhancing the physics reach of DUNE and Hyper-Kamiokande. In our contribution, we summarize the ENUBET design, its physics performance and the opportunities for its implementation on a timescale comparable with the next long-baseline neutrino experiments.
The DsTau experiment at the CERN SPS has been proposed to measure the inclusive differential cross-section of $D_s$ production, with its consecutive decay to a tau lepton, in p-A interactions. A precise measurement of the tau-neutrino cross section would enable a search for new physics effects, such as testing lepton universality (LU) of the Standard Model in neutrino interactions. The detector is based on nuclear emulsion, providing sub-micron spatial resolution for the detection of short decay lengths and small "kink" angles. It is therefore very well suited to searching for the peculiar "double-kink" decay topology of $D_s \to \tau \to X$. In 2021, the first physics run of the experiment was performed successfully. The collected data correspond to 30% of the targeted total statistics. In this presentation, the status of data taking and analysis will be presented.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment for oscillation physics and proton decay studies. The primary physics goals of DUNE are to perform neutrino oscillation physics studies, search for proton decay, detect supernova burst neutrinos, make solar neutrino measurements and carry out BSM searches. The liquid-argon prototype detector at CERN (ProtoDUNE), a 700-ton liquid argon time projection chamber (LArTPC) that has operated for over 2 years, is a test-bed for DUNE's far detectors, informing the construction and operation of the first two, and possibly subsequent, 17-kt DUNE far-detector LArTPC modules. Here we introduce the DUNE and ProtoDUNE experiments and their physics goals, and discuss recent progress and results.
DUNE will be a next-generation experiment aiming to provide precision measurements of the neutrino oscillation parameters. It will detect neutrinos generated in the LBNF beamline at Fermilab, using a Near Detector (ND) situated near the beam target where the neutrinos originate and a Far Detector (FD) located 1300 km away in South Dakota. A comparison of the spectra of neutrinos measured at the FD and the ND will allow for the extraction of oscillation probabilities from which the oscillation parameters can be inferred. The specific role of the ND will be to serve as the experiment’s control: it will establish the no oscillation null hypothesis, measure and monitor the beam, constrain systematic uncertainties, and provide essential measurements of the neutrino interactions to improve models. The ND complex will include three primary detector components: a liquid argon TPC called ND-LAr, a high-pressure gas TPC called ND-GAr and an on-axis beam monitor called SAND. The three detectors will serve important individual and overlapping functions, with ND-LAr and ND-GAr also able to move transverse to the beam’s axis via the DUNE-PRISM program. The overall mission of the ND, as well as the three sub-detectors’ unique capabilities and physics programs will be discussed during this talk, including the Beyond Standard Model physics searches that can be undertaken with the detectors at the near site.
The CMS Collaboration is preparing to replace its endcap calorimeters for the HL-LHC era with a high-granularity calorimeter (HGCAL). The HGCAL will have fine segmentation in both the transverse and longitudinal directions, and will be the first such calorimeter specifically optimized for particle-flow reconstruction to operate at a colliding-beam experiment. The proposed design uses silicon sensors as active material in the regions of highest radiation and plastic scintillator tiles equipped with on-tile silicon photomultipliers (SiPMs), in the less-challenging regions. The unprecedented transverse and longitudinal segmentation facilitates particle identification, particle-flow reconstruction and pileup rejection. We will overview some of the novel reconstruction methods being explored. As part of the ongoing development and testing phase of the HGCAL, prototypes of both the silicon and scintillator-based calorimeter sections have been tested in 2018 in beams at CERN. We report on the performance of the prototype detectors in terms of stability of noise and pedestals, MIP calibration, longitudinal/lateral shower shapes, precision timing, as well as energy linearity and resolution for electrons and pions. We compare the measurements with a detailed GEANT4 simulation. We also report on beam tests of the scintillator-based section at DESY in 2020 and 2021.
A new era of hadron collisions will start around 2028 with the High-Luminosity LHC, which will allow the collection of ten times more data than what has been collected so far at the LHC. This is made possible by a higher instantaneous luminosity and a higher number of collisions per bunch crossing.
To meet the new trigger and data-acquisition requirements and withstand the high radiation doses expected at the High-Luminosity LHC, the ATLAS Liquid Argon Calorimeter readout electronics will be upgraded. The triangular calorimeter signals are amplified and shaped by analogue electronics over a dynamic range of 16 bits, with low noise and excellent linearity. Developments of low-power preamplifiers and shapers to meet these requirements are ongoing in 130 nm CMOS technology. In order to digitize the analogue signals on two gains after shaping, a radiation-hard, low-power 40 MHz 14-bit ADC is being developed using a pipeline+SAR architecture in 65 nm CMOS. The characterization of the prototypes of these on-detector components is promising and indicates that they will likely fulfill all the requirements.
The signals will be sent at 40 MHz to the off-detector electronics, where FPGAs connected through high-speed links will perform energy and time reconstruction through the application of corrections and digital filtering. Reduced data are then sent with low latency to the first-level trigger system, while the full data are buffered until the reception of the trigger decision signal. If an event is triggered, the full data are sent to the ATLAS readout system. The data-processing, control, and timing functions will be realized with dedicated boards using the ATCA technology.
The results of tests of prototypes of the on-detector components will be presented. The design of the off-detector boards along with the performance of the first prototypes will be discussed. In addition, the architecture of the firmware and processing algorithms will be shown.
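As a schematic of the digital filtering step (our sketch, not the exact ATLAS implementation): energy and time are typically obtained with optimal-filtering weights applied to the pedestal-subtracted samples $s_i$,
$A = \sum_i a_i (s_i - p), \qquad A\tau = \sum_i b_i (s_i - p),$
with the coefficients $a_i$, $b_i$ derived from the known pulse shape and noise autocorrelation so as to minimize the variance of the amplitude $A$ and time $\tau$ estimates.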
The High Luminosity upgrade of the LHC (HL-LHC) at CERN will provide unprecedented instantaneous and integrated luminosities of around $5 \times 10^{34}$ cm$^{-2}$s$^{-1}$ and 3000 fb$^{-1}$, respectively. An average of 140 to 200 collisions per bunch-crossing (pileup) is expected. In the barrel region of the Compact Muon Solenoid (CMS) electromagnetic calorimeter (ECAL), the lead tungstate crystals and avalanche photodiodes (APDs) will continue to perform well, while the entire readout and trigger electronics will be replaced. The noise increase in the APDs, due to radiation-induced dark current, will be mitigated by reducing the ECAL operating temperature. The trigger decision will be moved off-detector and performed by powerful and flexible FPGA processors.
The upgraded ECAL will greatly improve the time resolution for photons and electrons with energies above 10 GeV. Together with the introduction of a new timing detector designed to perform measurements with a resolution of a few tens of picoseconds for minimum ionizing particles, the CMS detector will be able to precisely reconstruct the primary interaction vertex under the described pileup conditions.
We present the status of the ECAL barrel upgrade, including time resolution results from beam tests conducted during 2018 and 2021 at the CERN SPS.
The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment. It is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The TileCal response and its readout electronics are monitored to better than 1% using radioactive-source, laser and charge-injection systems.
Both the on- and off-detector TileCal electronics will undergo significant upgrades in preparation for the high luminosity phase of the LHC (HL-LHC) expected to begin in 2029 so that the system can cope with the HL-LHC increased radiation levels and out-of-time pileup and can meet the requirements of a 1 MHz trigger.
PMT signals from every TileCal cell will be digitized and sent directly to the back-end electronics, where the signals are reconstructed, stored, and sent to the first level of trigger at a rate of 40 MHz. This improved readout architecture allows more complex trigger algorithms to be developed.
The TileCal system design for the HL-LHC results from a long R&D program cross-validated by test beam studies and a demonstrator module. This module has reverse compatibility with the existing system and was inserted in ATLAS in August 2019 to test current detector conditions. The new design was tested with a beam of particles in 2021 at CERN SPS.
The main features of the TileCal upgrade program and results obtained from the Demonstrator tests and test beam campaigns will be discussed.
Within the upgrade program of the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) for the HL-LHC data taking, the installation of a new timing layer to measure the time of minimum ionizing particles (MIPs) with a time resolution of ~30-40 ps is planned. The time information of the tracks from this new MIP Timing Detector (MTD) will improve the rejection of spurious tracks and vertices arising from the expected harsh pile-up conditions from machine operation. At the same time this detector will provide particle identification capabilities based on the time-of-flight, and will bring unique physics opportunities for interesting signatures such as those including long-lived particles. An overview of these possibilities is given, using the state of the art of the simulation and reconstruction of the MTD detector.
The LHC luminosity will significantly increase in the coming years, and many of the current detectors in different subsystems need to be replaced or upgraded. The new ones should be capable not only of coping with the high particle rate, but also of providing improved time information to reduce the data ambiguity due to the expected high pileup. The CMS collaboration has shown that new improved RPCs, using a smaller gas gap (1.4 mm) and low-resistivity High Pressure Laminate, can stand rates up to 2 kHz/cm$^2$. They are equipped with new electronics sensitive to low signal charges. This electronics was developed to read out the RPC detectors from both ends of a strip and, using the timing information, to identify the position along it. The excellent relative time resolution of ~200 ps leads to a spatial resolution of a few cm. The absolute time measurement, with a resolution of around 500 ps on the RPC signal, will also reduce the data ambiguity due to the expected high pileup at the Level-1 trigger. Four demonstrator chambers have just been installed in the CMS cavern. These chambers were qualified in test beams at the Gamma Irradiation Facility (GIF), located on one of the SPS beam lines at CERN. This talk will present the results of the tests done at GIF, as well as brand-new results from commissioning at CMS.
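As a rough consistency check of the quoted numbers (our estimate, assuming a typical signal propagation speed along the strip of $v \approx 20$ cm/ns): the position reconstructed from the two strip ends is $x = v(t_1 - t_2)/2$, so a ~200 ps relative time resolution translates into $\sigma_x \approx 20\ \mathrm{cm/ns} \times 0.2\ \mathrm{ns}/2 = 2$ cm, consistent with the few-cm spatial resolution quoted above.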
The study of CP violation patterns across the phase space of multibody charmless B decays is of great interest as it brings information on the dynamics of residual strong interactions between quarks in the initial and final states of the decay. Understanding this dynamics is fundamental to distinguish between QCD effects and potential contributions from physics beyond the standard model. In this work we present the most recent measurements of CP violation in multibody charmless B decays at LHCb.
Measurements of decay-time-dependent CP violation are chief goals of the Belle II physics program. Comparison between penguin-dominated $b\to q\bar{q}s$ and tree-dominated $b\to c\bar{c}s$ results allows for stringent tests of CKM unitarity that are sensitive to non-SM physics. This talk presents first Belle II results on the mixing rate and lifetime of $B^0$ mesons, an essential validation of time-dependent measurements that requires detailed control of complex high-level capabilities such as flavor tagging and decay-time resolution modeling. Recent results on $B^0\to K^0_S\pi^0\gamma$ and $B^0\to K_S^0K_S^0K_S^0$ are also reported.
Outstanding vertexing performance and a low-background environment are key enablers of a systematic Belle II program targeted at measurements of charm-hadron lifetimes. Recent results from measurements of the $D^0$ meson, $D^+$ meson and $\Lambda_c$ baryon lifetimes are presented. The results are the most precise to date.
BESIII has collected 2.93 and 6.32 fb$^{-1}$ of $e^+e^-$ collision data samples at 3.773 and 4.178-4.226 GeV, respectively. We will report precision measurements of $f_{D_s}$, $|V_{cs}|$ and a test of lepton flavor universality, by studying the leptonic decays $D_s^+ \to \ell^+\nu$ with $\tau^+ \to \rho^+\nu$, $\pi^+\nu$, and $e^+\nu\bar{\nu}$. We will also report the observation of the semileptonic decay $D^0 \to \rho^-\mu^+\nu$ and the corresponding lepton flavor universality test, as well as studies of other semileptonic decays, such as $D_s^+ \to \pi^0\pi^0e^+\nu$ and $K_SK_Se^+\nu$.
BESIII has also collected 4.4 fb$^{-1}$ of $e^+e^-$ collision data between 4.6 and 4.7 GeV. This unique dataset offers an ideal opportunity to determine absolute branching fractions of $\Lambda_c^+$ decays. We will report the first observation of $\Lambda_c^+ \to n\pi^+$. Meanwhile, we will report prospects for studies of semileptonic and other hadronic decays of $\Lambda_c^+$ in the near future.
The Cabibbo-Kobayashi-Maskawa (CKM) mechanism predicts that a single parameter must be responsible for CP-violating phenomena in the different quark sectors of the Standard Model (SM). Despite this minimal picture, challenged by non-SM physics, the CKM mechanism has so far been verified in the bottom and strange sectors, but lacks tests in the complementary charm sector. To this end, theoretical progress is urgently needed in order to provide an SM estimate of the recent LHCb measurement of direct CP violation in charm-meson two-body decays, whose precision will be greatly improved by new data expected over this decade from LHCb and Belle II. Re-scattering effects are particularly relevant for a meaningful theoretical account of the amplitudes involved in this observable, as signaled by the presence of large strong phases. I discuss the computation of the latter effects based on dispersion relations, and perform a global fit combination, with the CKMfitter statistical package, of the available data on branching ratios and CP asymmetries in order to assess the size of CP-violating contributions in the SM to charm-meson decays into $\pi \pi$ and $K K$.
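In standard notation, the observable in question is the direct CP asymmetry
$A_{CP}(f) = \frac{\Gamma(D^0 \to f) - \Gamma(\bar{D}^0 \to f)}{\Gamma(D^0 \to f) + \Gamma(\bar{D}^0 \to f)}, \qquad f = \pi^+\pi^-,\ K^+K^-,$
with the LHCb observation referring to the difference $\Delta A_{CP} = A_{CP}(K^+K^-) - A_{CP}(\pi^+\pi^-)$.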
In lattice gauge theories, to calculate PDFs from first principles it is convenient to consider the Ioffe-time distribution defined through gauge-invariant bi-local operators with spacelike separation. Lattice calculations provide values for a limited range of the distance separating the bi-local operators. In order to perform the Fourier transform and obtain the pseudo- and quasi-PDFs, it is then necessary to extrapolate the large-distance behavior. I will discuss the formalism one may use to study the behavior of the Ioffe-time distribution at large distances, and show that the pseudo-PDF and quasi-PDF are very different in this regime. Using light-ray operators, I will also show that the higher-twist corrections to the quasi-PDF come in not as inverse powers of $P$ but as inverse powers of $x_B P$.
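Schematically, and up to conventions (following Radyushkin's notation): given the Ioffe-time distribution $\mathcal{M}(\nu, z^2)$ with $\nu = p\cdot z$, the pseudo-PDF is its Fourier transform in $\nu$ at fixed separation,
$\mathcal{P}(x, z^2) = \frac{1}{2\pi}\int d\nu\, e^{-ix\nu}\, \mathcal{M}(\nu, z^2),$
while the quasi-PDF transforms the same matrix element in $z_3$ at fixed hadron momentum $P$, $Q(y, P) = \frac{P}{2\pi}\int dz_3\, e^{iyPz_3}\, \mathcal{M}(Pz_3, z_3^2)$; the two therefore sample the large-distance behavior of $\mathcal{M}$ in inequivalent ways, which is why the extrapolation matters.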
The measurement of neutral mesons in pp collisions allows a test of perturbative QCD calculations and represents an important baseline for heavy-ion studies. Neutral mesons are reconstructed in ALICE with multiple methods in a very wide range of transverse momenta and thus impose restrictions on the parton distribution functions and fragmentation functions over a wide kinematic region. Moreover, observations in high-multiplicity pp and p-Pb collisions show surprising similarities with those in heavy-ion collisions. Measured identified particle spectra in hard pp collisions give further insight into the hadron chemistry in such high charged-particle multiplicity events.
In this talk, detailed measurements of the neutral pion, eta and omega mesons will be presented in several multiplicity classes in pp collisions at $\sqrt{s}$ = 13 TeV. The different analysis techniques, using two different calorimeters and the reconstruction of conversion photons via their $e^{+}e^{-}$ pairs, will be briefly explained. In particular, the inclusion of the merged-photon-cluster analysis in the calorimeter allows the extension of the neutral pion measurement up to an unprecedentedly high $p_{\rm T}$ of 200 GeV/$c$ in pp and p-Pb collisions for identified hadron spectra. Results will be compared to pQCD calculations.
In this contribution, we present the latest measurements of $\mathrm{D}^0$, $\mathrm{D}^+$ and $\mathrm{D_s}^+$ mesons together with the final measurements of $\Lambda_\mathrm{c}^+$, $\Xi_\mathrm{c}^{0,+}$, $\Sigma_\mathrm{c}^{0,++}$, and the first measurement of $\Omega_\mathrm{c}^0$ baryons performed with the ALICE detector at midrapidity in pp collisions at $\sqrt{s}=5.02$ and $\sqrt{s}=13$ TeV. Recent measurements of charm-baryon production at midrapidity in small systems show a baryon-to-meson ratio significantly higher than that in $\mathrm{e^+e^-}$ collisions, suggesting that the fragmentation of charm is not universal across different collision systems. Thus, measurements of charm-baryon production are crucial to study the charm quark hadronization in a partonic rich environment like the one produced in pp collisions at the LHC energies.
Furthermore, the recent $\Lambda_\mathrm{c}^+/\mathrm{D}^0$ yield ratio, measured down to $p_\mathrm{T}=0$, and the new $\Xi_\mathrm{c}^{0,+}/\mathrm{D}^0$ yield ratio in p-Pb collisions will be discussed. The measurement of charm baryons in p-nucleus collisions provides important information about possible additional modification of hadronization mechanisms as well as on Cold Nuclear Matter effects and on the possible presence of collective effects that could modify the production of heavy-flavour hadrons.
Finally, the first measurements of charm fragmentation fractions and charm production cross-section at midrapidity per unit of rapidity will be shown for both pp and p-Pb collisions using all measured single charm ground state hadrons.
I will discuss nonperturbative flavor correlations between pairs of leading and next-to-leading charged hadrons within jets at the Electron-Ion Collider (EIC). We introduce a charge correlation ratio observable $r_c$ that distinguishes same- and opposite-sign charged pairs. Using Monte Carlo simulations with different event generators, $r_c$ is examined as a function of various kinematic variables for different combinations of hadron species, and the feasibility of such measurements at the EIC is demonstrated. I will also discuss the correlation between leading hadrons and leading subjets which encodes the transition between perturbative and nonperturbative regimes. The precision hadronization study we propose will provide new tests of hadronization models and hopefully lead to improved quantitative, and perhaps eventually analytic, understanding of nonperturbative QCD dynamics.
The observation of triple-$J/\psi$ production in a single pp collision is reported. The results are based on the data collected by the CMS experiment in 13 TeV pp collisions. The measured effective double parton scattering cross section is compared to previous measurements.
The LHCb experiment at the LHC is well suited for studying how hadrons are formed from scattered quarks and gluons in energetic proton-proton collisions. The hadronization and fragmentation processes can be studied via measurements such as those involving jet substructure. Equipped with a forward spectrometer, the LHCb experiment achieves excellent transverse momentum resolution for charged tracks, which, along with excellent particle identification capabilities, offers a unique opportunity to measure hadronization variables with great precision. This talk will present measurements of identified hadrons within light-quark-initiated jets as well as other ongoing QCD measurements at LHCb.
In many BSM theories the top quark is hypothesized to have enhanced non-standard or extremely rare interactions with other SM particles. This presentation covers the latest CMS results in this regard, from both direct searches and precise measurements, including flavor-changing neutral currents (FCNC) and tests of discrete symmetries with the top quark.
The large integrated luminosity collected by the ATLAS detector at the highest proton-proton collision energy provided by the LHC allows probing the presence of new physics that could enhance the rate of rare SM processes. The LHC can therefore gain considerable sensitivity to Flavour Changing Neutral Current (FCNC) interactions of the top quark. In the SM, FCNC processes involving the decay of the top quark to another up-type quark and a neutral boson are so suppressed that any measurable branching ratio for such a decay is an indication of new physics. The ATLAS experiment has performed searches for FCNC couplings of the top quark with a photon, gluon, Z boson or Higgs boson. In this contribution, the most recent results are presented, which include the complete dataset of 140 fb$^{-1}$ at 13 TeV collected at the LHC during Run 2 (2015-2018). The large dataset, together with improvements in the analysis, yields a strong improvement in the expected sensitivity compared to previous experiments and partial analyses of the LHC data.
KKMChh adapts the CEEX (Coherent Exclusive Exponentiation) of the Monte Carlo Program KKMC for Z boson production and decay to hadron scattering. Amplitude-level soft photon exponentiation of initial and final state radiation, together with initial-final interference, is matched to a perturbative calculation to second order next-to-leading logarithm, and electroweak corrections to the hard process are included via DIZET. The first release of KKMChh included complete initial state radiation calculated with current quark masses. This version assumes idealized pure-QCD PDFs with negligible QED contamination. Traditional PDFs neglect QED evolution but are not necessarily free of QED influence in the data. QED-corrected PDFs provide a firmer starting point for precision QED work. We describe a new procedure for matching KKMChh's initial state radiation to a QED-corrected PDF, and compare this to earlier approaches. Some phenomenological applications are described.
The weak mixing angle is a probe of the vector-axial coupling structure of electroweak interactions. It has been measured precisely at the Z-pole by experiments at the LEP and SLC colliders, but its energy dependence above $m_Z$ remains unconstrained. In this contribution we propose to exploit measurements of neutral-current Drell-Yan production at the Large Hadron Collider at large dilepton invariant masses to determine the scale dependence of the weak mixing angle in the MSbar renormalisation scheme, $\sin^2\theta_W(\mu)$. Such a measurement can be used to confirm the Standard Model prediction for the MSbar running at TeV scales, and to set model-independent constraints on new states with electroweak quantum numbers. To this end, we present an implementation of $\sin^2\theta_W(\mu)$ in a Monte Carlo event generator in Powheg-Box, which we use to explore the potential of future dedicated analyses with the LHC Run 3 and High-Luminosity datasets. In particular, we study the impact of higher-order electroweak corrections and of uncertainties due to the knowledge of parton distribution functions.
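For context: in neutral-current Drell-Yan the weak mixing angle enters through the fermion couplings to the $Z$,
$v_f = T_3^f - 2 Q_f \sin^2\theta_W, \qquad a_f = T_3^f,$
so the forward-backward asymmetry of the dilepton system at large invariant mass $m_{\ell\ell}$ is directly sensitive to $\sin^2\theta_W(\mu)$ evaluated at scales $\mu \sim m_{\ell\ell}$.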
In this talk, we present the analytic evaluation of the virtual corrections to di-muon production in electron-positron collisions in QED, up to second order in the fine-structure constant, retaining the full dependence on the muon mass and treating the electron as massless. We discuss the computational details, and the high level of automation they required, from the diagram generation, to the amplitude decomposition, to the evaluation of the master integrals, along with the UV renormalization and the IR singularity structure. We also present preliminary results on: i) a crossing-related process, the two-loop amplitude for muon-electron scattering in QED, relevant for the MUonE experiment; ii) the extension to the process $q\bar{q} \to t\bar{t}$ in QCD.
For both the FCC-ee and the ILC, to properly exploit the respective precision physics programs, the theoretical precision tag on the luminosity will need to be improved relative to the analogous results of 0.054% (0.061%) at LEP at $M_Z$, where the former (latter) LEP result includes (does not include) the pairs correction. At the FCC-ee at $M_Z$, for example, one needs an improvement to 0.01%. We present an overview of the roads one may take to reach the required 0.01% precision tag at the FCC-ee, and of what the corresponding precision expectations would be for the FCC-ee$_{350}$, ILC$_{500}$, ILC$_{1000}$, and CLIC$_{3000}$ setups.
The international FCC study group published in 2019 a Conceptual Design Report for an electron-positron collider with a centre-of-mass energy from 90 to 365 GeV, a circumference of 98 km and beam currents of up to 1.4 A per beam. The high beam current of this collider creates challenging requirements for the injection chain, and all aspects of the linac need to be carefully reconsidered and revisited, including the injection time structure. The beam dynamics studies for the full linac, damping ring and transfer lines are major activities of the injector complex design. A key point is that any increase of the positron production and capture efficiency reduces the cost and complexity of the driver linac and the heat and radiation load of the converter system, and increases the operational margin. The PSI Positron Production (P$^3$) project, currently in development at PSI, is the proposed proof-of-principle experiment for a potential FCC-ee positron source. Capture and transport of the secondary positron beam from the production target to the damping ring are a key challenge for FCC-ee, due to the large emittance and energy spread. The use of novel matching and focusing methods has been studied, such as high-temperature superconducting (HTS) solenoids, for which recent simulations show a considerably higher positron yield with respect to the state of the art. The experiment is to be hosted at SwissFEL at PSI, where a 6 GeV electron beam and a tungsten target can be used to generate the positron distribution. In this contribution we will give an overview of the status of the injector complex study and introduce the P$^3$ project, both developed in the context of the CHART collaboration.
In this talk the current status of and plans for the LHeC accelerator concept are presented, in view of the new HEP strategy update in about five years' time. We review the ERL and the IR, including the possibility of a joint $eh/hh$ interaction region. The talk also covers the FCC-he and refers to a separate presentation of the ERL demonstration facility PERLE. It is based on the comprehensive Conceptual Design Report update [1] and recent work [2].
[1] P. Agostini et al. (LHeC Study Group), The Large Hadron-Electron Collider at the HL-LHC, J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
[2] K. D. J. Andre et al., An experiment for electron-hadron scattering at the LHC, Eur. Phys. J. C 82 (2022) 1, 40, e-Print: 2201.02436 [hep-ex].
The realisation of the LHeC and the FCC-he at CERN requires the development of the energy-recovery technique in multipass mode and for large currents of $\mathcal{O}(10)$ mA in the SRF cavities. For this purpose, a technology development facility, PERLE, is under design, to be built at IJCLab Orsay, which has the key LHeC ERL parameters in terms of configuration, source, current, frequency and technical solutions (cryomodule, stacked magnets). In this talk we review the design and comment on the status of PERLE.
Electron-hadron colliders are the ultimate tool for high-precision quantum chromodynamics studies and for probing the internal structure of hadrons. The Hadron Electron Ring Accelerator HERA (DESY, Hamburg, Germany) was the first and up to now only electron-hadron collider ever operated (1991-2007). In 2019 the U.S. Department of Energy initiated the Electron-Ion Collider (EIC) project, the next electron-hadron collider, currently under construction at BNL (Upton, NY) in partnership with JLab (Newport News, VA). The EIC builds on the infrastructure of the current Relativistic Heavy Ion Collider (RHIC) complex at BNL. The EIC will collide 5 to 18 GeV polarized electrons with 41 to 275 GeV polarized protons, polarized light ions with energies up to 166 GeV/u, and unpolarized heavy ions up to 110 GeV/u. The EIC is a high-luminosity collider designed to provide $10^{34}$ cm$^{-2}$s$^{-1}$ in 105 GeV center-of-mass-energy collisions between electrons and protons. The project scope includes one interaction region with its detector, but two interaction regions are feasible. This talk will give an overview of the EIC design, its main technological challenges and the timeline.
The Future Circular Collider (FCC) study was launched as a worldwide international collaboration hosted by CERN with the goal of pushing the field to the next energy frontier beyond the LHC. The mass of particles that could be directly produced is increased by almost an order of magnitude, and the subatomic distances to be studied are decreased by the same proportion. FCC covers two accelerators, an energy-frontier hadron collider (FCC-hh) and a high-luminosity, high-energy lepton collider (FCC-ee), sharing the same 100 km tunnel infrastructure. This talk focuses on the FCC-hh, summarising its key features such as the accelerator design, performance reach, and underlying technologies. The proposed vision is based on the conceptual design report, which represents a milestone of this study, but also covers more recent design activities.
As part of the Physics Beyond Colliders study group, the CERN Gamma Factory is an innovative proposal to exploit the potential of CERN to accelerate partially stripped ions to ultra-relativistic energies at high intensity, such that their low-lying atomic levels can be excited by state-of-the-art optical systems. This may enable a very broad range of new applications from atomic physics to particle physics, including their applied counterparts, thanks to the production of high-energy photon beams (up to 400 MeV) with unprecedented intensity (up to $10^{18}$ photons per second). A large variety of theoretical developments have reinforced the interest of the community in this project over the past two years, as shown in the special issue of Annalen der Physik (https://onlinelibrary.wiley.com/toc/15213889/2022/534/3). Recent progress towards the realization of a proof-of-principle experiment at the CERN SPS will be shown.
The observations of the Advanced LIGO and Advanced Virgo gravitational-wave detectors have so far led to the confident identification of 90 signals from the mergers of compact binary systems composed of black holes and neutron stars. These events have offered a new testing ground for General Relativity and better insights into the nuclear equation of state for neutron stars, as well as the discovery of a new population of black holes. For each detection, a thorough event-validation procedure has been carried out in order to carefully assess the impact of potential data-quality issues, such as instrumental artefacts, on the analysis results. This has increased the confidence in the astrophysical origin of the observed signals, as well as in the accuracy of the estimated source parameters. In this presentation, we will describe the most relevant steps of the validation process, in the context of the last observing run (O3) of the Advanced gravitational-wave detectors. Moreover, these detectors are currently undergoing a phase of upgrades in preparation for the next joint observing run (O4), scheduled to begin in December 2022. The predicted improvement in sensitivity is expected to produce a higher rate of candidate events, which will constitute a new challenge for the validation procedures.
Sources of geophysical noise (such as wind, sea waves and earthquakes) or of anthropogenic noise (nearby activities, road traffic, etc.) impact ground-based gravitational-wave (GW) interferometric detectors, causing transient sensitivity worsening and gaps in data taking.
During the year-long third observing run (O3: from April 01, 2019 to March 27, 2020), the Virgo Collaboration collected a large dataset, which has been used to study the response of the Advanced Virgo detector to a variety of environmental conditions. We correlated environmental parameters with global detector performance indicators, such as the observation range (the distance up to which a given GW source could be detected), the duty cycle and control losses (losses of the global working point, the instrument configuration needed to observe the cosmos). Where possible, we identified weaknesses in the detector, which will be used to develop strategies to improve Virgo's robustness against external disturbances for the next data-taking period, O4, currently planned to start at the end of 2022. The lessons learned could also provide useful insights for the design of the next generation of ground-based interferometers.
The associated article has been posted to arXiv recently (https://arxiv.org/abs/2203.04014) and submitted to a journal.
The characteristics of the cosmic microwave background provide circumstantial evidence that the hot radiation-dominated epoch in the early universe was preceded by a period of inflationary expansion. Here, it will be shown how a measurement of the stochastic gravitational wave background can reveal the cosmic history and the physical conditions during inflation, subsequent pre- and reheating, and the beginning of the hot big bang era. This will be exemplified with a particularly well-motivated and predictive minimal extension of the Standard Model which is known to provide a complete model for particle physics -- up to the Planck scale, and for cosmology -- back to inflation.
In 2006, A. Cohen and S. Glashow first presented the idea of Very Special Relativity (VSR), in which space-time invariance is restricted to a subgroup of the full Lorentz group, usually the subgroup $SIM(2)$. The advantage of this theory is that, while it does not affect the classical predictions of Special Relativity, it can explain the existence of neutrino masses without the addition of new exotic particles or tiny twisted extra space dimensions, which have so far not been observed in experiments.
The addition of either $P$, $CP$, or $T$ invariance to $SIM(2)$ symmetry enlarges the entire symmetry group again to the whole Lorentz group. That implies the absence of VSR effects in theories where one of the above three discrete transformations is conserved.
Since the Sakharov conditions tell us that these discrete symmetries must be broken in cosmology, the effects of VSR in this framework become worthy of study. In our work, we constructed a $SIM(2)$-invariant version of linearized gravity, describing the dynamics of the space-time perturbation field $h_{\mu \nu}$. Such a theory may be used as a starting point for the study of VSR consequences for the propagation of gravitational waves in a Lorentz-breaking background.
In the end, our analysis corresponds to a massive-graviton model. That could be of great interest given the various applications currently being explored for massive gravity, from dark matter to cosmology, despite the strong bounds we already have on the graviton mass.
Until now, massive gravity models have usually been constructed to be Lorentz invariant. Nevertheless, as in the case of Electromagnetism and the Proca theory, there is no way of trivially preserving both Lorentz and gauge invariance when giving mass to the graviton.
Giving up on gauge invariance directly leads to the appearance of three additional degrees of freedom (D.o.F.) with respect to those of General Relativity (GR), which are responsible for different pathologies of these theories, such as the vDVZ discontinuity and ghost modes (i.e. the Boulware-Deser ghost). Many of these problems have already been solved with the Vainshtein mechanism and the fine-tuned dRGT action that avoids ghosts, making dRGT massive gravity a good candidate to solve the cosmological constant problem. Even so, dealing with cosmology brings up new problems and instabilities which have not yet been solved.
Giving up on Lorentz invariance, which is what we consider in our work by implementing VSR, is the other viable possibility for massive gravity. Experience with VSR electrodynamics and VSR massive neutrinos tells us that VSR extensions avoid the introduction of ghosts in the spectrum: in fact, as we will see, the gauge invariance of our formulation does not allow for additional D.o.F. beyond the usual two of the massless graviton, circumventing most of the problems cited above, such as the Boulware-Deser ghost. Nevertheless, these advantages come at the price of considering new non-local terms in the theory and of assuming a preferred space-time null direction, represented by the lightlike four-vector $n^\mu$.
Finally, through the geodesic deviation equation, we have compared some results for classical gravitational waves (GW) with the VSR ones: we find that the ratios between VSR effects and classical ones are proportional to $(m_g/E)^2$, $E$ being the energy of a graviton in the GW. For GW detectable by the LIGO and Virgo interferometers this ratio is at most $10^{-20}$. However, for GW in the lower frequency range of future detectors, like LISA, the ratio increases significantly, to about $10^{-10}$, which, combined with the anisotropic nature of VSR phenomena, may lead to observable effects.
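To make the orders of magnitude explicit (our estimate, taking for illustration a graviton mass near early LIGO bounds, $m_g \sim 10^{-22}$ eV): a graviton in a GW of frequency $f$ carries energy $E = 2\pi\hbar f \approx 4\times 10^{-13}\,\mathrm{eV}\,(f/100\ \mathrm{Hz})$, giving $(m_g/E)^2 \approx 6\times 10^{-20}$ at $f = 100$ Hz and $(m_g/E)^2 \approx 6\times 10^{-10}$ at $f = 10^{-3}$ Hz, in line with the ratios quoted above.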
Gravitational-wave (GW) cosmology provides a new way to measure the expansion history of the Universe, based on the fact that GWs are direct distance tracers. This property at the same time allows tests of gravity at cosmological scales, since in the presence of modifications of General Relativity the distance inferred from GWs is modified - a phenomenon known as ''modified GW propagation''. On the other hand, obtaining the redshift (whose knowledge is essential to test cosmology) is the challenge of GW cosmology. In the absence of a direct electromagnetic counterpart to the GW event, the source goes under the name of ''dark siren'' and statistical techniques are used.
In this talk, I will present measurements of the Hubble parameter and bounds on modified GW propagation, obtained from the latest Gravitational Wave Transient Catalog 3 with new, independent, open-source codes implementing the statistical correlation between GW events and galaxy catalogues and information from the mass distribution of binary black holes.
I will discuss methodological aspects, relevant sources of systematics, the interplay with population studies, current challenges and possible ways forward.
I will finally present perspectives for the use of statistical dark siren techniques with third generation (3G) ground-based GW detectors, in particular the Einstein Telescope observatory.
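Schematically (our notation), the dark-siren method weighs each event's GW distance likelihood against the redshift support of the galaxy catalogue: the posterior on $H_0$ (or on a modified-propagation parameter) is built from
$p(D_i \mid H_0) \propto \frac{1}{\beta(H_0)} \int dz\; p_{\rm cat}(z)\; p(D_i \mid d_L(z, H_0)),$
where $p_{\rm cat}(z)$ encodes the catalogue and population prior and $\beta(H_0)$ corrects for selection effects; in modified-gravity analyses the electromagnetic $d_L$ is replaced by the GW luminosity distance $d_L^{\rm GW}$.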
In this talk, I will evaluate the potential for extremely high-precision astrometry of a small number of non-magnetic, photometrically stable hot white dwarfs (WD) located at $\sim$ kpc distances to access interesting sources in the gravitational-wave (GW) frequency band from 10 nHz to 1 $\mu$Hz. Previous astrometric studies have focused on the potential for less precise, large-scale astrometric surveys; the work I will discuss provides an alternative optimization approach to this problem. I will show that photometric jitter from starspots on WD of this type is bounded to be small enough to permit such an approach, and discuss possible noise arising from stellar reflex motion induced by orbiting objects. Interesting sources in this band are expected at characteristic strains around $h_c \sim 10^{-17} \times \left( \mu\text{Hz} / f_{\text{GW}} \right).$ I will outline the mission parameters needed to obtain the requisite angular sensitivity for a small population of such WD, $\Delta \theta \sim h_c$ after integrating for $T\sim 1/f_{\text{GW}}$, and show that a space-based stellar interferometer with few-meter-scale collecting dishes and baselines of $\mathcal{O}(100 \text{km})$ is sufficient to achieve the target strain over at least half the band of interest. This collector size is broadly in line with the collectors proposed for some formation-flown, space-based astrometer or optical synthetic-aperture imaging array concepts; the proposed baseline is however somewhat larger than the km-scale baselines discussed for those concepts. The ability to probe GWs with such a mission bolsters its science case.
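For scale (our arithmetic, using the numbers above): at the bottom of the band, $f_{\rm GW} = 10$ nHz gives $h_c \sim 10^{-17} \times (10^{-6}/10^{-8}) = 10^{-15}$ and an integration time $T \sim 1/f_{\rm GW} \approx 3$ yr, while at $f_{\rm GW} = 1\ \mu$Hz the target is $\Delta\theta \sim h_c \sim 10^{-17}$ rad accumulated over $T \approx 12$ days.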
Leptoquarks are ubiquitous in several extensions of the Standard Model and seem to be able to accommodate the universality-violation-driven $B$-meson-decay anomalies and the $(g-2)_\mu$ discrepancy interpreted as deviations from the Standard Model predictions. In addition, the search for lepton-flavour violation in the charged sector is, at present, a major research program that could also be facilitated by the dynamics generated by leptoquarks. In this work, we considered a rather wide framework of both scalar and vector leptoquarks as the generators of lepton-flavour violation in processes involving the tau lepton. We singled out the tau couplings to leptoquarks, thus breaking universality in the lepton sector, and we integrated out the leptoquarks at tree level, generating the corresponding dimension-6 operators of the Standard Model Effective Field Theory. In previous work (T. Husek, K. Monsálvez-Pozo and J. Portolés, JHEP 01 (2021) 059, DOI: 10.1007/JHEP01(2021)059), we obtained model-independent bounds on the Wilson coefficients of those operators contributing to lepton-flavour-violating hadronic tau decays and $\ell$--$\tau$ conversion in nuclei, with $\ell=e,\mu$. Here we use those results to translate the bounds into bounds on the couplings of leptoquarks to the Standard Model fermions.
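The translation step described above amounts to inverting the tree-level matching between a Wilson coefficient and the leptoquark parameters, schematically $C/\Lambda^2 \sim \lambda_1\lambda_2^*/M_{\rm LQ}^2$, so a bound on $C$ becomes a bound on a coupling product. A minimal sketch with placeholder numbers (not the bounds of the cited work):

```python
# Schematic translation of an EFT Wilson-coefficient bound into a bound on
# leptoquark couplings, assuming tree-level matching C/Lambda^2 = l1*l2 / M_LQ^2.
# The numerical inputs are placeholders, not the bounds from the cited paper.

def coupling_bound(c_over_lambda2: float, m_lq_tev: float) -> float:
    """Upper bound on |lambda_1 * lambda_2| for a given LQ mass (TeV)."""
    return c_over_lambda2 * m_lq_tev**2

c_bound = 1e-3          # assumed bound on |C| / Lambda^2 in TeV^-2
for m in (1.0, 2.0, 5.0):
    print(f"M_LQ = {m} TeV -> |l1*l2| < {coupling_bound(c_bound, m):.1e}")
# The bound on the coupling product weakens quadratically with the LQ mass.
```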
We study the impact of triple-leptoquark interactions on matter stability for two specific proton decay topologies that arise at the tree and one-loop level if and when they coexist. We demonstrate that the one-loop-level topology is much more relevant than the tree-level one when it comes to proton decay signatures, despite the usual loop-suppression factor. We subsequently present a detailed analysis of the triple-leptoquark interaction effects on proton stability within one representative scenario to support our claim, where the scenario in question simultaneously features a tree-level topology that yields the three-body proton decay $p \to e^+e^+e^-$ and a one-loop-level topology that induces the two-body proton decays $p \to \pi^0 e^+$ and $p \to \pi^+\bar{\nu}$. We also provide a comprehensive list of the leading-order proton decay channels for all non-trivial cubic and quartic contractions involving three scalar leptoquark multiplets that generate triple-leptoquark interactions of our interest, where in the latter case one of the scalar multiplets is the Standard Model Higgs doublet.
We examine new aspects of leptoquark (LQ) phenomenology using effective field theory (EFT). We construct a complete set of leading effective operators involving SU(2)-singlet scalar LQs and the Standard Model (SM) fields up to dimension six. We show that, while the renormalizable LQ-lepton-quark interaction Lagrangian can address the persistent hints for physics beyond the SM in $B$-decays and in the measured anomalous magnetic moment of the muon, the LQ higher-dimensional effective operators may lead to new interesting effects associated with lepton number violation. These include the generation of one-loop and two-loop sub-eV Majorana neutrino masses, mediation of neutrinoless double-$\beta$ decay and novel LQ collider signals. For the latter, we focus on a third-generation LQ ($\phi_3$) in a framework with an approximate $Z_3$ generation symmetry and show that one class of the dimension-five LQ operators may give rise to a striking asymmetric same-charge $\phi_3 \phi_3$ pair-production signal, which leads to low-background same-sign di-lepton signals at the LHC. For example, if the LQ mass is around 1 TeV and the new physics scale is $\Lambda \sim 5$ TeV, then we expect about 5000 positively charged $\tau^+ \tau^+$ events via $pp \to \phi_3 \phi_3 \to \tau^+ \tau^+ + 2 \cdot j_b$ ($j_b = b$-jet), about 500 negatively charged $\tau^- \tau^-$ events with a signature $pp \to \phi_3 \phi_3 \to \tau^- \tau^- + 4 \cdot j + 2 \cdot j_b$ ($j=$ light jet) and about 50 positively charged $\ell^+ \ell^+$ events via $pp \to \ell^+ \ell^+ + 2 \cdot j_b + MET$ ($\ell = e,\mu,\tau$), at the 13 TeV LHC with an integrated luminosity of 300 fb$^{-1}$. It is interesting to note that, in the LQ EFT framework, the expected same-sign lepton signals have a rate which is several times larger than the QCD LQ-mediated opposite-sign lepton signals, $gg, q \bar q \to \phi_3 \phi_3^\star \to \ell^+ \ell^- +X$.
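The quoted event counts follow from the usual yield arithmetic $N = \sigma \times \mathcal{L}$; inverting it shows the effective cross sections the numbers above imply. A quick check:

```python
# Back-of-the-envelope check of the quoted LQ EFT yields: N = sigma * L.
LUMI_FB = 300.0   # integrated luminosity in fb^-1, as quoted above

for channel, n_events in [("tau+ tau+ + 2 b-jets", 5000),
                          ("tau- tau- + 4j + 2 b-jets", 500),
                          ("l+ l+ + 2 b-jets + MET", 50)]:
    sigma_fb = n_events / LUMI_FB
    print(f"{channel}: N = {n_events} -> effective sigma ~ {sigma_fb:.2f} fb")
# -> roughly 16.7 fb, 1.7 fb and 0.17 fb respectively.
```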
Multi-lepton signals provide a relatively clean and rich testing ground for new physics (NP) at the LHC and, in particular, for searching for lepton flavor universality violation (LFUV) effects mediated by new heavy states of an underlying TeV-scale NP. The potential sensitivity of 3rd generation fermions (the top-quark in particular) to TeV-scale NP, along with the persistent anomalies in B-decays, the recently confirmed muon g-2 anomaly, as well as hints reported recently by ATLAS and CMS of unequal di-muon versus di-electron production, have led us to explore the effects of higher-dimensional $(qq)(\ell \ell)$ 4-Fermi operators involving 3rd generation quarks and muons/electrons on multi-leptons + jets production at the LHC. I will focus on the "tail effects" of both flavor-changing $(q_3 q_{1,2})(\ell \ell)$ and flavor-diagonal $(q_3 q_3)(\ell \ell)$ scalar, vector and tensor contact interactions, which are generated by tree-level exchanges of multi-TeV heavy states, and discuss the sensitivity of the LHC and a future HL-LHC to the scales of these 4-Fermi terms, $\Lambda(q_3 q \ell \ell)$, via these $pp \to$ multi-leptons + jets channels. In particular, I will show that by applying a sufficiently high invariant-mass selection on the di-leptons from the $qq\ell\ell$ contact interaction and additional specific jet selections designed to minimize the SM background, one can obtain a significantly better sensitivity than the current sub-TeV bounds on this type of NP.
The ‘4321’ gauge models are promising extensions of the SM that give rise to the $U_1$ vector leptoquark solution to the $B$-physics anomalies. Both the gauge and fermion sectors of these UV-constructions lead to a rich phenomenology currently accessible by the Large Hadron Collider. In this talk we describe some of the main LHC signatures and extract exclusion limits using Run-II data. In addition, we also discuss a 4321 extension with a dark sector leading to a Majorana dark matter candidate and a coloured partner producing new signatures at the LHC.
Experimental hints for lepton flavor universality violation in beauty-quark decay, both in neutral- and charged-current transitions, require an extension of the Standard Model for which scalar leptoquarks (LQs) are the prime candidates. Besides, these same LQs can resolve the long-standing tension in the muon $g-2$ and the recently reported deviation in the electron $g-2$. These tantalizing flavor anomalies have discrepancies in the range of $2.5\sigma-4.2\sigma$, indicating that the Standard Model of particle physics may finally be cracking. In this work, we propose a resolution to all these anomalies within a unified framework that sheds light on the origin of neutrino mass. In this model, the LQs that address the flavor anomalies run through the loops and generate neutrino mass at the two-loop order, while satisfying all constraints from collider searches, including those from flavor physics.
No stone can be left unturned in the search for new physics beyond the standard model (BSM). Since no indication of new physics has been found yet, and the resources in hand are limited, we must devise novel avenues for discovery. We propose a Data-Directed Paradigm (DDP), whose principal objective is to direct dedicated analysis efforts towards regions of data which hold the highest potential for discoveries of BSM physics.
The DDP is a different search paradigm, in complete contrast but complementary to the currently dominant theory-driven blind-analysis search paradigm. It could reach discoveries that are currently blocked by the waste of resources involved in the blind-analysis dogma. After investing hundreds of person-years, impressive bounds on BSM scenarios have been set. However, this paradigm has also limited the number of searches conducted, leaving a large part of the data's potential unexplored. One representative example is that of the search for di-lepton resonances, where searches targeting exclusive regions of the data (di-lepton+X) are hardly conducted. By focusing on the data, the DDP allows one to rapidly identify whether the data in a given region exhibit significant deviations from a well-established property of the Standard Model (SM). Thus, ideally, an unlimited number of final states can be tested, considerably expanding our discovery reach.
Based on the work presented in [1] and [2], we propose developing the DDP for two SM properties. The first is the fact that, in the absence of resonances, most invariant-mass distributions are smoothly falling (see the toy sketch after the references below). Following the di-lepton example, we propose identifying which of the many di-lepton+X selections is more likely to hide a resonance. The second property is the flavour symmetry of the SM: the fact that, in the absence of BSM physics, the LHC data should be approximately symmetric under the replacement of prompt electrons with prompt muons. Once consolidated, we will conduct the two DDP searches and explore regions of the ATLAS data that otherwise might remain unexplored.
The DDP search paradigm and its suggested realizations will be discussed.
[1] S. Volkovich, F. De Vito Halevy, S. Bressler, “A data-directed paradigm for BSM searches: the bump-hunting example”, Eur.Phys.J.C 82 (2022) 3, 265
[2] M. Birman, B. Nachman, R. Sebbah, G. Sela, O. Turetz, S. Bressler, “Data-Directed Search for New Physics based on Symmetries of the SM”, [arXiv:2203.07529], submitted for publication.
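To illustrate the first SM property exploited by the DDP (smoothly falling invariant-mass spectra), the toy below scans a falling distribution for a localized excess by comparing each bin against a smooth exponential fit. This is a minimal caricature of the bump-hunting approach of [1], not the published algorithm; all numbers are mock.

```python
# Toy data-directed bump hunt on a smoothly falling invariant-mass spectrum.
import numpy as np

rng = np.random.default_rng(7)
# Mock "data": exponential background plus a small Gaussian bump at 500 GeV.
mass = np.concatenate([rng.exponential(200.0, 100_000) + 100.0,
                       rng.normal(500.0, 15.0, 400)])
counts, edges = np.histogram(mass, bins=np.arange(100, 1000, 20))
centers = 0.5 * (edges[1:] + edges[:-1])

# Smooth reference: fit log(counts) with a straight line (pure exponential).
good = counts > 0
slope, intercept = np.polyfit(centers[good], np.log(counts[good]), 1)
background = np.exp(intercept + slope * centers)

# Local significance per bin, assuming Poisson fluctuations ~ sqrt(B).
z = (counts - background) / np.sqrt(background)
i = np.argmax(z)
print(f"largest deviation: {z[i]:.1f} sigma near m = {centers[i]:.0f} GeV")
```

Ranking many final-state selections by such a deviation score is what lets the DDP direct analysis effort toward the most promising regions.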
We present an overview of searches for new physics with top and bottom quarks in the final state, using proton-proton collision data collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV. The results cover non-SUSY based extensions of the SM, including heavy gauge bosons or excited third generation quarks. Decay channels to vector-like top partner quarks, such as T', are also considered. We explore the use of jet substructure techniques to reconstruct highly boosted objects in events, enhancing the sensitivity of these searches.
The Dark Higgs model is an extension of the Standard Model that describes the phenomenology of dark matter while respecting the SM gauge symmetries. This new approach opens regions of parameter space that are less covered by searches optimized for simpler models of dark matter. In this talk, we present such searches from CMS, focusing on the recent results obtained using the full Run-II dataset collected at the LHC.
Searches in CMS for dark matter in final states with invisible particles recoiling against visible states are presented. Various topologies and kinematic variables are explored, including jet substructure as a means of tagging heavy bosons. In this talk, we focus on the recent results obtained using the full Run-II dataset collected at the LHC.
The LHCb detector at the LHC offers unique coverage of forward rapidities. The detector also has a flexible trigger that enables low-mass states to be recorded with high efficiency, and a precision vertex detector that enables excellent separation of primary interactions from secondary decays. This allows LHCb to make significant (and world-leading) contributions in these regions of phase space in the search for long-lived particles that would be predicted by dark sectors which accommodate dark matter candidates. A selection of results from searches for heavy neutral leptons, dark photons, hidden-sector particles, and dark matter candidates produced in heavy-flavour decays, among others, will be presented, alongside the potential for future measurements in some of these final states.
The presence of a non-baryonic Dark Matter (DM) component in the Universe is inferred from the observation of its gravitational interaction. If Dark Matter interacts weakly with the Standard Model (SM) it could be produced at the LHC. The ATLAS experiment has developed a broad search program for DM candidates, including resonance searches for the mediator which would couple DM to the SM, searches with large missing transverse momentum produced in association with other particles (light and heavy quarks, photons, Z and H bosons, as well as additional heavy scalar particles) called mono-X searches and searches where the Higgs boson provides a portal to Dark Matter, leading to invisible Higgs decays. The results of recent searches on 13 TeV pp data, their interplay and interpretation will be presented.
The discovery of dark matter is one of the challenges of high-energy physics in the collider era. Many Beyond-Standard-Model theories predict dark matter candidates produced in association with a single top-quark in the final state, the so-called mono-top signature. A search for events with one top quark and missing transverse energy in the final state is presented. This analysis explores the fully hadronic decay of the top-quark, requiring large missing transverse energy and a boosted large-radius jet in the final state. A boosted decision tree (BDT) is used to discriminate the background (mostly coming from top-pair production and vector-boson production in association with jets) from mono-top signal events. Two alternative interpretations of the obtained results are considered, namely the production of a generic dark matter particle and the single production of a vector-like T quark. The analysis makes use of data collected with the ATLAS experiment at $\sqrt{s}$ = 13 TeV during LHC Run-2 (2015-2018), corresponding to an integrated luminosity of 139 fb$^{-1}$. This analysis is expected to improve the existing limits on the mass of the dark matter candidate from the considered model. New exclusion limit contours in the model parameter space are also foreseen.
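As a schematic of the BDT-based discrimination described above (not the ATLAS analysis code), one can train a gradient-boosted classifier on a few kinematic features such as missing transverse energy and large-radius-jet mass. The toy below uses scikit-learn with entirely mock signal and background distributions:

```python
# Toy BDT separating a mono-top-like signal from background using two
# mock features: missing transverse energy and large-R jet mass (GeV).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
# Mock background: softer MET, broad jet-mass spectrum.
bkg = np.column_stack([rng.exponential(80.0, n), rng.normal(120.0, 40.0, n)])
# Mock signal: harder MET, jet mass peaked near the top-quark mass.
sig = np.column_stack([rng.exponential(200.0, n) + 150.0,
                       rng.normal(172.0, 15.0, n)])

X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n), np.ones(n)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_tr, y_tr)
print(f"toy signal/background accuracy: {bdt.score(X_te, y_te):.3f}")
```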
Belle has unique reach for a broad class of models that postulate the existence of dark matter particles with MeV–GeV masses. This talk presents recent world-leading physics results from Belle II searches for dark Higgstrahlung and invisible $Z^{\prime}$ decays, as well as the near-term prospects for other dark-sector searches.
The Belle II experiment is taking data at the asymmetric SuperKEKB collider, which operates at the Y(4S) resonance. The vertex detector is composed of an inner two-layer pixel detector (PXD) and an outer four-layer double-sided strip detector (SVD). The SVD-standalone tracking allows the reconstruction and identification, through dE/dx, of low transverse momentum tracks. The SVD information is also crucial to extrapolate the tracks to the PXD layers, for efficient online PXD-data reduction.
A deep knowledge of the system has been gained since the start of operations in 2019 by assessing the high-quality and stable reconstruction performance of the detector. Very high hit efficiency and a large signal-to-noise ratio are monitored via online data-quality plots. The good cluster-position resolution is estimated using the unbiased residuals with respect to the tracks, and it is in reasonable agreement with expectations.
Currently the SVD average occupancy, in its most exposed part, is still < 0.5%, which is well below the estimated limit for acceptable tracking performance. With higher machine backgrounds expected as the luminosity increases, the excellent hit-time information will be exploited for background rejection, improving the tracking performance. The front-end chip (APV25) is operated in “multi-peak” mode, which reads six samples. To reduce background occupancy, trigger dead-time and data size, a 3/6-mixed acquisition mode based on the timing precision of the trigger has been successfully tested in physics runs.
Finally, the SVD dose is estimated from the correlation of the SVD occupancy with the dose measured by the diamonds of the radiation-monitoring and beam-abort system. First radiation-damage effects are measured on the sensor currents and strip noise, although they do not yet affect the performance.
Belle II is a new-generation B-factory experiment operating at the intensity-frontier SuperKEKB accelerator, dedicated to exploring new physics beyond the standard model of elementary particles in the flavor sector. Belle II started data-taking in April 2018, using a synchronous data acquisition (DAQ) system based on pipelined trigger flow control. The Belle II DAQ system is designed to handle a 30 kHz trigger rate under the assumption of a raw event size of 1 MB. Because the event size and rate could exceed the design values depending on the background conditions, and because maintaining the current readout system over the entire Belle II operation period is expected to be difficult, we decided to upgrade the Belle II DAQ readout system with state-of-the-art technology. A PCI-express-based new-generation readout board (PCIe40), originally developed for the upgrades of the LHCb and ALICE experiments, has been adopted for the upgrade of the Belle II DAQ system. PCIe40 can connect to a maximum of 48 front-end electronics boards through multi-gigabit serial links. The PCI-express hard-IP-based direct-memory-access architecture, the newly designed timing and trigger distribution system, and the slow control system make the Belle II readout a compact system. Three of the seven sub-detectors of the Belle II experiment have been operated with the upgraded DAQ system. In this submission we present the development of firmware and software for the new Belle II DAQ system, and its operational performance during physics data-taking.
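The design figures above imply a straightforward aggregate-throughput budget; the arithmetic below uses the abstract's nominal numbers plus an assumed board count (the real detector partitioning is not specified here) to show the scale the PCIe40-based readout has to sustain.

```python
# Aggregate Belle II DAQ throughput at the design trigger rate and event size.
TRIGGER_RATE_HZ = 30_000        # design level-1 trigger rate (from the abstract)
EVENT_SIZE_MB = 1.0             # assumed raw event size (from the abstract)

total_gb_s = TRIGGER_RATE_HZ * EVENT_SIZE_MB / 1000.0
print(f"total raw throughput: ~{total_gb_s:.0f} GB/s")

# With each PCIe40 board serving up to 48 front-end links, the per-board load
# depends on how the detector is partitioned; e.g. an assumed 30 boards:
N_BOARDS = 30                   # illustrative partitioning, not the real count
print(f"per-board average: ~{total_gb_s / N_BOARDS * 1000:.0f} MB/s")
```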
The Belle II experiment at the SuperKEKB e+e- collider started data taking in 2018 with the goal of collecting 50 ab$^{-1}$ during the next several years. The detector is working well with very good performance, but the first years of running have revealed novel challenges and indicate the need for an accelerator consolidation and upgrade to reach the target luminosity of $6\times10^{35}$ cm$^{-2}$s$^{-1}$, which might require a long shutdown in the timeframe of 2026-2027. To fully exploit the physics opportunities, and to ensure reliable and efficient detector operation, Belle II has started to define a detector upgrade program to make the various sub-detectors more robust and performant even in the presence of high backgrounds, facilitating SuperKEKB running at high luminosity.
This upgrade program will possibly include the replacement of some readout electronics, the upgrade of some detector elements, and may also involve the substitution of entire detector sub-systems such as the vertex detector. The process has started with the submission of Expressions Of Interest that are being reviewed internally and will proceed towards the preparation of a Conceptual Design Report currently planned for the beginning of 2023. This paper will cover the full range of proposed upgrade ideas and their development plans.
The addition of a Forward Calorimeter (FoCal) to the ALICE experiment is proposed for LHC Run 4 to provide unique constraints on the low-x gluon structure of protons and nuclei via forward measurements of direct photons. A new high-resolution electromagnetic Si-W calorimeter using both Si-pad and Si-pixel layers is being developed to discriminate single photons from pairs of photons originating from $\pi^0$ decays. A conventional sampling hadron calorimeter is foreseen for jet measurements and the isolation of direct photons. In this presentation, we will report on results from test beam campaigns in 2019 and 2021 at DESY and CERN with Si-pad and pixel modules, a first prototype for the hadronic calorimeter, and a full-pixel calorimetry prototype based on ALPIDE sensors.
After the successful installation and first operation of the upgraded Inner Tracking System (ITS2), which consists of about 10 m$^2$ of monolithic silicon pixel sensors, ALICE is pioneering the usage of bent, wafer-scale pixel sensors for the ITS3 for Run 4. Sensors larger than typical reticle sizes can be produced using the technique of stitching. At thicknesses of about 30 µm, the silicon is flexible enough to be bent to radii of the order of 1 cm. By cooling such sensors with a forced air flow, it becomes possible to construct truly cylindrical layers which consist practically only of the silicon sensors. The reduction of the material budget and the improved pointing resolution will allow new measurements, in particular of heavy-flavour decays and electromagnetic probes. In this presentation, we will report on the sensor developments, the performance of bent sensors in test beams, and the mechanical studies on truly cylindrical layers.
ALICE 3 is proposed as the next-generation experiment to address unresolved questions about the quark-gluon plasma by precise measurements of heavy-flavour probes as well as electromagnetic radiation in heavy-ion collisions in LHC Runs 5 and 6. In order to achieve the best possible pointing resolution a concept for the installation of a high-resolution vertex tracker in the beampipe is being developed. It is surrounded by a silicon-pixel tracker covering roughly 8 units of pseudorapidity. To achieve the required particle identification performance, a combination of a time-of-flight system and a Ring-Imaging Cherenkov detector is foreseen. Further detectors, such as an electromagnetic calorimeter, a muon identifier, and a dedicated forward detector for ultra-soft photons, are being studied. In this presentation, we will explain the detector concept and its physics reach as well as discuss the R&D challenges.
In this contribution, the nuclear modification factors ($R_\mathrm{AA}$) of prompt charm hadrons and of leptons from heavy-flavour hadron decays, measured in Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV by the ALICE Collaboration, are presented. The measurement of heavy-flavour leptons in Xe-Xe collisions is also discussed. Heavy quarks are a very suitable probe of the quark-gluon plasma (QGP) produced in heavy-ion collisions, since they are mainly produced in hard-scattering processes, on timescales shorter than the QGP formation time. Measurements of charm-hadron production in nucleus-nucleus collisions are therefore useful to study the properties of in-medium charm-quark energy loss via comparison with theoretical models. Moreover, the comparison of different colliding systems provides insight into the dependence on the collision geometry.
Models describing the heavy-flavour transport and energy loss in a hydrodynamically expanding QGP also require a precise modelling of the in-medium hadronisation of heavy quarks, which is investigated via the measurement of prompt $\mathrm{D_s^+}$ mesons and $\Lambda_\mathrm{c}^{+}$ baryons.
In addition, the measurement of the azimuthal anisotropy of strange and non-strange D mesons is discussed. The second harmonic coefficient provides information about the degree of thermalisation of charm quarks in the medium, while the third harmonic is sensitive to event-by-event fluctuations in the initial stage of the collision.
A thorough systematic comparison of experimental measurements with phenomenological model calculations will be performed in order to disentangle different model contributions and provide important constraints on the charm-quark diffusion coefficient $D_s$ in the QGP.
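For reference, the nuclear modification factor used throughout this discussion is $R_\mathrm{AA}(p_\mathrm{T}) = ({\rm d}N_\mathrm{AA}/{\rm d}p_\mathrm{T}) / (\langle T_\mathrm{AA}\rangle\, {\rm d}\sigma_\mathrm{pp}/{\rm d}p_\mathrm{T})$. A minimal sketch of its evaluation from binned yields, with placeholder numbers chosen only to illustrate the arithmetic:

```python
# Minimal R_AA evaluation from per-event Pb-Pb yields and a pp reference.
# All numbers are placeholders chosen only to illustrate the arithmetic.
import numpy as np

pt_bins = np.array([2, 4, 6, 8, 12])                    # GeV/c bin edges
yield_aa = np.array([5.0e-3, 8.0e-4, 1.2e-4, 1.5e-5])   # dN/dpT per Pb-Pb event
xsec_pp = np.array([1.0e-1, 2.0e-2, 4.0e-3, 7.0e-4])    # dsigma/dpT in pp (mb/GeV)
t_aa = 0.06                                             # <T_AA> in mb^-1 (placeholder)

r_aa = yield_aa / (t_aa * xsec_pp)
for lo, hi, r in zip(pt_bins[:-1], pt_bins[1:], r_aa):
    print(f"{lo}-{hi} GeV/c: R_AA = {r:.2f}")
# R_AA < 1 at intermediate and high pT signals in-medium energy loss.
```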
We report the first measurement of the azimuthal angular correlation between jets and $D^0$ mesons in pp and PbPb collisions. The measurement is performed using jets with $p_\mathrm{T} > 60$ GeV and $D^0$ mesons with $p_\mathrm{T} > 4$ GeV. The azimuthal angle difference between jets and $D^0$ mesons ($0<\Delta\phi<\pi$) is sensitive to medium-induced charm diffusion, charm-quark energy loss, and possible rare large-angle scattering between charm quarks and the quasi-particles of the QGP. We also report the radial profile of the charm quark with respect to the jet axis, measured differentially in centrality and $D^0$ $p_\mathrm{T}$. This analysis is performed with the high-statistics Run 2 data collected by the CMS detector.
Heavy quarks are primarily produced via initial hard scatterings, and thus carry information about the early stages of the Quark-Gluon Plasma (QGP). Measurements of the azimuthal anisotropy of the final-state heavy-flavor hadrons provide information about the initial collision geometry, its fluctuations, and, more importantly, the mass dependence of energy loss in the QGP. Because the bottom quark is heavier than the charm quark, separate measurements of charm- and bottom-hadron azimuthal anisotropy can shed new light on the mass dependence of the interaction between heavy quarks and the medium. Owing to the large branching ratio and large $D^0$ mass, measurements of $D^0$ mesons coming from $B$-hadron decays (nonprompt $D^0$) can cover a broad kinematic range and serve as a good proxy for the parent bottom hadrons. In this talk we report measurements of the elliptic ($v_2$) and triangular ($v_3$) azimuthal anisotropy coefficients of prompt $D^0$ mesons, and the first such measurements for nonprompt $D^0$ mesons, in PbPb collisions at $\sqrt{s_{_{\mathrm{NN}}}} =$ 5.02 TeV. The measurements are performed as functions of transverse momentum $p_\mathrm{T}$ in three centrality classes, from central to midcentral collisions. Compared to the prompt $D^0$ results, the nonprompt $D^0$ $v_2$ coefficients are systematically lower but have a similar dependence on $p_\mathrm{T}$ and centrality. A non-zero $v_3$ coefficient of the nonprompt $D^0$ is observed. The results are compared with theoretical predictions; the comparison could provide new constraints on the theoretical description of the interaction between heavy quarks and the medium.
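The $v_2$ and $v_3$ coefficients discussed here parametrize the azimuthal yield as a Fourier series, ${\rm d}N/{\rm d}\Delta\phi \propto 1 + 2v_2\cos(2\Delta\phi) + 2v_3\cos(3\Delta\phi)$. A minimal sketch of extracting them from a toy distribution (real analyses must additionally handle event-plane resolution and non-flow correlations):

```python
# Extract toy v2 and v3 coefficients from an azimuthal distribution
# dN/dphi ~ 1 + 2 v2 cos(2 phi) + 2 v3 cos(3 phi), phi relative to event plane.
import numpy as np

rng = np.random.default_rng(1)
v2_true, v3_true = 0.08, 0.03

# Sample angles by accept-reject against the Fourier-modulated density.
phi = rng.uniform(-np.pi, np.pi, 2_000_000)
w = 1 + 2 * v2_true * np.cos(2 * phi) + 2 * v3_true * np.cos(3 * phi)
phi = phi[rng.uniform(0, w.max(), phi.size) < w]

# The event-averaged cosines directly estimate the coefficients.
print(f"v2 = {np.mean(np.cos(2 * phi)):.4f}  (true {v2_true})")
print(f"v3 = {np.mean(np.cos(3 * phi)):.4f}  (true {v3_true})")
```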
In this contribution, the final measurements of the centrality dependence of the $R_{\rm AA}$ of non-prompt $\mathrm{D}^0$ mesons and of electrons from beauty-hadron decays in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will be presented. These measurements provide important constraints on the in-medium mass-dependent energy loss and hadronization of the beauty quark. The integrated non-prompt $\mathrm{D}^0$ $R_{\rm AA}$ will be presented for the first time and compared with the prompt $\mathrm{D}^0$ one. This comparison will shed light on possible differences in shadowing effects between charm and beauty quarks. In addition, the first measurements of non-prompt $\mathrm{D}_{s}$ production in central and semi-central Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will be discussed. The non-prompt $\mathrm{D}_{s}$ measurements provide additional information on the production and hadronization of $\mathrm{B}_{s}$ mesons. Finally, the first measurement of the non-prompt D-meson elliptic flow in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will also be discussed. It will help to further investigate the degree of thermalization of the beauty quark in the hot and dense QCD medium.
Measurements of jet-constituent distributions for light- and heavy-flavor jets are used successfully for experimental QCD studies in high-energy pp collisions at the LHC. These studies are now extended to explore the flavor dependence of the jet quenching phenomenon. Jet quenching, one of the signatures of the quark-gluon plasma, is well established through experimental measurements at RHIC and the LHC. However, the details of the expected dependence of jet-medium interactions on the flavor of the parton initiating the shower are not yet settled. This talk presents the first b-jet shape measurements in 5 TeV PbPb and pp collisions collected by CMS. Comparisons with the jet shapes of inclusive jets, produced predominantly by light quarks and gluons, allow an experimental observation of a “dead cone” effect, i.e. a suppression of the in-jet transverse momentum carried by constituents at small radial distance R from the jet axis. A similar comparison at large distances provides insight into the role of the parton mass in the energy loss and a possible mass dependence of the medium response.
The beauty quark is one of the best probes of the Quark-Gluon Plasma (QGP). Its large mass allows probing the QGP transport properties in the heavy-flavor sector through energy loss and diffusion. However, the hadronization of beauty is not as well understood as that of charm, due to the smaller cross section. Clarifying the hadronization mechanism is crucial for understanding the transport properties of the QGP extracted from beauty-hadron (and decay-product) spectra. In this talk, we will present new results on the nuclear modification factors of $B^0_s$ and $B^+$ mesons and their yield ratios in pp and PbPb collisions at 5.02 TeV, using the data recorded with the CMS detector in 2017 and 2018. The accuracy is significantly improved with respect to the previously published results. The reported B-meson nuclear modification factors over an extended transverse momentum range will provide important information about the diffusion of the beauty quark and the flavor dependence of in-medium energy loss. The $B^0_s/B^+$ yield ratio in pp and PbPb collisions can shed new light on the mechanisms of beauty recombination in vacuum and in medium. It will also provide an important input for understanding the hadronization mechanism of the beauty quark, testing the QCD factorization theorem at LHC energies.
We have investigated the many-body dynamics of $D$ and $\bar{B}$ mesons in a thermal medium by applying an effective field theory based on chiral and heavy-quark spin symmetries. Exploiting these symmetries within kinetic theory, we have derived an off-shell Fokker-Planck equation which incorporates information on the full spectral functions of these states.
I will present the latest results on heavy-flavor transport coefficients below the chiral restoration temperature. I will also detail the origin of the in-medium reactions which contribute to the heavy-meson thermal width and energy loss, including the soft-pion emission (Bremsstrahlung) process.
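The Fokker-Planck description above is equivalent, in the standard way, to a Langevin process for the heavy-meson momentum, with a drag coefficient and a momentum-diffusion coefficient. The sketch below evolves a toy one-dimensional Langevin equation with constant, assumed transport coefficients, purely to illustrate the thermalization dynamics those coefficients encode (the actual work uses temperature- and momentum-dependent, off-shell coefficients):

```python
# Toy 1D Langevin evolution of a heavy meson in a thermal medium:
#   dp = -A * p * dt + sqrt(2 * kappa * dt) * xi,   xi ~ N(0, 1)
# with constant drag A and momentum diffusion kappa (illustrative values only).
import numpy as np

rng = np.random.default_rng(3)
A = 0.05                  # drag coefficient in (fm/c)^-1 (assumed)
KAPPA = 0.15              # momentum diffusion in GeV^2 per fm/c (assumed)
DT, N_STEPS = 0.02, 500   # time step (fm/c) and number of steps

p = np.full(10_000, 5.0)  # ensemble of mesons starting at p = 5 GeV
for _ in range(N_STEPS):
    p += -A * p * DT + np.sqrt(2 * KAPPA * DT) * rng.standard_normal(p.size)

# Drag degrades the mean momentum while diffusion builds up a thermal spread.
print(f"<p> = {p.mean():.2f} GeV, sigma_p = {p.std():.2f} GeV after "
      f"{N_STEPS * DT:.0f} fm/c")
```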
This talk will cover the latest searches for nonresonant double Higgs boson production at CMS and their interpretations in terms of the Higgs self-coupling. The talk will include the latest combination(s) of HH search channels.
The measurement of the pair production of Higgs bosons is one of the key goals of the LHC. In addition, beyond-the-standard-model theories involving extra spatial dimensions predict resonances with large branching fractions to a pair of Higgs bosons and negligible branching fractions to light fermions. We present an overview of searches for resonant and nonresonant Higgs boson pair production at high transverse momentum, using proton-proton collision data collected with the CMS detector at the CERN LHC. These results use novel analysis techniques to identify and reconstruct the highly boosted final states that arise in these topologies.
In the Standard Model, the ground state of the Higgs field is not found at zero but instead corresponds to one of the degenerate solutions minimising the Higgs potential. In turn, this spontaneous electroweak symmetry breaking provides a mechanism for the mass generation of nearly all fundamental particles. The Standard Model makes a definite prediction for the Higgs boson self-coupling and thereby the shape of the Higgs potential. Experimentally, both can be probed through the production of Higgs boson pairs (HH), a rare process that presently receives a lot of attention at the LHC. In this talk, the latest HH searches by the ATLAS experiment are reported, with emphasis on the results obtained with the full LHC Run 2 dataset at 13 TeV. In the case of non-resonant HH searches, results are interpreted both in terms of sensitivity to the Standard Model and as limits on the Higgs boson self-coupling.
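In the SM, the self-coupling referred to above is fixed by the measured Higgs mass and vacuum expectation value through $\lambda = m_H^2/(2v^2)$, with the potential containing the terms $\lambda v h^3 + \tfrac{1}{4}\lambda h^4$ after symmetry breaking. A one-line numerical check:

```python
# SM prediction for the Higgs self-coupling from the potential
# V = -mu^2 |Phi|^2 + lambda |Phi|^4, which gives lambda = m_H^2 / (2 v^2).
M_H = 125.25   # Higgs boson mass in GeV
V_EV = 246.22  # electroweak vacuum expectation value in GeV

lam = M_H**2 / (2 * V_EV**2)
trilinear = lam * V_EV          # coefficient of the h^3 term, lambda * v
print(f"lambda = {lam:.3f}, trilinear coupling lambda*v = {trilinear:.1f} GeV")
# -> lambda ~ 0.13; HH production measures deviations from this prediction.
```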
The most precise measurements of single and double Higgs boson production cross sections are obtained from a combination of measurements performed in different Higgs boson production and decay channels. While double Higgs production can be used to directly constrain the Higgs boson self-coupling, this parameter can also be constrained by exploiting higher-order electroweak corrections to single Higgs boson production. A combined measurement of both results yields the overall highest precision, and reduces model dependence by allowing for the simultaneous determination of the single Higgs boson couplings. Results for this combined measurement are presented based on pp collision data collected at a centre-of-mass energy of 13 TeV with the ATLAS detector.
Recent HL-LHC studies that were performed by CMS within Snowmass activities are presented. Updates cover different physics topics from Higgs and SM processes.
The large dataset of about 3 $\rm ab^{-1}$ that will be collected at the High Luminosity LHC (HL-LHC) will be used to measure Higgs boson processes in detail. Studies based on current analyses have been carried out to understand the expected precision and limitations of these measurements. The large dataset will also allow for better sensitivity to di-Higgs processes and the Higgs boson self-coupling. This talk will present the prospects for Higgs and di-Higgs results with the ATLAS detector at the HL-LHC.
We study the Higgs boson decays $h \to c\bar{c}$, $b\bar{b}$, $b\bar{s}$, $\gamma\gamma$ and $gg$ in the Minimal Supersymmetric Standard Model (MSSM) with general quark flavor violation (QFV), identifying $h$ with the Higgs boson with a mass of 125 GeV. We compute the widths of the decays $h \to c\bar{c}$, $b\bar{b}$, $b\bar{s}$ ($s\bar{b}$) at full one-loop level in the MSSM with QFV. For the decays $h \to \gamma\gamma$ and $h \to gg$ we compute the widths at NLO QCD level. We perform a systematic MSSM parameter scan respecting all relevant constraints, i.e. theoretical constraints from vacuum stability conditions and experimental constraints, such as those from $K$- and $B$-meson data and electroweak precision data, as well as limits on supersymmetric (SUSY) particle masses and the 125 GeV Higgs boson data from the LHC experiments. From the parameter scan, we find the following:
(1) DEV$(h \to c\bar{c})$ and DEV$(h \to b\bar{b})$ can be very large simultaneously: DEV$(h \to c\bar{c})$ can be as large as about $\pm 60\%$ and DEV$(h \to b\bar{b})$ as large as about $\pm 20\%$. Here DEV$(h \to XY)$ is the deviation of the decay width $\Gamma(h \to XY)$ in the MSSM from the SM prediction: DEV$(h \to XY) = \Gamma(h \to XY)_{\rm MSSM}/\Gamma(h \to XY)_{\rm SM} - 1$.
(2) The QFV decay branching ratio BR$(h \to b\bar{s}/\bar{b}s)$ can be as large as about 0.2% in the MSSM, while it is almost zero in the SM. The sensitivity of the ILC(250 + 500 + 1000) to this branching ratio could be about 0.1% at $4\sigma$ signal significance.
(3) DEV$(h \to \gamma\gamma)$ and DEV$(h \to gg)$ can be large simultaneously: DEV$(h \to \gamma\gamma)$ can be as large as about $+4\%$ and DEV$(h \to gg)$ as large as about $-15\%$.
(4) There is a very strong correlation between DEV$(h \to \gamma\gamma)$ and DEV$(h \to gg)$, which arises because the stop-loop (stop-scharm mixture loop) contributions dominate both deviations.
(5) The deviation of the width ratio $\Gamma(h \to \gamma\gamma)/\Gamma(h \to gg)$ in the MSSM from the SM value can be as large as about $+20\%$.
(6) All of these large deviations in the $h$ decays are due to large scharm-stop mixing and large trilinear couplings involving stops and scharms, $T_{U23}$, $T_{U32}$, $T_{U33}$, as well as large sstrange-sbottom mixing and large trilinear couplings involving sstranges and sbottoms, $T_{D23}$, $T_{D32}$, $T_{D33}$.
(7) Future lepton colliders such as the ILC, CLIC, CEPC and FCC-ee could observe such large deviations from the SM at high signal significance.
(8) If the deviation pattern shown here were indeed observed at lepton colliders, it would strongly suggest the discovery of QFV SUSY (the MSSM with QFV).
This work updates the following papers and contains many new findings:
Phys. Rev. D 91 (2015) 015007 [arXiv:1411.2840 [hep-ph]];
JHEP 06 (2016) 143 [arXiv:1604.02366 [hep-ph]];
Int. J. Mod. Phys. A 34 (2019) 1950120 [arXiv:1812.08010 [hep-ph]];
PoS(EPS-HEP2021)594 (2021) [arXiv:2111.02713 [hep-ph]].
In the absence of direct observations of physics beyond the Standard Model (BSM) at the LHC, the interpretation of Standard Model measurements in the framework of an Effective Field Theory (EFT) represents the most powerful tool to identify BSM phenomena through tiny deviations of the measurements from the SM predictions, to interpret them in terms of generic new interactions, and to place model-independent constraints on new physics scenarios. This talk presents various EFT interpretations of individual and combined measurements in the Higgs sector by the ATLAS experiment.
The Deep Underground Neutrino Experiment (DUNE), a next-generation long-baseline neutrino oscillation experiment, is a powerful tool for low-energy physics searches. DUNE will be uniquely sensitive to the electron-neutrino-flavour component of the burst of neutrinos expected from the next Galactic core-collapse supernova, and will also be capable of detecting solar neutrinos. DUNE will have four modules with a total liquid argon mass of 70 kton, placed 1.5 km underground at the Sanford Underground Research Facility in the USA. These modules are being designed exploiting different liquid argon time projection chamber technologies, based on physics requirements that take into account the particularities of the low-energy physics searches.
Supernova (SN) explosions are the most powerful cosmic factories of all-flavor, MeV-scale neutrinos. The presence of a sharp time structure during the first emission phase, the so-called neutronization burst in the electron-neutrino time distribution, makes this channel a very powerful one. Large liquid argon underground detectors, like the future Deep Underground Neutrino Experiment (DUNE), will provide precision measurements of the time dependence of the electron-neutrino flux. In this contribution, I derive a new neutrino-mass sensitivity attainable at the future DUNE far detector, obtained by measuring the time-of-flight delay in the SN neutrino signal from a future SN collapse in our Galactic neighborhood. A comparison of the sensitivities achieved for the two neutrino mass orderings is discussed, as well as the effects of propagation in the Earth's matter.
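The time-of-flight delay underlying this sensitivity is the standard relativistic result $\Delta t \simeq (L/2c)\,(m_\nu c^2/E_\nu)^2$; the sketch below evaluates it for an illustrative Galactic supernova distance and typical SN neutrino energies (these are not the DUNE analysis inputs).

```python
# Time-of-flight delay of a massive neutrino relative to light:
#   dt ~ (L / 2c) * (m c^2 / E)^2
# Distance and energies below are illustrative choices, not analysis inputs.
KPC_M = 3.0857e19            # one kiloparsec in metres
C = 2.9979e8                 # speed of light in m/s

def delay_s(m_ev: float, e_mev: float, l_kpc: float) -> float:
    ratio = (m_ev * 1e-6) / e_mev          # m c^2 / E, both in MeV
    return (l_kpc * KPC_M / C) * 0.5 * ratio**2

for e in (5.0, 10.0, 30.0):
    print(f"E = {e:>4.0f} MeV: dt = {delay_s(1.0, e, 10.0) * 1e3:.2f} ms "
          f"for m = 1 eV at 10 kpc")
# Lower-energy neutrinos lag more, distorting the sharp neutronization burst.
```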
The ProtoDUNE single-phase detector (ProtoDUNE-SP) is a prototype liquid argon time projection chamber (LArTPC) for the first far detector module of the Deep Underground Neutrino Experiment (DUNE). ProtoDUNE-SP is installed at the CERN Neutrino Platform. Between October 10 and November 11, 2018, ProtoDUNE-SP recorded approximately 4 million events in a beam that delivers charged pions, kaons, protons, muons and electrons with momenta in the range 0.3 GeV/c to 7 GeV/c. After the beam runs ended, ProtoDUNE-SP continued to collect cosmic-ray and calibration data until July 2020. In this talk, we will review the results from analyzing the beam and cosmic-ray data, including detector calibration, hadron-argon cross-section measurements and the seasonal variation of the cosmic-ray muon rate.
The Deep Underground Neutrino Experiment (DUNE) is part of the next generation of neutrino oscillation experiments that seek to definitively answer key questions in the field. It will utilize four 17-kt modules of Liquid Argon Time Projection Chambers (LArTPCs), enabling mm-scale spatial resolution for unprecedented sensitivity to neutrino oscillation parameters, as well as for studies related to proton decay and supernova neutrinos. For this purpose, a newly proposed Vertical Drift (VD) configuration is being planned for the second DUNE module, in contrast to the Horizontal Drift (HD) configuration of the first module. The VD detector involves a suspended cathode dividing the TPC into two drift volumes oriented vertically above and below the cathode, in an electric field of 500 V/cm. Unlike the HD design, where a multi-wire-plane readout is employed, the anodes here consist of a grid of double-sided perforated PCBs. As electrons pass through the perforations, charge is induced and collected on parallel strips etched on different layers of the PCBs, oriented in different directions on each layer. As part of prototyping designs for such a detector, a coldbox demonstrator housed in the NP04 platform at CERN is collecting cosmic data. The prototypes will seek to ensure favorable readout conditions as well as test different designs for the PCBs and strip orientations. In parallel, simulation studies are underway for the Far Detector module to assess various performance metrics related to selection and reconstruction efficiency. In this talk, I shall provide an overview of these efforts, with an emphasis on the analysis of cosmic data from the coldbox demonstrators and its comparison with the simulation, as well as the development of a deep-learning-based neutrino flavor tagger to maximize sensitivity to the oscillation measurements and help DUNE achieve its primary physics goals.
Neutrino oscillations in matter offer a novel path to investigate new physics. The most recent data from the two long-baseline accelerator experiments, NO$\nu$A and T2K, show a discrepancy in the standard 3-flavor scenario. Along the same line of discussion, we intend to explore the next generation of long-baseline experiments: T2HK and DUNE. We investigate the sensitivities to the relevant NSI couplings ($|\epsilon_{e \mu}|$, $|\epsilon_{e \tau}|$) and the corresponding CP-phases ($\phi_{e \mu}$ and $\phi_{e \tau}$). While both experiments are sensitive to non-standard interactions (NSI) of the flavor-changing type arising from the $e-\mu$ and $e-\tau$ sectors, we show that DUNE is more sensitive to these NSI parameters than T2HK. At the same time, we aim to explore the impact of non-standard neutrino interactions on the sensitivity to the standard CP-phase $\delta_{CP}$ and the atmospheric mixing angle $\theta_{23}$, in both the normal and inverted hierarchies. Our analysis also exhibits the difference in oscillation probabilities for the two experiments when NSI are included.
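In the standard parametrization, these vector NSI couplings enter the flavor-basis Hamiltonian as $H = \frac{1}{2E}\,U\,{\rm diag}(0,\Delta m^2_{21},\Delta m^2_{31})\,U^\dagger + V_{CC}\,(\delta_{\alpha e}\delta_{\beta e} + \epsilon_{\alpha\beta})$. A minimal numerical sketch with PDG-like oscillation parameters and an illustrative $\epsilon_{e\mu}$ (not a fitted value):

```python
# Flavor-basis 3-nu Hamiltonian with a vector NSI term eps_{e mu}.
# Oscillation parameters are PDG-like; the NSI value is illustrative.
import numpy as np

E = 2.5e9                                  # neutrino energy in eV (2.5 GeV)
dm21, dm31 = 7.4e-5, 2.5e-3                # mass-squared splittings in eV^2
th12, th13, th23 = 0.59, 0.15, 0.84        # mixing angles in rad
dcp = -1.6                                 # assumed delta_CP

def rot(i, j, th, delta=0.0):
    r = np.eye(3, dtype=complex)
    r[i, i] = r[j, j] = np.cos(th)
    r[i, j] = np.sin(th) * np.exp(-1j * delta)
    r[j, i] = -np.conj(r[i, j])
    return r

U = rot(1, 2, th23) @ rot(0, 2, th13, dcp) @ rot(0, 1, th12)   # PMNS matrix
H_vac = U @ np.diag([0.0, dm21, dm31]).astype(complex) @ U.conj().T / (2 * E)

V_CC = 7.6e-14 * 2.8 * 0.5                 # ~ sqrt(2) G_F n_e in eV (rho ~ 2.8 g/cc)
eps = np.zeros((3, 3), dtype=complex)
eps[0, 1] = 0.05 * np.exp(1j * 0.3)        # illustrative |eps_emu| and phase
eps += eps.conj().T                        # keep the Hamiltonian Hermitian
H_mat = V_CC * (np.diag([1.0, 0.0, 0.0]) + eps)

print("total H (eV):\n", np.round(H_vac + H_mat, 18))
```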
The experimental observation of the phenomenon of neutrino oscillations was the first clear hint of physics beyond the Standard Model (SM). The SM therefore needs an extension, often called beyond-the-SM (BSM) physics, to incorporate neutrino masses and mixing. Models describing BSM physics usually come with additional unknown couplings of neutrinos, termed Non-Standard Interactions (NSIs) [1]. The idea of NSI was initially proposed by Wolfenstein [2], who explored how a non-standard coupling of neutrinos to a vector field can give rise to matter effects in neutrino oscillations. Furthermore, there is also the intriguing prospect of neutrinos coupling to a scalar field, called scalar NSI [3, 4]. The effect of this type of scalar NSI appears as a medium-dependent correction to the neutrino masses, instead of as a matter potential. Hence scalar NSI may offer unique phenomenology in neutrino oscillations.
In this work, we have performed a synergy study of the effects of scalar NSI at various proposed Long Baseline (LBL) experiments, viz. DUNE [5], T2HK [6] and T2HKK [7]. As the effect of scalar NSI scales linearly with the environmental matter density, LBL experiments, whose beams traverse long paths through the Earth, are suitable candidates for probing its effects (a structural sketch of scalar NSI is given after the references below). We found that the effect of scalar NSI on the oscillation probabilities of LBL experiments is notable. In addition, scalar NSI can significantly affect the CP-violation sensitivities as well as the $\theta_{23}$ octant sensitivities of these LBL experiments. Finally, we have also performed a combined sensitivity study of these experiments towards constraining the scalar NSI parameters.
References
[1] O. G. Miranda and H. Nunokawa, Non standard neutrino interactions: current status and future prospects, New Journal of Physics 17 (2015) 095002.
[2] L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D 17 (1978) 2369.
[3] S.-F. Ge and S. J. Parke, Scalar Nonstandard Interactions in Neutrino Oscillation, Phys. Rev. Lett. 122 (2019) 211801 [1812.08376].
[4] K. Babu, G. Chauhan and P. Bhupal Dev, Neutrino nonstandard interactions via light scalars in the Earth, Sun, supernovae, and the early Universe, Phys. Rev. D 101 (2020) 095029 [1912.13488].
[5] DUNE collaboration, Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume IV Far Detector Single-phase Technology, JINST 15 (2020) T08010 [2002.03010].
[6] Hyper-Kamiokande Proto- collaboration, Physics potential of a long-baseline neutrino oscillation experiment using a J-PARC neutrino beam and Hyper-Kamiokande, PTEP 2015 (2015) 053C02 [1502.05199].
[7] Hyper-Kamiokande collaboration, Physics potentials with the second Hyper-Kamiokande detector in Korea, PTEP 2018 (2018) 063C01 [1611.06118].
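The structural difference between scalar NSI and the standard vector NSI, which drives the phenomenology discussed above, can be made concrete in a toy Hamiltonian: vector NSI adds to the matter potential, while scalar NSI shifts the mass matrix itself, so its effect scales linearly with the matter density and does not vanish at low energy in the same way. All numbers below are placeholders for illustration only.

```python
# Structural contrast between vector and scalar NSI, with placeholder numbers:
# vector NSI adds a potential term; scalar NSI shifts the mass matrix itself.
import numpy as np

E = 2.5e9                          # neutrino energy in eV
M = np.diag([0.0, 0.0086, 0.05])   # toy neutrino mass matrix in eV
V = 1.1e-13                        # standard matter potential in eV

# Vector NSI: H = M M^dag / 2E + V (diag(1,0,0) + eps), eps dimensionless.
eps = 0.05 * np.ones((3, 3))       # toy flavor-universal eps for illustration
H_vector = M @ M.T / (2 * E) + V * (np.diag([1.0, 0.0, 0.0]) + eps)

# Scalar NSI: M -> M + dM, with dM proportional to the matter density rho.
rho_rel = 1.0                      # density relative to a reference (linear scaling)
dM = 1e-4 * rho_rel * np.ones((3, 3))   # placeholder density-induced shift, in eV
M_eff = M + dM
H_scalar = M_eff @ M_eff.T / (2 * E) + V * np.diag([1.0, 0.0, 0.0])

print("vector-NSI H (eV):\n", np.round(H_vector, 17))
print("scalar-NSI H (eV):\n", np.round(H_scalar, 17))
```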
The measurement of the matter/antimatter asymmetry in the leptonic sector is one of the highest priorities of the particle physics community in the coming decades. The ESSnuSB collaboration proposes to design a long-baseline experiment based on the European Spallation Source (ESS) at Lund in Sweden. This experiment will be able to measure the $\delta_{CP}$ parameter with an unprecedented sensitivity, thanks to a very intense neutrino superbeam and to the observation of the $\nu_\mu \to \nu_e$ oscillation at the second oscillation maximum. To reach this goal, the ESS facility will be upgraded to provide an additional 5 MW proton beam by doubling the LINAC pulse frequency from 14 Hz to 28 Hz. The pulse time width will be reduced from 2.86 ms to 1.3 microseconds by an accumulator ring, and the beam will be shared in four parts by a switchyard before entering the target station. The produced neutrino superbeam will be sent to a large far detector with a 538 kt fiducial mass, based on water Cherenkov technology.
In this talk, a global overview of the project with its physics potentials will be reviewed and additional possibilities offered by this high intensity facility for complementary R&D activities will also be discussed.
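The beam-power figures above fix the protons-per-pulse scale through $P = E_p\,N_{\rm ppp}\,f_{\rm rep}$. The sketch below evaluates it assuming the nominal 2 GeV ESS proton energy and that the extra 5 MW is carried by the 14 additional pulses per second (both assumptions, consistent with but not stated in the abstract), and shows the compression factor delivered by the accumulator ring.

```python
# Protons-per-pulse and pulse-compression arithmetic for ESSnuSB.
# Assumes the nominal 2 GeV ESS proton energy and that the extra 5 MW
# is delivered in 14 of the 28 Hz pulses (the neutrino-production share).
E_PROTON_GEV = 2.0
POWER_W = 5e6
PULSE_RATE_HZ = 14.0
E_CHARGE = 1.602e-19            # J per eV

energy_per_pulse_j = POWER_W / PULSE_RATE_HZ
protons_per_pulse = energy_per_pulse_j / (E_PROTON_GEV * 1e9 * E_CHARGE)
print(f"protons per pulse: ~{protons_per_pulse:.1e}")

# The accumulator ring compresses each linac pulse before extraction:
compression = 2.86e-3 / 1.3e-6
print(f"pulse compression: 2.86 ms -> 1.3 us (factor ~{compression:.0f})")
```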
The nuSTORM facility will provide $\nu_e$ and $\nu_\mu$ beams from the decay of low-energy muons confined within a storage ring. The central momentum of the muon beam is variable, while the momentum spread is limited. The resulting neutrino and anti-neutrino energy spectra can be precisely calculated from the muon beam parameters, and since the decay of the captured muons is well separated in time from that of their parent pions, wrong-flavour neutrino backgrounds can be eliminated. nuSTORM can thus provide the ultimate experimental program of neutrino scattering measurements. The cross section for scattering on complex nuclei is sensitive to the energy and momentum transfers. Data with both muons and electrons in the final state are therefore very valuable. Sensitivity to physics beyond the Standard Model (BSM) is provided by nuSTORM's unique features, allowing sensitive searches for short-baseline flavour transitions, light sterile neutrinos, non-standard interactions, and non-unitarity. In synergy with the scattering program, new-physics searches would also profit from measurements of exclusive final states, allowing BSM neutrino interactions to be probed in neutrino-electron scattering and by searching for exotic final states. The status of the development of nuSTORM will be reviewed in the context of the renewed effort to develop high-brightness stored muon beams and as a route to very-high-energy lepton-antilepton collisions at a muon collider.
LHCb has collected the world's largest sample of charmed hadrons. This sample is used to measure $D^0 -\overline{D}^0$ mixing and to search for $C\!P$ violation in mixing and interference. New measurements from several decay modes are presented, as well as prospects for future sensitivities.