The 2023 meeting of the ECFA study on physics and experiments at e+e- Higgs/EW/Top Factories will take place in Paestum (Salerno) at the Ariston Hotel from October 11 to 13, 2023.
This meeting is intended to be an in-person meeting.
Registration is now open under the appropriate tab - see "ECFA Workshop 2023 registration".
The registration fee, including full board and single-room accommodation, is 460 Euros. Alternatively, shared double-room accommodation is available (400 Euros per person).
It is also possible to book accommodation in another hotel outside of the conference hotel/venue. In this case, a registration fee of 250 Euros (including meals and coffee breaks) must be paid. More details can be found under the "Venue and accommodation options" tab.
Please note that, in case you plan to extend your stay, either before or after the workshop, you should arrange it independently.
Please note that the early registration deadline expired on September 20th. A surcharge of 50 Euros now applies to registrations; the final registration deadline is October 4th 2023.
All payments will be handled upon registration.
Badges, confirmation of participation, and payment receipts will be handed out at registration. An internet connection will be provided, and instructions will be given upon arrival.
A bus service will be arranged by the organization at fixed times from Naples airport and train station on the day before the beginning of the workshop. It can be requested in the registration form; a fee supplement is required to cover the transport. You can request the bus service on the registration page under "Bus service registration" no later than September 13th 2023.
See also the "Transportation options" tab for more information on the travelling arrangements.
The central entry point of the ECFA study is accessible through this link.
Previous editions:
You can contact the organizers at: ecfa2023@na.infn.it
New physics models, such as Z' models or EWIMPs (via loop contributions), can be probed by precise measurements of two-fermion final states at Higgs factories; measuring the dependence on the fermion species, beam polarization and angles can enhance the possibility of finding and identifying those models. We are studying qq and ll final states at the 500 GeV ILC using ILD full simulation. The analysis methods as well as the sensitivity to new physics will be presented in this talk.
Future Higgs Factories will allow the precise study of $e^{+}e^{-}\rightarrow q\bar{q}$ interactions with $q=s,c,b,t$ at different energies, from the Z-pole to high energies never reached before.
In this contribution, we will discuss the experimental prospects for the measurement of differential observables in $e^{+}e^{-}\rightarrow b\bar{b}$ and $e^{+}e^{-}\rightarrow c\bar{c}$ processes at high energies, 250 and 500 GeV, with polarised beams, using full simulation samples and the reconstruction chain from the ILD concept group.
These processes call for superb primary and secondary vertex measurements, a high tracking efficiency to correctly measure the vertex charge and excellent hadron identification capabilities using $dE/dx$. This latter aspect will be discussed in detail together with its implementation within the standard flavour tagging tools developed for ILD (LCFIPlus). In addition, prospects associated with potential improvements of the $dE/dx$ reconstruction using cluster counting techniques will also be discussed. Finally, we will briefly discuss the discovery potential for BSM models, such as Randall-Sundrum models with warped extra dimensions, profiting from measurements of $b/c$ quark-related observables at different beam energies and polarisations.
Any new physics (NP) lying at the TeV scale must pass stringent flavor as well as collider bounds. Since the top Yukawa gives the largest quantum correction to the Higgs mass, one well-motivated expectation is TeV-scale NP dominantly coupled to the third family. This setup delivers U(2) flavor symmetries that allow one to start explaining flavor at the TeV scale, while simultaneously improving compatibility with the aforementioned bounds.
In all such models that also seek to address the hierarchy problem or the flavor puzzle, there are unavoidably new particles with sizable couplings to the Higgs. Integrating out these heavy particles generates contributions to SMEFT operators that modify EW precision observables, which are precisely measured on the Z- and W-poles. We therefore have a triad of bounds that all models of this type must pass: flavor, direct collider searches, and EW precision tests.
The SMEFT in the U(2)^5 symmetric limit contains only 124 independent operators. This makes an exhaustive phenomenological study tractable, where one can place bounds on all of these operators from each prong of the triad. I will show that while flavor bounds depend on how U(2) is broken, the U(2) symmetric limit is sufficient for EW and collider parts of the triad, which most strongly constrain the flavor conserving parts of the operators. Additionally, important effects come from resummed RGE, in particular from operators with third-family quarks running strongly into Higgs operators constrained on the Z-pole. Finally, I present projections showing how the FCC-ee Z-pole run will indirectly probe a plethora of operators via their unavoidable RG mixing into Higgs operators.
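The RG mixing mentioned above can be illustrated with a toy leading-log estimate: an operator with third-family quarks, defined at the new-physics scale, feeds into a Higgs-type operator constrained on the Z-pole. The function name and all numerical inputs below (anomalous-dimension coefficient, Wilson coefficient, NP scale) are hypothetical, chosen only to show the typical size of the logarithm, not values from the talk.

```python
import math

# Toy leading-log RG mixing: C_q(Lambda) for a third-family quark operator
# induces a contribution to a Higgs operator C_H at the Z mass scale.
# gamma is a hypothetical one-loop anomalous-dimension coefficient.

def leading_log_mixing(c_q, gamma, scale_np, scale_z=91.1876):
    """First leading-log contribution to C_H(m_Z) induced by C_q(Lambda)."""
    return -gamma * c_q / (16 * math.pi**2) * math.log(scale_np / scale_z)

c_h_induced = leading_log_mixing(c_q=1.0, gamma=6.0, scale_np=3000.0)
print(f"induced C_H(m_Z) ~ {c_h_induced:.4f}  (in units of 1/Lambda^2)")
```

Even with an O(1) coefficient at 3 TeV, the induced contribution is at the percent level of the original Wilson coefficient, which is why a Tera-Z run can probe it.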
In this talk, we assess the FCC-ee reach for $Z/h\to bs, cu$ decays as a function of jet tagging performance. Recent advances in $b$, $c$, and $s$ quark tagging coupled with novel statistical analysis techniques allow the FCC-ee to place phenomenologically relevant bounds on flavor violating Higgs and $Z$ decays to quarks. We also update the SM predictions for the corresponding branching ratios, as well as the indirect constraints on the flavor violating Higgs and $Z$ couplings to quarks. Using the type III two-Higgs-doublet model as an example of beyond-the-Standard-Model physics, we show that searches for $h\to bs, cu$ decays at FCC-ee can probe new parameter space not excluded by indirect searches.
Scalar particles lighter than 90 GeV are predicted in various new physics scenarios. They can be produced via rare decays of the Z boson, together with a photon, offering an ideal discovery channel at e+e- colliders. I will present possible search strategies at the Tera-Z run of the FCC-ee, highlighting the complementarity with other colliders (HL-LHC) and the high-energy reach of such searches. For instance, Higgs compositeness scales up to 100 TeV can be tested.
Future e$^+$e$^-$ colliders, thanks to their clean environment and triggerless operation, offer a unique opportunity to search for long-lived particles (LLPs) at sub-TeV energies. This contribution considers the promising prospects for LLP searches offered by the International Large Detector (ILD), with a Time Projection Chamber (TPC), providing almost continuous tracking, at the core of its tracking system. The ILD has been developed as a detector concept for the ILC; however, studies aimed at understanding ILD performance at other collider concepts are ongoing.
Based on the full detector simulation, we study the possibility of reconstructing decays of both light and heavy LLPs at the ILD. For the heavy, $\mathcal{O}$(100 GeV) LLPs, we consider a challenging scenario with a small mass splitting between the LLP and the dark matter candidate, resulting in only a very soft displaced track pair in the final state, not pointing to the interaction point. We account for the soft beam-induced background (from measurable e$^+$e$^-$ pairs and $\gamma\gamma\to$ hadrons processes), expected to give the dominant background contribution due to its very high cross section, and show possible means of reducing it. As the opposite extreme, we consider the production of a light, $\mathcal{O}$(1 GeV) pseudo-scalar LLP, which decays to two highly boosted and almost collinear displaced tracks.
We also present the corresponding results for an alternative ILD design, where the TPC is replaced by a silicon tracker modified from the Compact Linear Collider detector (CLICdet) design.
We discuss the evidence for a Higgs boson with a mass of $\sim$ 95.4 GeV. We demonstrate the physics capabilities of a 250 GeV $e^+e^-$ collider in the analysis of such a light Higgs boson, making it the best physics case for such a future experiment.
The physics program of the Higgs factory will focus on measurements of the 125 GeV Higgs boson, with the Higgs-strahlung process being the dominant production channel at 250 GeV. However, the production of extra light scalars is still not excluded by the existing experimental data, provided their coupling to the gauge bosons is sufficiently suppressed. The fermion couplings of such a scalar could also be very different from the SM predictions, leading to non-standard decay patterns.
The presented study considers the sensitivity of future Higgs factory experiments to the direct observation of new light scalar production for scalar masses from 50 GeV to 120 GeV.
The Cool Copper Collider (C$^3$) is a proposed linear electron-positron collider operating at a center-of-mass energy of 250 GeV, with an upgrade to 550 GeV. A key aspect of evaluating the physics potential of any proposed Higgs factory is to quantify the effect of the various beam- and machine-induced backgrounds on the detector occupancy and, ultimately, on the expected precision reach. In this work, we present results for the effects of incoherent $e^{+}e^{-}$ pairs and photoproduced hadrons from beamstrahlung, which were interfaced with the SiD detector concept geometry, originally developed for the International Linear Collider (ILC), using the DD4hep toolkit. Our studies demonstrate that C$^3$ background rates are compatible with the SiD concept and enable further detector optimizations in order to maximize the precision of important measurements, e.g. the Higgs self-coupling. This highlights synergies between ILC and C$^3$ detector R&D efforts and shows the power of common software tools to enable physics studies for proposed future accelerators.
While the ionization process by charged particles (dE/dx) is commonly used for particle identification, uncertainties in the total energy deposition limit particle separation capabilities. To overcome this limitation, the cluster counting technique (dN/dx) leverages the Poisson nature of primary ionization, providing a statistically robust method for inferring mass information. Simulation studies using Garfield++ and Geant4 indicate that the cluster counting technique can achieve twice the resolution of the traditional dE/dx method in helium-based drift chambers. However, in real experimental data, finding electron peaks and identifying ionization clusters is extremely challenging due to the superimposition of signals in the time domain. To address these challenges, this talk introduces cutting-edge algorithms and modern computing tools for electron peak identification and ionization cluster recognition in experimental data. The effectiveness of the algorithms is validated through three beam tests conducted at CERN/H8, involving different helium gas mixtures, varying gas gains, and various wire orientations relative to the ionizing tracks. The tests employ a muon beam ranging from 1 GeV/c to 180 GeV/c, with drift tubes of different sizes and sense wires of different diameters. The data analysis results will be discussed, concerning: the confirmation of the Poisson nature of the cluster counting technique; the establishment of the most efficient cluster counting and electron clustering algorithms among the various ones proposed; and the definition of the effects limiting a fully efficient cluster counting, such as the cluster dimensions, the space-charge density around the sense wire and the dependence of the counting efficiency on the beam-particle impact parameter.
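The statistical advantage of counting Poisson-distributed clusters over measuring the total energy deposit can be illustrated with a toy Monte Carlo. All parameters below are invented for illustration (a Gaussian approximation for the Poisson counts, a crude log-normal stand-in for the heavy-tailed dE/dx fluctuations); they are not results from the beam tests described above.

```python
import random
import statistics

# Toy comparison of dN/dx vs dE/dx relative resolutions.
# dN/dx: cluster counts are Poisson, so relative resolution ~ 1/sqrt(N).
# dE/dx: heavy-tailed deposit fluctuations, modeled crudely as log-normal.

random.seed(42)
N_CLUSTERS = 400          # hypothetical mean cluster count along the track
TRACKS = 20000

def rel_res(samples):
    """Relative resolution: standard deviation over mean."""
    return statistics.stdev(samples) / statistics.fmean(samples)

# cluster counting (normal approximation to Poisson is fine at N=400)
dndx = [random.gauss(N_CLUSTERS, N_CLUSTERS ** 0.5) for _ in range(TRACKS)]

# total energy deposit with a 10% log-normal smearing (illustrative)
dedx = [random.lognormvariate(0.0, 0.10) for _ in range(TRACKS)]

print(f"dN/dx relative resolution: {rel_res(dndx):.3f}")  # ~ 1/sqrt(400) = 0.05
print(f"dE/dx relative resolution: {rel_res(dedx):.3f}")  # ~ 0.10
```

With these toy numbers the counting observable is roughly twice as precise, mirroring the factor-of-two improvement quoted for helium-based drift chambers.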
Detector studies for future experiments rely on advanced software tools to estimate performance and optimize their design and technology choices. The Key4hep project provides a turnkey solution for the full experiment life-cycle based on established community tools. An important ingredient for the performance of future Higgs Factory experiments is the particle flow reconstruction needed for optimal jet energy resolutions. The Key4hep project offers a flexible framework that allows different experiments to benefit from its synergies. One such example is the integration of the Pandora particle flow algorithm (PFA) for the future Noble Liquid-Argon (LAr) calorimeter foreseen for one of the FCC detectors. In this presentation, we discuss the integration of Pandora PFA into the Key4hep framework, thus enabling its use across multiple detector models. The application of the Pandora PFA for the Liquid-Argon calorimeter is reviewed and the jet energy resolution obtained is evaluated.
The challenges expected for the future $e^+e^-$ collider era are prompting a re-think of HEP computing models at many levels.
The evolution toward solutions that allow an effortless interactive analysis experience is one of the key topics foreseen by future colliders collaborations.
In this context, EDM4hep offers a high-level data model which makes it a flexible and user-friendly tool for HEP analysis workflows. To support this paradigm shift even further, a distributed infrastructure that leverages Dask to offload interactive payloads will be put into production on INFN resources, transparently integrating Grid, cloud and possibly HPC resources.
It is crucial to integrate the efforts on both solutions, and an example FCC-ee analysis will be presented.
The presented work will provide an overview of the main technologies involved and describe the results of a first analysis benchmark using the IDEA detector concept.
Several metrics, from event throughput to resource consumption, will be shown to assess the reliability of the workflow using resources hosted at the INFN distributed analysis facility, in the framework of the thematic spoke "Fundamental Research and Space Economy" of the National Centre on HPC, Big Data and Quantum Computing (ICSC) project.
The calorimeter systems of the detectors at future Higgs/EW/Top factories must operate in wildly different running conditions: machine backgrounds, dominant cross-sections and luminosities vary by several orders of magnitude as a function of the center-of-mass energy. A determination of the expected fluxes in the calorimeters is mandatory to appropriately scale the electronics, its power dissipation and its data output.
A versatile tool has been designed to build those fluxes from the detailed simulation: energy, time and occupancy spectra, from which secondary distributions, such as power, dynamic ranges in energy and time, and data fluxes, can be derived for a given hypothesis on the electronics. Preliminary results obtained with this tool will be presented and discussed for the ILD detector.
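The kind of bookkeeping such a flux tool performs can be sketched in a few lines: from a list of simulated calorimeter hits (cell id, energy, time), build per-cell occupancy and an energy spectrum, then derive an ADC dynamic range and a data-rate figure for a hypothetical readout. Every parameter below (cell count, hit rate, energy scale, bits per hit, train rate, 10 keV least significant bit) is invented for illustration and is not from the ILD study.

```python
import random

# Toy flux bookkeeping from a list of simulated hits: (cell id, energy, time).
random.seed(1)
N_CELLS = 10_000
hits = [(random.randrange(N_CELLS),        # cell id
         random.expovariate(1 / 5.0),      # energy deposit [MeV], mean 5 MeV
         random.uniform(0.0, 25.0))        # hit time [ns]
        for _ in range(2_000)]

hit_cells = {cell for cell, _e, _t in hits}
occupancy = len(hit_cells) / N_CELLS            # fraction of cells hit

e_max = max(e for _c, e, _t in hits)
adc_bits = 0
while (1 << adc_bits) < e_max / 0.01:           # assume a 10 keV LSB
    adc_bits += 1

rate_mbps = len(hits) * 32 * 50 / 1e6           # 32 bits/hit, 50 Hz train rate

print(f"occupancy = {occupancy:.2f}, ADC bits >= {adc_bits}, "
      f"readout ~ {rate_mbps:.1f} Mbit/s")
```

From exactly these spectra the real tool can then scan different electronics hypotheses (LSB, word size, readout rate) without re-running the simulation.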
Future lepton collider experiments, e.g. the Circular Electron Positron Collider (CEPC), are aimed at the precise measurement of the Standard Model (SM) particles and the exploration of new physics. This imposes stringent requirements on the jet measurement. Therefore, a novel high-granularity calorimetry system has been proposed. It contains a homogeneous crystal bar electromagnetic calorimeter (ECAL) designed to achieve an optimal EM energy resolution of $2-3\% / \sqrt{E}$ with a reduced number of readout channels, and a sampling glass-scintillator hadronic calorimeter (HCAL) for better intrinsic hadronic energy resolution obtained by increasing the density, light yield and energy sampling fraction. By combining the tracker measurement in the Particle Flow Approach (PFA), this conceptual detector design is expected to improve the Boson Mass Resolution (BMR) from 4% in the CEPC CDR to 3%. Currently, full simulations using Geant4 have been conducted for both ECAL and HCAL to study the energy performance and optimize the design. In terms of hardware development, small-scale crystal ECAL modules and individual glass scintillator tiles have been fabricated and subjected to beam testing at DESY to address critical system-level issues. Furthermore, a dedicated PFA algorithm is being developed to address the challenge of severe shower overlapping and ambiguity issues in the crystal bar ECAL. In this report we will introduce the R&D progress of the crystal ECAL and glass-scintillator HCAL for the CEPC, and a very preliminary expectation of their PFA and physics performance.
Calorimetry based on liquefied noble gases is a well proven technology that has been successfully applied in numerous high-energy physics experiments, such as DØ at the Tevatron, ATLAS at the LHC and NA62 at the SPS. In addition to extreme radiation hardness, noble liquid calorimeters provide excellent energy resolution, linearity, stability, uniformity and timing properties at a reasonable cost. These attributes make them a strong candidate for future particle physics experiments at both hadron and lepton colliders. Advances in printed circuit board (PCB) technology and manufacturing processes make it possible to add high granularity to the already impressive list of benefits of noble liquid calorimeters. By using multi-layer PCBs as read-out electrodes between the noble liquid and absorbers, we can build a calorimeter with almost arbitrarily high granularity. This in turn allows four-dimensional imaging, machine learning algorithms and particle-flow reconstruction to be fully exploited. In this talk we present the ongoing R&D work for adapting noble liquid sampling calorimetry to an electromagnetic calorimeter of a lepton collider experiment. We show studies on signal extraction and noise mitigation made with a prototype read-out electrode and compare the measurements to simulations. In addition, we will present FCC-software-based performance studies of the calorimeter concept and conclude by discussing the next steps in the R&D project.
The technology of dual readout calorimetry, based on the simultaneous measurement of Cherenkov and scintillation light, shows great potential for applications at Future Colliders.
Coupled with high-granularity designs, it allows one to obtain excellent energy resolution for e.m. particles and, at the same time, an event-by-event compensation of the electromagnetic and hadronic energy components.
Its integration into future experiments, like the proposed IDEA detector at FCC, will be described, and the roadmap and milestones for the full R&D and implementation of this technology in the upcoming years will be discussed.
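The event-by-event compensation can be sketched with the textbook dual-readout formula, $E = (S - \chi C)/(1 - \chi)$, where $S$ and $C$ are the scintillation and Cherenkov signals calibrated to electrons. The $h/e$ responses below are hypothetical toy values, not IDEA measurements; the point of the sketch is that the corrected energy is independent of the fluctuating electromagnetic fraction of the shower.

```python
# Sketch of event-by-event dual-readout compensation (toy h/e values, not IDEA).

def dual_readout_energy(s, c, chi):
    """Compensated energy from scintillation (S) and Cherenkov (C) signals."""
    return (s - chi * c) / (1 - chi)

def signals(e_true, f_em, h_over_e_s=0.7, h_over_e_c=0.3):
    """Toy S and C responses for a hadronic shower with EM fraction f_em."""
    s = e_true * (f_em + (1 - f_em) * h_over_e_s)
    c = e_true * (f_em + (1 - f_em) * h_over_e_c)
    return s, c

chi = (1 - 0.7) / (1 - 0.3)   # chi fixed by the assumed h/e of the two channels

# raw S depends on the fluctuating EM fraction; the corrected energy does not
for f_em in (0.3, 0.5, 0.7):
    s, c = signals(100.0, f_em)
    print(f"f_em={f_em}: S={s:.1f} -> corrected E="
          f"{dual_readout_energy(s, c, chi):.1f}")
```

In this toy, the raw scintillation signal varies from 79 to 91 for a 100 GeV shower while the corrected energy stays at 100 for every EM fraction, which is the origin of the improved hadronic resolution.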
Technologically mature accelerator and detector design and a well-understood physics program make the ILC a realistic option for the realization of a future Higgs factory. Energy staged data collection, employment of beam polarization, and capability to reach a TeV center-of-mass energy, enable unique sensitivity to New Physics deviations from the Standard Model predictions in the Higgs sector and beyond.
This presentation discusses the ILC potential to measure the branching ratio for Higgs boson decays to a final state invisible to the detector, $H \to ZZ^* \to \nu\bar{\nu}\nu\bar{\nu}$. Using Key4hep, the underlying project is set up in a modular way and could thus be used, for example, to compare different collider detectors. Technical aspects as well as first preliminary results will be presented.
The reconstruction of heavy flavour jets will play an important role at future e+e- Higgs factories: $H\to b\bar{b}$ is the most frequent decay mode of the SM Higgs, and $H\to c\bar{c}$ is particularly challenging to measure at the LHC. Exotic scalars could also decay into these modes, and b- and/or c-jets occur frequently in top quark as well as Z and W boson decays. While the reconstruction of light-flavoured jets has been a classic performance benchmark for Higgs factory detectors, the special challenges in the precise reconstruction of heavy flavour jets have only been studied in recent years. In particular, semi-leptonic b- and c-decays suffer from the undetected four-momentum of the neutrino. For instance in $H\to b\bar{b}$, about 2/3 of the events contain at least one semi-leptonic b- or c-decay, which leads to a significantly degraded jet energy and di-jet invariant mass reconstruction.
In this contribution we will explore to what extent the capabilities of the proposed Higgs factory detectors can be exploited to improve the reconstruction of $H\to b\bar{b}$.
The presented algorithms comprise: the identification of charged leptons in jets as a means to flag semi-leptonic decays; a newly developed algorithm to infer the missing neutrino momentum (up to a sign ambiguity) from the visible part of the decay, the vertex position and the mass of the B-hadron; and a first-principles estimate of the 4-momentum covariance matrices of jets based on their individual composition, a key ingredient for a kinematic fit that resolves the remaining ambiguity in the neutrino momentum. Using the ILD detector concept as an example, we will present a striking performance improvement in $H\to b\bar{b}$ reconstruction in $ZH$ and $ZZ$ events based on the combined power of these algorithms, which are now all available in Key4HEP. We will also discuss the role of the individual detector capabilities and give an outlook on how these tools could be applied to the reconstruction of other physics processes.
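The mass-constraint idea behind the neutrino-momentum recovery can be sketched as follows. This is not the ILD/Key4HEP implementation, only a minimal illustration under simplifying assumptions: the B flight direction is taken as perfectly known from the vertex position, the neutrino transverse momentum relative to that direction balances the visible one, and the B-hadron mass constraint then yields a quadratic equation for the longitudinal component, whose two roots are the sign ambiguity mentioned above.

```python
import math

# Toy neutrino-momentum recovery in a semi-leptonic B decay from the visible
# four-momentum and the B flight direction, imposing the B-hadron mass.
M_B = 5.279  # GeV, B-hadron mass assumed known

def nu_momentum_along_flight(e_vis, p_vis, direction, m_b=M_B):
    """Return the two solutions for the neutrino momentum along 'direction'."""
    norm = math.hypot(*direction)
    d = [x / norm for x in direction]
    p_par = sum(p * u for p, u in zip(p_vis, d))      # visible p along flight
    pt2 = sum(p * p for p in p_vis) - p_par ** 2      # visible pT^2 wrt flight
    m_vis2 = e_vis ** 2 - sum(p * p for p in p_vis)
    k = (m_b ** 2 - m_vis2 - 2 * pt2) / 2
    # quadratic a*x^2 + b*x + c = 0 in the neutrino longitudinal momentum x
    a = e_vis ** 2 - p_par ** 2
    b = -2 * k * p_par
    c = e_vis ** 2 * pt2 - k ** 2
    disc = max(b * b - 4 * a * c, 0.0)                # clamp; a fit handles this
    return [(-b + s * math.sqrt(disc)) / (2 * a) for s in (+1, -1)]
```

For a consistently constructed decay, one of the two roots reproduces the true longitudinal neutrino momentum exactly; picking between them is what the kinematic fit described above is for.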
The talk presents the feasibility of a novel $b$-hemisphere tagger and its application at the Tera-$Z$ programme at FCC-ee for the measurement of the partial width $R_b$ and the $b$-quark forward-backward asymmetry $A_{\text{FB}}^b$. By exclusively reconstructing $b$-hadrons in the hemispheres, the major source of systematic uncertainties originating from light- and $c$-physics is eliminated. A discussion of the only remaining uncertainties is given and their impact on the measurement is estimated.
The differential and total cross sections, and related observables, of difermion production in high-energy e+e- collisions bear considerable potential for discovering the onset of new physics as the centre-of-mass energy increases. Most of these measurements are only possible thanks to beam polarisation. Earlier studies have reported on the determination of the differential cross sections of b, c (and top) quarks. This contribution extends these studies to u, d and s quarks at $\sqrt{s}=250$ GeV and thus constitutes an unprecedented study.
Progress on topics related to understanding the potential for precision measurements of the W mass will be reported building on previous work. This includes recent work on WW event selection issues relevant to continuum and threshold, further work on assessing the performance of lepton-based W mass estimators, and developments associated with constrained kinematic fits.
The majority of Monte Carlo (MC) simulation campaigns for future Higgs factories have so far been based on the leading-order (LO) matrix elements provided by Whizard 1.95, followed by parton shower and hadronization in Pythia6, using the tune of the OPAL experiment at LEP. In this contribution, we test the next-to-leading-order (NLO) mode of Whizard. NLO events of key processes ($e^+e^-\to q\bar{q}$, $\mu^+\mu^-b\bar{b}$, multi-jets...) are generated by POWHEG matching, with parton shower and hadronization provided by Pythia8. The NLO effect on hadron multiplicities and event shape variables of jets will be discussed at hadron level. After passing the events through the full detector simulation of the International Large Detector concept, as an example of a ParticleFlow-optimised detector, the jet energy resolution and typical kinematic quantities are compared between NLO and LO at reconstruction level. A first assessment of which physics prospects of future $e^+e^-$ colliders should be studied with NLO MC in the future will be given.
At all center-of-mass energies of a future high-energy e$^{+}$e$^{-}$ collider precise determination of the absolute integrated luminosity underpins the physics program. It is especially critical for measuring the number of light neutrino generations ($N_{\nu}$). In this contribution we will emphasize the prospects and investigate the potential of using the pure QED process, e$^{+}$e$^{-}\to\gamma\gamma$, to target absolute precisions at the 0.01% level. This photon-pair process can be experimentally and theoretically more favorable than the small-angle Bhabha scattering based measurements typically considered for absolute integrated luminosity. We investigate consequences for detector design and detector performance (both required and currently achievable) in terms of photon measurements and rejection of Bhabha electrons.
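A quick statistical estimate shows why the 0.01% target is demanding: counting statistics alone require on the order of 10^8 accepted photon pairs. The accepted cross section used below is a purely illustrative, hypothetical number, not a prediction for any specific detector acceptance.

```python
# Back-of-the-envelope statistics behind the 0.01% luminosity target.
TARGET_REL_PRECISION = 1e-4      # 0.01%, as quoted in the abstract
SIGMA_ACCEPTED_PB = 40.0         # hypothetical accepted e+e- -> gamma gamma
                                 # cross section within acceptance [pb]

n_events = TARGET_REL_PRECISION ** -2               # 1/sqrt(N) = target -> N
lumi_inv_ab = n_events / SIGMA_ACCEPTED_PB / 1e6    # 1 ab^-1 = 1e6 pb^-1

print(f"events needed: {n_events:.1e}")
print(f"integrated luminosity: {lumi_inv_ab:.1f} ab^-1")
```

With these toy inputs, a few ab^-1 already saturate the statistical part of the budget, so the 0.01% goal hinges on controlling systematics such as photon acceptance edges and electron rejection.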
In this talk we explore the interpretation of neutrino masses as radiatively generated, leveraging the forthcoming precision at the Future Circular Collider. We highlight the discernibility of Higgs-strahlung signals and radiative corrections as key tools for revealing New Physics. In particular, we consider the role of extra fermions in altering the signature proper of the Inert Doublet Model. We illustrate the required theoretical effort and detail the renormalization procedure adopted. The case of the Scotogenic model is investigated, emphasizing the implications of both normal and inverted hierarchies as well as the interplay with flavor violating observables.
A discovery of Lepton Number Violating (LNV) processes at future colliders would be a fascinating signature of new physics beyond the Standard Model (SM). It would prove that the light neutrinos have Majorana-type masses, and could allow a deep insight into the neutrino mass generation mechanism. We discuss how observable LNV can originate from collider-testable low-scale type I neutrino mass generation, where extra SM singlet fermions (i.e. heavy neutral leptons) are introduced, via the phenomenon of heavy neutrino-antineutrino oscillations. We report on recent progress in understanding these oscillations and argue that their effects have to be included in order to correctly evaluate the prospects for discovering LNV.
Type I seesaw models generating small masses for the observed neutrinos predict not only heavy neutrinos but also the presence of lepton number violating (LNV) processes. After the discovery of these heavy neutrinos, it becomes essential to examine the amount of LNV in order to shed light on the corresponding neutrino mass-generating mechanism. We discuss the potential of future lepton colliders to not only discover LNV but also measure its size. Additionally, we comment on simple benchmark models able to capture the relevant physics with a minimal set of parameters.
Neutrinos are the most elusive particles known. Heavier sterile neutrinos mixing with the Standard Model partners might solve the mystery of the baryon asymmetry of the universe and take part in the mass generation mechanism for the light neutrinos. Future lepton colliders, including e+e− Higgs factories, as well as multi-TeV electron and muon machines, will provide the farthest search reach for such neutrinos in the mass range from above the Z pole into the multi-TeV regime. In our contribution, we will discuss the future lepton collider search potential for such particles in their prompt decays and present a new approach to use kinematic variables to constrain the nature of heavy neutrinos, probing their Majorana or Dirac character.
To study the physics potential of the detector concepts proposed for FCC-ee, a detailed simulation of the detector response to visible particles is required. An essential component of the simulation process is the description of the detector components in terms of geometry, materials and sensitive parts. The future collider community agreed on using the DD4hep framework for its detector description. This framework has the potential to provide a flexible plug-and-play approach that allows us to easily study various full detector configurations made of different combinations of sub-detectors. In addition, the community agreed on implementing reconstruction algorithms in the Key4hep framework, which greatly enhances their interoperability across sub-detectors from different facilities. This talk will report on the recent progress made in implementing FCC-ee detector concepts, together with their reconstruction, in the Key4hep framework.
Recently, a concept for a Hybrid Asymmetric Linear Higgs Factory (HALHF) has been proposed, where a center-of-mass energy of 250 GeV is reached by colliding a plasma-wakefield accelerated electron beam of 500 GeV with a conventionally accelerated positron beam of about 30 GeV. While clearly facing R&D challenges, this concept bears the potential to be significantly cheaper than any other proposed Higgs Factory, comparable in cost e.g. to the EIC. The asymmetric design changes the requirements on the detector at such a facility, which needs to be adapted to forward-boosted event topologies as well as different distributions of beam-beam backgrounds. This contribution will give a first assessment of the impact of the accelerator design on the physics prospects in terms of some flagship measurements of Higgs factories, and how a detector would need to be adjusted from a typical symmetric Higgs factory design.
The Cool Copper Collider (C^3) is a proposed linear electron-positron collider operating at a center-of-mass energy of 250 GeV, upgradable to 550 GeV. A key aspect of evaluating the physics potential of any proposed Higgs factory is to quantify the effect of the various beam- and machine-induced backgrounds on the detector occupancy and, ultimately, on the expected precision reach. In particular, we are building on the dedicated simulations achieved thus far for incoherent electron and hadron production, as well as accelerator muon backgrounds, to make detailed simulations of the C^3 bunch structure. C^3 has a bunch separation of 5.25 ns in trains of 133 bunches, so out-of-time pileup is pertinent to understanding the cleanliness of the experimental environment and the electronics design. In this study we demonstrate the progress we have made, using common Linear Collider software integrated in Key4hep, on overlaying out-of-time pileup coming from incoherent electron pairs, and we evaluate the impact of this effect on the pixel occupancy per bunch crossing in various time windows around the central bunch crossing.
Along the path defined by the European Strategy for Particle Physics, an electron-positron Higgs factory is the highest priority next collider.
The FCC program at CERN combines, in the same 100 km infrastructure, a high-luminosity Higgs and electroweak factory e+e- collider, followed by a 100 TeV hadron collider. The IDEA project (Innovative Detector for an Electron-positron Accelerator), a proposal for an experiment at the electron-positron collider, includes an ultralight drift chamber as the main tracking device, designed to provide efficient tracking, high-precision momentum measurement and excellent particle identification. One of the most relevant features of this drift chamber, fundamental for precision electroweak physics at the Z pole and for flavor physics, is its high transparency, in terms of radiation lengths, obtained through a novel approach adopted for the wiring and assembly procedures.
Particle identification capabilities are also particularly relevant for heavy-flavor tagging and are achieved by using a cluster counting technique, expected to provide a two-times better particle separation than the traditional method based on energy loss per unit length. An overview of the status of the IDEA drift chamber project is provided in this talk, together with the latest updates on mechanical simulation studies.
The IDEA drift chamber is designed to provide efficient tracking, a high-precision momentum measurement and excellent particle identification by exploiting the cluster counting technique. To investigate the potential of cluster counting on physics events, a simulation of the ionization cluster generation is needed; we have therefore developed algorithms that use the energy deposit information provided by the Geant4 toolkit to reproduce, in a fast and convenient way, the cluster number and cluster size distributions. The results obtained confirm that the cluster counting technique allows one to reach a resolution two times better than the traditional dE/dx method. In this talk, we will present these cutting-edge algorithms, which play a vital role in identifying electron peaks and discerning ionization clusters. These algorithms have been successfully implemented in the simulation of the IDEA drift chamber, accurately reproducing the distributions of cluster numbers and cluster sizes. Furthermore, we will highlight the integration of these algorithms into the Key4hep ecosystem, emphasizing their compatibility and synergistic benefits. By showcasing the capabilities of these algorithms and their seamless integration, we can gain valuable insights into the immense potential of the cluster counting technique in enhancing the performance of the IDEA drift chamber.
The IDEA experiment muon systems (pre-shower and external tracking) require a large number of u-RWELL detectors. To keep the cost of the entire system affordable, an optimization of the number of readout electronics channels is needed. For this purpose, resolution studies as a function of the readout segmentation pitch and of the DLC resistivity have been performed.
From the 2021 beam test results, focused mainly on the pre-shower studies, a spatial resolution of around 100 um was obtained with a 400 um strip pitch and a DLC resistivity of about 80 MOhm/sq.
In the beam test campaign held in October 2022, mainly devoted to studies for the external trackers, the comparison of the response of detectors with resistivities ranging between 40 and 80 MOhm/sq suggested the possibility of manufacturing detectors with millimetric strip pitch: indeed, a spatial resolution of 500 um was achieved with a 1.6 mm strip pitch, equipping the chambers with analog front-end electronics (APV25).
During the beam test in June 2023, two different 2D-readout concepts were studied: the TOP READOUT and Capacitive Sharing (CS). In the first case the second coordinate is provided by the segmentation of the amplification stage (0.8 mm pitch); in the second, two planes with orthogonal strips (1.2 mm pitch) are used and the signal is induced by capacitive coupling. Preliminary results show a spatial resolution of around 300 um for the TOP READOUT version and 150 um for CS.
The construction of all the prototypes, with different geometries (from 10 x 10 cm2 up to 40 x 40 cm2), has been shared between the CERN EP-DT-EF workshop and the ELTOS S.p.A. company, both providers of the core of the detector: the u-RWELL PCB. The main purpose of this task sharing is to maintain the cost-effectiveness of the micro-Resistive WELL technology also for the large production required for the IDEA experiment (about 500 detectors for the pre-shower and more than 7000 for the muon tracking system).
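For orientation, the resolutions quoted above can be compared against the textbook limit for a purely binary strip readout with uniform illumination, pitch/sqrt(12) — a back-of-the-envelope check, not part of the test-beam analysis:

```python
import math

def binary_resolution_um(pitch_um):
    """Digital resolution of a binary strip readout with uniform
    illumination: pitch / sqrt(12)."""
    return pitch_um / math.sqrt(12)

res_400 = binary_resolution_um(400)     # ~115 um; ~100 um was measured at this pitch
res_1600 = binary_resolution_um(1600)   # ~462 um; ~500 um was measured with analog FEE
```

The measured values are in the same ballpark as the binary limit; charge sharing on the resistive DLC layer is what allows a finer interpolation than pure strip quantization.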
Most analyses of triple Higgs couplings (THCs) have focused on the SM-like coupling, $\lambda_{hhh}$, assuming its SM value, $\kappa_\lambda := \lambda_{hhh}/\lambda_{hhh}^{\rm SM} = 1$.
We will discuss two BSM physics cases:
1) $\kappa_\lambda \neq 1$, as suggested by the requirements for baryogenesis (to explain the matter-antimatter asymmetry of the universe);
2) BSM THCs, such as $\lambda_{hhH}$, where $H$ represents a heavy BSM Higgs boson.
We will show how ILC/CLIC can analyze these two scenarios, far beyond the
capabilities of the HL-LHC.
Measuring the Higgs self-coupling is a key target for future $e^{+}e^{-}$ colliders and can be accessed through double Higgs production. An important question is how the precision of this measurement improves with higher center-of-mass collision energy. In this work, we study the ZHH process at center-of-mass energies of 500, 550, and 600 GeV, simulated with the ILD detector concept from the International Linear Collider (ILC) using the DD4hep toolkit. The accurate reconstruction of ZHH events under realistic detector conditions requires advanced algorithms to fully utilize the initial-state kinematics, including, e.g., kinematic fitting and matrix-element-inferred likelihoods with Graph Neural Networks. This is the first study of the dependence of the self-coupling precision on the choice of center-of-mass energy, and it demonstrates the importance of optimizing the center-of-mass energy for increased sensitivity to the self-coupling. The requirements that the Higgs self-coupling measurement puts on the choice of center-of-mass energy will be evaluated, as this is important for shaping the landscape of future colliders such as the ILC or the Cool Copper Collider (C$^3$). The study also highlights the reusability of the ILD detector concept and Key4hep-based analyses for new collider concepts.
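To illustrate the kind of constraint that kinematic fitting exploits at a lepton collider, here is a minimal 1C energy-constraint fit with an analytic Lagrange-multiplier solution. The event numbers are hypothetical and this is a sketch of the technique, not the ILD analysis code:

```python
import numpy as np

def fit_energies(e_meas, sigma, e_cm):
    """1C least-squares fit: shift the measured energies e_i (with
    resolutions sigma_i) so that their sum equals e_cm, minimizing
    chi2 = sum((e_i' - e_i)^2 / sigma_i^2).
    With one linear constraint the Lagrange-multiplier solution is analytic:
    e_i' = e_i + sigma_i^2 * (e_cm - sum(e)) / sum(sigma^2)."""
    e_meas = np.asarray(e_meas, dtype=float)
    var = np.asarray(sigma, dtype=float) ** 2
    lam = (e_cm - e_meas.sum()) / var.sum()
    return e_meas + var * lam

# Hypothetical 4-jet ZHH-like event at sqrt(s) = 500 GeV (illustrative numbers):
e_fit = fit_energies([118.0, 131.0, 124.0, 119.0], [6.0, 7.0, 6.5, 6.0], 500.0)
print(e_fit, e_fit.sum())  # fitted energies satisfy the constraint exactly
```

Jets with worse resolution absorb more of the energy deficit, which is exactly why exploiting the known initial state improves the di-Higgs mass reconstruction.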
The trilinear Higgs coupling provides a unique opportunity to probe the structure of the Higgs sector, study the nature of the electroweak phase transition, and search for indirect signs of Beyond-the-Standard-Model (BSM) physics. Recently, it was also shown that confronting the prediction for the trilinear Higgs coupling with the latest experimental bounds opens a powerful new way to probe possible effects of BSM physics arising from extended Higgs sectors, going beyond existing experimental and theoretical constraints.
In this talk, I will present the new public tool anyH3, which provides predictions for the trilinear Higgs coupling to full one-loop order within arbitrary renormalisable theories. This program allows computing one-, two-, and three-point functions at one loop in an automated way, and moreover it offers a high level of flexibility in the choice between pre- or user-defined renormalisation conditions. I will review the main elements of the calculation and demonstrate features of anyH3. Finally, I will discuss concrete applications of this tool and give an update on extensions currently in progress.
We analyze the electroweak (EW) sector of the MSSM in view of the experimental results for the anomalous magnetic moment of the muon, $(g-2)_\mu$, the Dark Matter (DM) relic density, the DM direct detection (DD) bounds and in particular the LHC searches for such EW particles. We demonstrate the complementarity of future DD experiments and future high-energy $e^+e^-$ colliders. We show that these two types of experiments will either find evidence for BSM particles or rule out the MSSM as an explanation for $(g-2)_\mu$ and DM.
An effective field theory (EFT) approach is used to investigate naturalness of the Higgs sector at scales below $M \sim {\cal O}(10)$ TeV. In particular, we obtain the leading 1-loop EFT contributions to the Higgs mass with a Wilsonian-like hard cutoff $\Lambda$ (i.e., $\Lambda < M$), and determine the constraints on the corresponding operator coefficients for these effects to alleviate the little hierarchy problem up to the scale of the effective action $\Lambda$; a condition we denote by ``EFT-naturalness''. We also discuss the specific types of physics that can lead to ``EFT-naturalness'' and their potential signatures at a future $e^+e^-$ collider, e.g., in the production of multiple vector bosons and/or Higgs bosons.
Some say SUSY is dead, because the LHC has not discovered it yet. But is this
really true? It turns out that the story is more subtle. SUSY can be 'just
around the corner', even if no signs of it have been found, and a closer
look is needed to quantify the impact of LHC limits and their implications
for future colliders. In this contribution, a study of prospects for SUSY,
based on scanning the relevant parameter space of (weak-scale) SUSY
parameters, is presented.
I concentrate on the properties most relevant for evaluating the experimental
prospects: mass differences, lifetimes and decay modes. The observations are
then confronted with estimated experimental capabilities, including -
importantly - the level of detail of the simulations these estimates are based upon.
I have mainly considered what can be expected from the LHC and HL-LHC, where it
turns out that large swaths of SUSY parameter space will be quite hard to access.
For e+e- colliders, on the other hand, the situation is simple:
at such colliders, SUSY will be either discovered or excluded almost up to
the kinematic limit.
The direct pair production of the tau-lepton superpartner, the stau, is one of the
most interesting channels in searches for SUSY. First of all, the stau is with high
probability the lightest of the scalar leptons. Secondly, the signature of stau
pair-production events is one of the most difficult ones, yielding the
'worst' and thus most general scenario for the searches.
The most model-independent limits on the stau mass come from the LEP experiments. They
exclude a stau with mass below 26.3 GeV for any mixing and for any difference between the stau and
neutralino masses larger than the tau mass.
The LHC exclusion reach extends to higher masses for large mass differences, but under strong
model assumptions.
Future electron-positron colliders are ideally suited for stau searches: they
will feature increased luminosity and centre-of-mass energy, and improved accelerator,
detector and analysis technologies with respect to previous electron-positron colliders.
With respect to hadron colliders, they will profit from a cleaner environment, from the
initial state being known, and from trigger-less operation of the detectors.
In this contribution, the prospects for discovering stau-pair
production at future e+e- Higgs factories and the resulting detector
requirements will be discussed.
For detector-level simulations, the study takes the ILD detector concept
and ILC parameters at 500 GeV as an example. It includes all SM
backgrounds as well as beam-induced backgrounds, both as overlay-on-physics
and - for the first time - as overlay-only events, and considers the
worst-case scenario for the stau mixing. It shows that, under the chosen
accelerator and detector conditions, SUSY will be discovered for NLSP
masses up to just a few GeV below the kinematic limit of the collider.
Based on these results, expectations for other centre-of-mass energies,
luminosities, beam polarisations, beam backgrounds and detector
conditions will be derived. Among the detector performance criteria, in
particular the role of the hermeticity of the detector, of the tracking
acceptance and of the ability to operate trigger-less will be discussed
and put into the perspective of the experimental environment expected at
different Higgs factories.
Long-range angular particle correlations may serve as manifestations of physics beyond the Standard Model, such as Hidden Valley (HV) scenarios. We focus on QCD-like hidden sectors in which the production of HV matter on top of the QCD partonic cascade would enhance and enlarge azimuthal correlations of final-state particles. We study the observability of such signals at future $e^+e^-$ colliders, which will provide a much cleaner environment with respect to the LHC. Specifically, the presence of ridge structures in the two-particle correlation function would indicate the possible existence of New Physics.
In a class of theories, dark matter is explained by postulating the existence of a 'dark sector',
which interacts gravitationally with ordinary matter. If this dark sector contains a U(1) symmetry,
and a corresponding 'dark' photon ($A_{D}$), it is natural to expect that this particle will kinetically mix
with the ordinary photon, and hence become a 'portal' through which the dark sector can be studied.
The strength of the mixing is given by a mixing parameter $\epsilon$. This
same parameter governs both the production of the $A_{D}$ and its decay back to SM
particles, and for values of $\epsilon$ not already excluded, the signal would be
a small and quite narrow resonance: if $\epsilon$ is large enough to
yield a detectable signal, the decay width will be smaller than the detector resolution, but still large
enough that the decay back to SM particles is prompt. For masses of the dark photon above the reach of
Belle II, future high-energy e+e- colliders are ideal places to search for such a signal, due to the
low and well-known backgrounds, and the excellent momentum resolution and equally
excellent track-finding efficiency of the detectors at such colliders.
This contribution will discuss a study of the dependence of the limit on the mixing
parameter on the mass of the $A_{D}$, using the $A_{D}\rightarrow\mu^{+}\mu^{-}$ decay mode in
the presence of Standard Model background, with fully simulated signal and background events in
the ILD detector at the ILC Higgs factory. In addition, a more general discussion of the capabilities
expected for generic detectors at e+e- colliders operating at other energies will be given.
Several indications for neutral scalars have been observed at the LHC. One of them, a broad resonance peaked at about 650 GeV which we call H(650), was obtained by an outsider combining published histograms from ATLAS and CMS on ZZ → 4ℓ searches; this combination shows a local significance close to 4 s.d. Since then, CMS has reported two other indications at the same mass, with similar local significances: H → WW → ℓνℓν and H → bb h125, where h125 → …. ATLAS has completed its analysis of ZZ → 4ℓ, from which we infer an indication for H(650) with 3.5 s.d. significance. Assuming that the mass is already known from the former set, and combining these three results, one gets a global statistical significance of about 6 s.d. H(650) has a coupling to WW similar to that of h(125), and we therefore argue that a sum rule (SR) required by unitarity for WW implies a compensating effect from a doubly charged scalar H++ with a large coupling to W+W+. We therefore predict that this mode should become visible through the vector boson fusion process W+W+ → H++, naturally provided by the LHC. A recent indication for H++(450) → W+W+ from ATLAS allows a model-independent interpretation of this result through the SR constraint, which gives BR(H++ → W+W+) ~ 10%, implying the occurrence of additional modes H'+W+ and H'+H'+ from one or several light H'+ with masses below mH++ - mW or mH++/2, that is mH'+ < 370 GeV or 225 GeV. A similar analysis is provided for H+(375) → ZW, indicated by ATLAS and CMS. Both channels suggest a scalar field content similar to the Georgi-Machacek model with triplets, at variance with the models usually considered.
The precise reconstruction of electrons is an important ingredient for the proposed physics program at future Higgs factories (HF). It becomes especially important in $m_W$ and TGC measurements in the $e\nu W$ final state. These measurements were identified as two of the high-priority focus topics by WG1 of the ECFA HF study.
The track reconstruction for electrons is particularly challenging due to their increased material interaction probability.
We propose to build a dedicated electron reconstruction algorithm for Key4hep based on state-of-the-art methods from LHC experiments. In particular, a Gaussian sum filter (GSF) based track fit using ACTS and an advanced matching of bremsstrahlung photons will be investigated. This algorithm will be evaluated in a detector-agnostic Key4hep $e\nu W$ benchmark analysis.
In this talk, we present the first results of this work.
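A Gaussian sum filter handles bremsstrahlung by carrying several Gaussian hypotheses for the track state at once. The following toy 1D measurement update conveys the idea (a sketch with made-up numbers, not the ACTS implementation):

```python
import numpy as np

def gsf_update(means, variances, weights, z, r):
    """One 1D Gaussian-sum filter measurement update (toy version):
    each Gaussian component of the track state gets a standard Kalman
    update against measurement z (with variance r), and the component
    weights are re-scaled by the marginal likelihood of z."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = np.asarray(weights, dtype=float)

    gain = variances / (variances + r)              # Kalman gain per component
    new_means = means + gain * (z - means)
    new_variances = (1.0 - gain) * variances

    # marginal likelihood of z under each predicted component
    s = variances + r
    likelihood = np.exp(-0.5 * (z - means) ** 2 / s) / np.sqrt(2.0 * np.pi * s)
    new_weights = weights * likelihood
    new_weights /= new_weights.sum()
    return new_means, new_variances, new_weights

# Toy: two momentum hypotheses for an electron, "no bremsstrahlung" vs
# "hard bremsstrahlung" (all numbers illustrative).
m, v, w = gsf_update(means=[45.0, 30.0], variances=[4.0, 25.0],
                     weights=[0.6, 0.4], z=44.0, r=1.0)
```

A measurement compatible with one hypothesis quickly concentrates the weight on that component, which is how the GSF tracks the highly non-Gaussian Bethe-Heitler energy loss of electrons.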
Efficient heavy-flavour tagging is essential for reaching the physics goals of future e+e- collider experiments, for example the precise measurement of Higgs boson properties in the decay channels to bottom or charm quarks.
In this talk the application of the CMS DeepJet tagger to the ILD detector concept for the ILC is presented. The performance of the tagger is compared to the current state of the art in ILD. Moreover, the integration into Key4hep is discussed.
Particle flow and flavor tagging are the key algorithms determining the physics performance of reconstruction at Higgs factory detectors. Particle flow is the reconstruction of individual particles inside jets, which requires precise track-cluster matching in addition to the clustering of calorimeter hits. We are implementing a track-cluster matching algorithm on top of a GravNet-based calorimeter clustering algorithm developed in the context of the CMS HGCAL reconstruction. A first statistical analysis with ILD full simulation will be presented.
We are also working on advanced flavor tagging based on GNNs such as ParticleNet or ParticleTransformer, which were developed at the LHC experiments. Since FCC-ee colleagues have reported much better performance with those algorithms (though with fast simulation) than existing software for Higgs factories (LCFIPlus), we would like to confirm this with ILD full simulation. First results will be presented in this talk as well.
The tracking system of the IDEA detector concept consists of different silicon detector subsystems: a vertex detector, an inner tracker and a silicon wrapper between the drift chamber and the calorimeters. Various technologies are being explored and optimized, depending on the physics requirements and operating conditions of the systems. The high-granularity, low-power ARCADIA prototypes have recently demonstrated excellent performance, which makes them suitable for the highly demanding vertex region. Multi-chip systems of ATLASPIX3 DMAPS have been tested with electron beams and quad-module prototypes have been realized, targeting the large-area tracking system. For the silicon wrapper, an alternative solution using resistive LGADs, providing both precision tracking and TOF capabilities to improve the particle identification performance, is also being explored.
The detectors at future e+e- linear colliders will need unprecedented precision in Higgs physics measurements. These ambitious physics goals translate into very challenging detector requirements on tracking and calorimetry. High-precision, low-mass trackers, as well as highly granular calorimeters, will be critical for the success of the physics program. To develop the next generation of ultralight trackers, a further reduction of dead material can be obtained by employing Monolithic Active Pixel Sensor (MAPS) technology. In MAPS, the sensor and readout circuitry are combined in the same pixel and can be fabricated in commercial CMOS processes. MAPS are currently widely used in different applications in High Energy Physics (HEP), in astronomy and in photonics. This technology has been employed in the Inner Tracking System upgrade (ITS2) of the ALICE experiment at the LHC, characterized by very low power consumption and O(µs) timing capabilities.
Future colliders can benefit from fast detectors with O(ns) timing capabilities. This is feasible at the cost of a relatively high power consumption that may not be compatible with large-area constraints. Today some commercial imaging technologies offer the possibility to produce large stitched sensors (with a rectangular area of ~30 cm × 10 cm). Such large sensors are very interesting from a physics point of view, but very challenging from an engineering point of view.
The first part of this talk will discuss the limits and potentials of MAPS technology for detectors at future colliders.
NAPA-p1 is a prototype Monolithic Active Pixel Sensor designed in a 65 nm CMOS imaging technology, developed in collaboration with CERN to meet the requirements of future e+e- colliders. The prototype has dimensions of 1.5 mm × 1.5 mm with a pixel pitch of 25 μm. Future strategies to allow the scalability of this design into a large-scale sensor of 10 cm × 10 cm will be discussed.
The performance of monolithic CMOS pixel sensors depends on their fabrication process and especially the feature size, which directly drives the pixel size. A consortium led by the CERN EP R&D program, the ALICE experiment and various European projects (AIDAinnova, EURIZON) is investigating the benefits of a 65-nm CMOS imager process for designing a new generation of pixel sensors. These developments target a first application in the upgraded inner layers (ITS3) of the ALICE experiment and foster further studies for detectors, including those for future e+e- colliders, with requirements currently unmatched by any technology.
Two fabrication runs of a variety of prototype sensors have already taken place, in 2020 and 2022. This contribution reports on the characterization of the first version of some of them, the CE-65 sensor family. They include analogue-output matrices featuring 2048 (or 1536) pixels with either 15-µm or 25-µm pitch. Three versions of the sensing node were fabricated in order to modify the charge sharing between pixels. Sensors were irradiated to non-ionizing fluences between $10^{13}$ and $10^{16}$ $n_{eq}/cm^2$ as well as to ionizing doses of 100 and 500 Mrad.
Illumination with an 55Fe source made it possible to estimate the equivalent collection-node capacitance and its pixel-to-pixel fluctuation, as well as the leakage current before and after irradiation. Non-irradiated sensors were tested in a 10-GeV electron beam to study in detail the charge sharing among pixels and to extract the sensor detection efficiencies as well as their position resolutions. The evolution of the latter with digitization strategies, simulated from the data, was also investigated in order to explore the potential of pixels with binary or few-bit output designed in this 65-nm process.
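The trade-off between binary and analogue (charge-weighted) digitization can be sketched with a toy simulation. The Gaussian charge cloud and all numbers below are illustrative assumptions, not the CE-65 test-beam data:

```python
import numpy as np

rng = np.random.default_rng(7)
pitch = 25.0        # um, as for a 25-um pixel matrix
cloud_sigma = 8.0   # um, assumed lateral charge-sharing spread (illustrative)
n_hits, n_electrons = 5000, 500

x_true = rng.uniform(-pitch / 2, pitch / 2, n_hits)   # impacts in central pixel
edges = (np.arange(6) - 2.5) * pitch                  # 1D row of 5 pixels
centers = (np.arange(5) - 2) * pitch

err_binary, err_centroid = [], []
for x in x_true:
    # collect the electron cloud into pixels
    q, _ = np.histogram(rng.normal(x, cloud_sigma, n_electrons), bins=edges)
    err_binary.append(centers[np.argmax(q)] - x)             # seed pixel only
    err_centroid.append(np.average(centers, weights=q) - x)  # charge-weighted

print(f"binary:   {np.std(err_binary):5.1f} um (pitch/sqrt(12) = {pitch/np.sqrt(12):.1f} um)")
print(f"centroid: {np.std(err_centroid):5.1f} um")
```

With enough charge sharing, the charge-weighted centroid interpolates well below the pitch/sqrt(12) binary limit, which is the effect the digitization-strategy study quantifies from the real data.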
To realistically estimate the performance of future Higgs factory experiments, detailed studies based on full simulation and reconstruction are needed. The CLD detector model for FCC-ee is fully implemented in Key4hep-based full simulation and comes with a complete reconstruction chain to perform studies ranging from background estimates to sophisticated physics analyses. While a detailed performance estimate was published in the past, the design of the FCC-ee accelerator has since been refined, and certain detector adaptations have become necessary.
In this talk, we present the updated detector design and performance studies. Additionally, we provide an overview of the remaining work in the context of the FCC feasibility study and our plans for further optimization.
We present the latest developments for the FCC-ee interaction region and machine-detector interface (MDI). The MDI represents a major challenge for the FCC-ee collider, which has to achieve extremely high luminosity over a wide range of centre-of-mass energies. FCC-ee will host two or four high-precision experiments. The machine parameters have to be well controlled and the design of the machine-detector interface has to be carefully optimized. In particular, the complex final focus hosted in the detector region has to be carefully designed, with compensating solenoids and the first final-focus quadrupole inside the detector; the impact of beam losses and of any type of synchrotron radiation generated in the interaction region, including beamstrahlung, has to be simulated in detail. We discuss mitigation measures and the expected impact of beam losses and radiation on the detector background. We also report the progress of the mechanical model of the interaction region layout, including the engineering design of the central beampipe, the vertex detector, which has recently been designed and integrated with the machine components, and the luminosity calorimeter.
The FCC-ee aims at unprecedented luminosities, in order to study the Standard Model of particle physics with extreme precision. The vertex detector, located close to the beam pipe, is of paramount importance for the precise reconstruction of the trajectories of charged tracks.
In this contribution we will present the design of the IDEA vertex detector, aiming to fulfil the requirements coming from the physics programme as well as the constraints from the machine elements. The vertex detector, comprising three inner vertex barrel layers, two outer barrel layers and six disks, covers an angular acceptance of |cos(θ)|<0.99, between 13.7 mm and 31.5 cm in radius, and is designed around a lightweight mechanical structure supporting MAPS silicon detectors. It is fully integrated with the machine elements thanks to a large lightweight cylindrical support structure, which also eases its integration.
We will present the detailed structural elements, discuss the status of the R&D and its performance.